| Column | Type | Values |
|---|---|---|
| audioVersionDurationSec | float64 | 0 to 3.27k |
| codeBlock | string | length 3 to 77.5k |
| codeBlockCount | float64 | 0 to 389 |
| collectionId | string | length 9 to 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | length 19 |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | length 19 |
| imageCount | float64 | 0 to 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | length 19 |
| linksCount | float64 | 0 to 1.18k |
| postId | string | length 8 to 12 |
| readingTime | float64 | 0 to 99.6 |
| recommends | float64 | 0 to 42.3k |
| responsesCreatedCount | float64 | 0 to 3.08k |
| socialRecommendsCount | float64 | 0 to 3 |
| subTitle | string | length 1 to 141 |
| tagsCount | float64 | 1 to 6 |
| text | string | length 1 to 145k |
| title | string | length 1 to 200 |
| totalClapCount | float64 | 0 to 292k |
| uniqueSlug | string | length 12 to 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | length 19 |
| url | string | length 32 to 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 to 25k |
| publicationdescription | string | length 1 to 280 |
| publicationdomain | string | length 6 to 35 |
| publicationfacebookPageName | string | length 2 to 46 |
| publicationfollowerCount | float64 | (range not listed) |
| publicationname | string | length 4 to 139 |
| publicationpublicEmail | string | length 8 to 47 |
| publicationslug | string | length 3 to 50 |
| publicationtags | string | length 2 to 116 |
| publicationtwitterUsername | string | length 1 to 15 |
| tag_name | string | length 1 to 25 |
| slug | string | length 1 to 25 |
| name | string | length 1 to 25 |
| postCount | float64 | 0 to 332k |
| author | string | length 1 to 50 |
| bio | string | length 1 to 185 |
| userId | string | length 8 to 12 |
| userName | string | length 2 to 30 |
| usersFollowedByCount | float64 | 0 to 334k |
| usersFollowedCount | float64 | 0 to 85.9k |
| scrappedDate | float64 | 20.2M to 20.2M |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 to 31 |
| link | string | 230 distinct values |
| authors | string | length 2 to 392 |
| timestamp | string | length 19 to 32 |
| tags | string | length 6 to 263 |
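The column listing above only describes the schema, not how to read the data. As a quick sanity check, the sketch below loads the dataset with pandas and prints the inferred dtypes so they can be compared against the table. The file name medium_articles.csv is an assumption rather than something specified here, so substitute whatever export you actually have.

```python
# Minimal sketch: load the dataset and compare inferred dtypes with the
# schema table above. "medium_articles.csv" is an assumed file name.
import pandas as pd

df = pd.read_csv("medium_articles.csv")

# Column names and inferred dtypes; counts such as totalClapCount load as
# float64 because they contain missing values.
print(df.dtypes)

# A quick look at a few of the fields described above.
print(df[["title", "language", "wordCount", "totalClapCount"]].head())
```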
Example row 1
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: null
createdDate: 2018-03-09; createdDatetime: 2018-03-09 20:12:43; firstPublishedDate: 2018-03-09; firstPublishedDatetime: 2018-03-09 20:54:13
imageCount: 1; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-03-09; latestPublishedDatetime: 2018-03-09 20:54:13
linksCount: 2; postId: 1beef317bb2e; readingTime: 1.969811; recommends: 0; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: The basic definition of dark data is data that has been collected, but is unstructured and, therefore, not currently being used. It is data…
tagsCount: 5
text:
What is Dark Data? The basic definition of dark data is data that has been collected, but is unstructured and, therefore, not currently being used. It is data that has been continuously collected and stored, but has not been organized via categorization, labels, or any other effective organization tool. Though this massive treasure trove of unstructured data could hold valuable insights if it were to be organized and, subsequently, analyzed, it is currently in the “dark”. Potentially highly influential in the decision-making processes of a business, dark data is often waiting indefinitely to be evaluated and analyzed via data analytics. Examples of Dark Data One example of dark data is a customer call record. Potentially holding valuable information on a customer’s thoughts and geolocation, these types of records are regularly recorded and stored, but rarely organized or analyzed. Another example of dark data is a website log file. Potentially holding valuable information on visitor behavior and traffic, these logs are regularly collected, but rarely analyzed in any organized or meaningful way. Growth of Dark Data According to a 2011 IDC study, 90% of digital data is unstructured data, or dark data. The study also found that the world’s digital data is doubling every two years, significantly faster than Moore’s Law predicted. New technologies and technological advancements are paving the way for low-cost solutions to capturing and storing massive amounts of information. In 2011, the overall cost of capturing and storing large amounts of unstructured information dropped to just one-sixth of the cost seen in 2005. We are closer than ever to a cost-effective method for analyzing dark data. Issues with Dark Data Considering the increasing awareness and usage of big data and data analytics, there is now a large demand to organize dark data and make it usable. However, this type of data is often complex, very large in size, and stored in multiple locations. This makes analysis very difficult and costly. Nonetheless, the potential value of analyzing unstructured dark data is staggering. Due to the potential value, there have been many proposed big data solutions. Solutions to the Dark Data Problem machine learning, or allowing some type of artificial intelligence to develop a computer program that changes and improves based on a constant supply of new unstructured data open data, or making unstructured data available for everyone to analyze and explore software that converts dark data to graphics, or creating a program that automatically organizes data into easy-to-understand graphics All feasible solutions, these methods are now being actively explored and attempted by various companies. In the race to acquire and utilize the newest and most valuable big data, new and better technologies will continue to emerge. A developing field, the potential value and insights of large amounts of dark data remains undetermined.
title: What is Dark Data?
totalClapCount: 0; uniqueSlug: what-is-dark-data-1beef317bb2e; updatedDate: 2018-06-15; updatedDatetime: 2018-06-15 14:39:41
url: https://medium.com/s/story/what-is-dark-data-1beef317bb2e; vote: false; wordCount: 469
publicationdescription: null; publicationdomain: null; publicationfacebookPageName: null; publicationfollowerCount: null; publicationname: null; publicationpublicEmail: null; publicationslug: null; publicationtags: null; publicationtwitterUsername: null
tag_name: Big Data; slug: big-data; name: Big Data; postCount: 24,602
author: Jill Platts; bio: Developer, designer, writer, etc. Interested in all things related to computers. 💾
userId: a71aacf8c30; userName: jillplatts; usersFollowedByCount: 112; usersFollowedCount: 2; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
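Several fields in a row like the one above need light cleaning before analysis: the date and datetime columns are plain strings, and scrappedDate is numeric but looks like an integer-encoded date (20,181,104 reads as 20181104, i.e. 2018-11-04). The sketch below shows one hedged way to parse them, again assuming the medium_articles.csv file from the earlier example; the YYYYMMDD interpretation of scrappedDate is an inference from the sample values, not something stated in the schema.

```python
# Sketch of cleaning a few fields from a row like the example above.
# Assumes the same (hypothetical) medium_articles.csv as before; the
# YYYYMMDD reading of scrappedDate is an inference from the sample data.
import pandas as pd

df = pd.read_csv("medium_articles.csv")

# Datetime columns are stored as "YYYY-MM-DD HH:MM:SS" strings.
df["firstPublishedDatetime"] = pd.to_datetime(df["firstPublishedDatetime"])

# scrappedDate appears to be a date packed into a number (20181104 -> 2018-11-04).
scrapped = pd.to_numeric(
    df["scrappedDate"].astype(str).str.replace(",", "", regex=False),
    errors="coerce",
).astype("Int64")
df["scrappedDate"] = pd.to_datetime(scrapped.astype(str), format="%Y%m%d", errors="coerce")

# Publication fields are null for posts that were not published in a
# publication, so check coverage before grouping on them.
print(df["publicationname"].isna().mean())
```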
Example row 2
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: b1d1ab71f14a
createdDate: 2017-08-08; createdDatetime: 2017-08-08 15:10:55; firstPublishedDate: 2018-06-15; firstPublishedDatetime: 2018-06-15 23:41:22
imageCount: 1; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-06-17; latestPublishedDatetime: 2018-06-17 23:54:39
linksCount: 4; postId: 1bef92b9a328; readingTime: 3.411321; recommends: 8; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: PyBay2018 is only months away! We’re so excited to announce the list of speakers who will be presenting the main talks on August 17–19…
tagsCount: 5
text:
Announcing PyBay2018’s Speakers! PyBay2018 is only months away! We’re so excited to announce the list of speakers who will be presenting the main talks on August 17–19, 2018 in San Francisco, California. First of all, thank you to everyone who’s taken the time to respond to our call for proposals. Our goal was to curate a program that: Includes the latest developments and use of Python in various fields Is diverse and inclusive of everyone who’s passionate about Python and doing good things with it! With a record number of amazing proposals and a limited number of slots this year, we had such a challenging time deciding talks. We’re pretty proud of what we’ve accomplished though, with a line-up of: Seasoned speakers who’ve written popular Python libraries, CTOs of startups, developer evangelists from larger companies, a first timer who’s recently graduated from bootcamp, and everything in between Perspectives from people living in the SF Bay Area, Los Angeles, WA, MA, OH, NE, TX, and 3 other continents 25% of speakers are women Excellent coverage on Python fundamentals and libraries, ML/AI/DS, DevOps/infrastructure. Some talks on performance, hardware, iOT and the people side of engineering too! Here’s the final talk list, ordered by the speaker’s first name. Deprecating the state machine: building conversational AI with the Rasa stack — Alan Nichol Robots, biology and unsupervised model selection — Amelia Taylor Detecting business chains at scale with PySpark and machine learning — Andrew Danks Automated responses to questions about your health — Austin Powell Reproducible performance — profiling all the code, all the time, for free — Bartosz Wróblewski An import loop and a fiery reentry — Brandon Rhodes An absolute beginner’s guide to deep learning with Keras — Dr. Brian Spiering Diving into production issues at scale — Brian Weber Using JupyterLab with JupyterHub and Binder — Carol Willing Machine learning at Twitter: Twitter meets Tensorflow — Cibele Montez Bootstrapping a visual search engine — Cung Tran Airflow on Kubernetes: dynamically scaling Python-based DAG workflows — Daniel Imberman, Seth Edwards Ask Alexa: how do I create my first Alexa skill? — Darlene Wong & Varang Amin Finding Your Place in SRE and SRE in Your Place — David Blank-Edelman Using Keras & Numpy to detect voice disorders — Deborah Hanus How I learned to stop shell scripting and love the StdLib — Elaine Yeung How to read Python you didn’t write — Erin Allard Modern C extensions: why, how, and the future — Ethan Smith Tools to manage large Python codebases — Fabio Fleitas 1 + 1 = 1 or record deduplication with Python — Flávio Juvenal Clearer code at scale: static types at Zulip and Dropbox — Greg Price Docker for data scientists: simplify your workflow and avoid pitfalls — Jeff Fischer High-performance Python microservice communication — Joe Cabrera Zebras and lasers: a crash course on barcodes with Python — Jonas Neubert First steps to transition from SQL to pandas — Kasia Rachuta 2FA, WTF? 
— Kelley Robinson Finding vulnerabilities for free: the magic of static analysis — Kevin Hock Python services at scale — Lisa Roach Parse NBA statistics with Openpyxl — Lizzie Siegle Pull requests: merging good practices into your project — Luca Bezerra Amusing algorithms — Max Humber Production-ready Python applications — Michael Kehoe Serverless for data scientists — Mike Lee Williams Let robots nitpick instead of humans — Moshe Zadka Deploying Python3 application to Kubernetes using Envoy — Natalie Serebryakova How to make a multi-tenant microservice — Navin Kumar Building Google Assistant apps with Python — Paul Bailey Data science on geospatial data and climate change — Paige Bailey Building an AI-powered Twitter bot that guesses locations of pictures from pixels — Randall Hunt Why you need to know the internals of list and tuple — Ravi Chityala Django Channels and websockets in production! — Rudy Mutter Beyond accuracy: interpretability in “black-box” model settings — Sara Hooker How to instantly publish data to the internet with Datasette — Simon Willison Recent advances in deep learning and Tensorflow — Sourabh Bajaj Service testing with Apache Airflow — Zhangyuan Hu From batching to streaming: a challenging migration tale — Srivatsan Sridharan The bots are coming! Writing chatbots with Python — Wesley Chun asyncio: what’s next — Yury Selivanov Please help us congratulate our PyBay 2018 speakers Want to join in on the fun at 2018’s largest get-together of SF Bay Area Python devs? Grab your conference pass! There are a few ways to get involved in making this conference even more awesome: Ask your company to join our amazing list of sponsors. Spread the word about PyBay 2018 on Twitter with #PyBay2018 and share this post! Share it out on your other social media and mailing lists. Stay tuned for the talk schedule, pre-conference workshop lineup, diversity and inclusion drive, financial aid, volunteering opportunities, and more! We’ll be sharing more announcements here and on Twitter. Once again, tremendous gratitude to everyone who’s taken the time to submit talk proposals. We’re amazed at the breadth and the depth of your knowledge!
title: Announcing PyBay2018’s Speakers!
totalClapCount: 15; uniqueSlug: announcing-pybay2018s-speakers-1bef92b9a328; updatedDate: 2018-06-19; updatedDatetime: 2018-06-19 04:54:14
url: https://medium.com/s/story/announcing-pybay2018s-speakers-1bef92b9a328; vote: false; wordCount: 851
publicationdescription: Regional Python Conference in San Francisco Bay Area; publicationdomain: null; publicationfacebookPageName: PyBayConf; publicationfollowerCount: null; publicationname: PyBay2018; publicationpublicEmail: info@pybay.com; publicationslug: pybay; publicationtags: PYTHON,SOFTWARE ENGINEERING,SAN FRANCISCO,CONFERENCE,DATA SCIENCE; publicationtwitterUsername: py_bay
tag_name: Python; slug: python; name: Python; postCount: 20,142
author: Grace Law; bio: Tech Recruiter turned Yoga Teacher + Python Conference Organizer. Meaningful transformations every breath I take
userId: 33f278a7f841; userName: py_bay; usersFollowedByCount: 22; usersFollowedCount: 1; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
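Rows that belong to a publication, like the PyBay example above, store publicationtags as a single comma-separated string and counts such as postCount with thousands separators. Below is a small sketch of normalising both, under the same file-name assumption as the earlier examples.

```python
# Sketch: normalise publication tags and counts for rows like the PyBay
# example above. Same assumed medium_articles.csv as in the earlier sketches.
import pandas as pd

df = pd.read_csv("medium_articles.csv")

# publicationtags is one comma-separated string, e.g.
# "PYTHON,SOFTWARE ENGINEERING,SAN FRANCISCO,CONFERENCE,DATA SCIENCE".
df["publicationtags_list"] = (
    df["publicationtags"]
    .fillna("")
    .str.split(",")
    .apply(lambda tags: [t.strip() for t in tags if t.strip()])
)

# Counts such as postCount may carry thousands separators ("20,142").
df["postCount"] = pd.to_numeric(
    df["postCount"].astype(str).str.replace(",", "", regex=False),
    errors="coerce",
)

print(df[["publicationname", "publicationtags_list", "postCount"]].head())
```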
Example row 3
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: null
createdDate: 2017-10-24; createdDatetime: 2017-10-24 08:31:42; firstPublishedDate: 2017-10-24; firstPublishedDatetime: 2017-10-24 09:13:12
imageCount: 9; isSubscriptionLocked: false; language: en
latestPublishedDate: 2017-10-24; latestPublishedDatetime: 2017-10-24 09:13:12
linksCount: 45; postId: 1befa0b11b47; readingTime: 14.467925; recommends: 0; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: New original ‘TV’ shows announced by both Facebook and Snapchat
tagsCount: 5
text:
The new, new thing — September 2017 “Try to be a rainbow in someone’s cloud” M.Angelou New original ‘TV’ shows announced by both Facebook and Snapchat http://advanced-television.com/2017/09/01/discovery-to-create-shows-for-facebook-watch/ http://advanced-television.com/2017/08/25/facebook-orders-original-docu-series/ This week we finally heard some concrete examples of the sorts of original video content that Facebook and Snapchat have commissioned. Facebook announced some it was taking from Discovery, and yes, they’re the sorts of things that you can see going viral on the platform, including an ‘unsolved mysteries’ show, and a ‘celebrity animal encounters’ one, and one that is an offshoot from the Humans of New York blog. Snapchat also announced some, including a show from the ‘Barstool Sports’ site, which will visit US college campuses. Again, could be very viral and popular for the Snapchat audience http://www.mobilemarketer.com/news/barstool-sports-to-produce-college-football-series-on-snapchat/503638/ Most people don’t download apps https://www.comscore.com/Insights/Presentations-and-Whitepapers/2017/The-2017-US-Mobile-App-Report Some sobering news from comScore — 51% of the smartphone owners they surveyed in the US didn’t download a new app in the past month. It shows how hard it has become to gain attention on mobile, unless you are already one of the big players. Apps can and do work for brands, but don’t underestimate how hard it is to get them onto people’s phones! (Let me know if you want the full report) Cortana & Alexa https://www.nytimes.com/2017/08/30/technology/amazon-alexa-microsoft-cortana.html One of the most unexpected stories this week was this new alliance that will allow two voice assistants, Microsoft’s Cortana & Amazon’s Alexa, to (essentially) talk to each other. It signals a breakdown in the walled gardens that both represent, but also I think, that given Amazon uses Bing for the search queries that it can’t answer through its own products, why not collaborate? Likewise, Cortana will be able to call on Alexa to answer things that it doesn’t cover. A sign of the times — Amazon is turning a discussed shopping mall into a warehouse https://techcrunch.com/2017/08/25/an-ohio-mall-gets-a-second-life-thanks-to-amazon/ Amazon is going to demolish what was once the world’s biggest shopping mall, and use to land to house a new distribution warehouse, in an oddly symbolic real estate deal. But at the same time they are pushing into retail themselves as they signalled the closing of the purchase of Walmart by cutting its prices in many key lines — a direct signal of how they are likely to shake things up. https://www.bloomberg.com/news/articles/2017-08-28/amazon-cuts-prices-at-whole-foods-as-much-as-50-on-first-day The newest use of AR — Lenovo is bringing out a $200 lightsaber game https://vrscout.com/news/disney-lenovo-star-wars-ar-headset/ It sounds great — a headset that takes your phone, and a ‘lightsaber’ unit that ‘works’ once you look at it through the your phone, and opponents to fight against. A brilliant use of AR, and with the Star Wars brand attached, this could really fly out of the stores. Watch out for YouTube videos of people using it, and takes of broken limbs and household objects (just like when the Wii came out). Uber Movement https://techcrunch.com/2017/08/30/uber-movement-traffic-data-finally-makes-it-out-of-beta/ ‘Movement’ is Uber’s new initiative to provide intelligence to cities by sharing their own data. 
Uber must have more data on how people move around cities than almost anyone else, and like CityMapper, who are starting a bus service, they have realised that this data could be the basis of a new business or opportunity. I’m really looking forward to seeing what insights and changes come out of this. You can see it here: https://movement.uber.com/cities Thanks to Max Askwith for the link! Burger King is launching its own cryptocurrency in Russia https://www.cnbc.com/2017/08/28/burger-king-russia-cryptocurrency-whoppercoin.html As part of a marketing campaign BK is introducing its own currency in Russia, the WhopperCoin. Each time someone buys a burger they will get a WhopperCoin credited to their digital wallet, which can be used to buy burgers, once the customer has built up a high enough balance. It’s clearly a bit of a stunt, and probably not that different in practical terms to other loyalty card schemes, but it shows that cryptocurrenies are getting more mainstream, and may indicate that they are already popular with a certain section of the Russian population. Thanks to Mario Nissan for the link! Pay with a smile at KFC in China Alipay Launches 'Smile to Pay' for Commercial Use in China | Alizila.com Alipay has launched its new facial-recognition payments technology for commercial use, Ant Financial Services Group…www.alizila.com http://www.alizila.com/alipay-launches-smile-to-pay-commercial-use/ The latest use of image recognition technology is from Alibaba’s Alipay, which is allowing customers to pay with a smile. Initially just in one restaurant (possibly to allow intensive monitoring…), it’s a nice twist on the ‘pay with a selfie’ idea. Store Analytics with floor level cameras https://www.hoxtonanalytics.com/ I went to an event this week on the future of retail, and one of the companies mentioned was this analytics company that measures footfall, inc producing a dashboard and visualisations, simply by filming people’s shoes as they enter and leave stores or malls. They even say they can tell the gender of the shopper (presumably based on style and size of foot…) Sometimes the great ideas are the simple ones! Forrester research looks at teens’ attitudes to ads https://go.forrester.com/blogs/the-data-digest-instagram-and-snapchat-have-room-for-more-ads-according-to-forrester-datas-us-youth-survey/ Forrester asked 12–17s in the US about advertising on key social platforms. 39% thought that there were too many ads on YouTube, 26% on Facebook, but only 11% on both Instagram and Snapchat. Forrester suggests that this means that there is scope for more ads on the final two apps — but it may also be an indication of their success in blending in ads that do appear, or making them feel like interesting content, rather than ads (See the Kendal Jenner Instagram example below). Good news for Snapchat, at any rate! Staying on Snap, this is a great example of tailored branded content http://www.adweek.com/tv-video/vice-debuts-a-snapchat-dating-show-sponsored-by-match-featuring-fun-commentary-by-action-bronson/ Hungry Hearts is a show made for dating service Match.com by Vice, and shown exclusively on Snapchat. Vice food show presenter Action Bronson takes two singles on a date, to one of his favourite restaurants, and at the end gives his prediction of whether there will be a second date. It sounds like perfect content for Snap, and the perfect sort of vehicle to promote Match, and keep people engaged with Snapchat too. 
Apps for kids are coming to Alexa http://www.mobilemarketer.com/news/amazon-debuts-alexa-skills-for-children-from-nickelodeon-sesame-street/504145/ Kids seem to love Amazon Echo, and a natural extension of this is a new set of approved ads aimed at under 13s, approved by Amazon, from providers like Nickelodeon and Sesame Street. There have been lots of ‘Netflix for Kids’ ideas, so it is quite an obvious move to engage a new audience. However they will have to be well behaved, or Amazon and the partners will get a lot bad news stories very quickly. In related news, the BBC has announce that it is making — essentially — ‘choose your own adventure’ dramas for Alexa and Google Home. Listeners will steer the action by making choices. I can’t wait to hear one of these! https://www.theverge.com/2017/9/6/16261348/bbc-radio-plays-interactive-stories-audio-drama-google-home-amazon-alexa-echo How AI played a role in re-designing YouTube https://www.theverge.com/2017/8/30/16222850/youtube-google-brain-algorithm-video-recommendation-personalized-feed YouTube has re-designed its feed, and this article explains how AI played a part in the new recommendation engine. Google is very advanced in AI capabilities through its ownership of DeepMind (whose technology beat the Go world champion), and this is just another example of how that technology is making Google’s products work better. They must also be working on ways of making advertising targeting more intelligent — watch this space… The bad side of AI — writing fake restaurant recommendations https://www.theverge.com/2017/8/31/16232180/ai-fake-reviews-yelp-amazon If you want to see the negative potential of AI, look no further than this project from the University of Chicago, who trained their technology to produce short, user-generated restaurant reviews that were indistinguishable from real ones. Luckily this was only done as a test to see if it could be done, but this sort of tech could easily infect — and compromise — review sites. Disney puts AR Star Wars spaceships over global landmarks https://vrscout.com/news/disney-augmented-reality-star-destroyers/ Each week there seem to be fun new uses of AR in marketing, as it becomes more and more mainstream. This week it was announced that Star Destroyers have been placed above the Golden Gate Bridge and other landmarks — e.g. London Eye, Eifel Tower, Sydney Harbour Bridge — in major cities, to promote the new Star Wars film. Fans need to have the Find The Force app, then go to the precise spots, and they’ll see the ships as they look through their phones. I have a friend who had a company who could do this nearly 10 years ago — they had dinosaurs hiding behind sky scrapers — but they were clearly years ahead of the market. These look like great fun. A brilliant example of using an influencer https://www.instagram.com/p/BYs9sgOD7Bd/ Kendall Jenner posted a picture of herself wearing Ippolita Jewellery, and so far, 2m of her 83m Instagram followers have liked it. It is clearly marked as a paid partnership, and we have no idea how much money changed hands (Kendall may even be on a percentage of sales), but going back to the first story, it is a perfect example of how an ad may not be seen as an ad. Lots of blurred lines here, but it is great that the partnership is being openly declared. Thanks to Richard Wright for the link! 
Physical branded content — Bud Light glasses that send alerts http://www.mobilemarketer.com/news/bud-light-debuts-mobile-enabled-touchdown-glass-for-nfl-kickoff/504433/ Bud Light has launched these drinks glasses — that people pay $18 for — in NFL team colours — that will flash when their favourite team scores. The drawback is that they have to be synched with a special app — but for people who do buy the glasses it is way for the brand to make sure it is on the table and filled with beer every time a fan is watching a game on TV! The iPhoneX — the 10 year anniversary phone https://techcrunch.com/2017/09/12/apples-iphone-x-might-be-the-best-phone-money-can-buy/ Apple announced its new iPhone, the iPhoneX (iPhone Ten). It was kind of all things to all people — the fans loved it, the haters hated it (‘a great big nothingburger’), but it showed some improvements on the previous iPhones to a considerably higher price — starting at $1,000. The most interesting things for me are that the screen is now the phone face of the phone — no button on the front, so it’ll be even more engaging for watching, playing and creating — and the possibilities that come with the new dual cameras and augmented reality. It has lots of Augmented Reality capability — and brands are starting to experiment http://www.mobilemarketer.com/news/brands-show-off-arkit-innovations-with-slew-of-new-mobile-apps/504807/ We’ve seen lots of interesting AR examples for the past few months from developers (search ‘ARKit’) and now brands are starting to show what they have planned. Ikea has Ikea Place which has a library of 2,000 items which you can ‘place’ into your own rooms on the phone screen to see how things will look. Yes, there were versions of this over 5 years ago, but this will be far more realistic. There’s also a baseball one which will allow fans at games to focus on a player, and then see his stats. Very smart stuff, with the sort of wow factor that will make people really want to buy an ‘X’. The new Apple Watch has its own connectivity https://techcrunch.com/2017/09/12/the-apple-watch-series-3-will-transform-a-lot-of-workouts/ The 3rd iteration of the Apple Watch finally has its own LTE connectivity — meaning that it can be used on its own, and not tethered to an iPhone. So you could use it to make phone calls, or go for a run and let it have full capability to track and monitor, without having to take your phone too. It’s finally getting to the point that people hoped (expected) it would be at when it launched 2 years ago, and, given that Apple also revealed that it is now the biggest watch maker by revenue (Rolex is second), sales should keep rising. It could be the next mainstream media device. The new Apple TV is much more powerful https://techcrunch.com/2017/09/12/apple-is-bringing-live-sports-to-the-apple-tv-4k/ Last month it was revealed that Apple was preparing to spend $1bn on TV content this year, so it made sense to super-charge the Apple TV. It’s now capable of showing 4K resolution content, and also has a new ‘Sports’ tab which might be a clue that Apple is planning on bringing in more sports content, either through partnerships or buying rights. Again, and impressive update. Facebook is testing ‘Instant Videos’ https://www.engadget.com/2017/09/11/facebook-instant-videos-test/ Facebook has been testing a feature where videos will download to your phone when you are on wifi, and cache so that you can watch them when you are offline or with a patchy connection. 
Similar to Instant Articles, the video will live on the app, rather than on a publisher’s site. As Facebook puts more focus on video this is a natural move to boost viewing — it was also revealed this week that A+E Network’s first Facebook show ‘Bae or Bail’ had 24m views since the end of August. Tesla remotely enabled its cars to drive further to escape hurricane Irma https://www.theguardian.com/technology/shortcuts/2017/sep/11/tesla-hurricane-irma-battery-capacity A strange story — Owners of lower-level Teslas found that their range had been increased remotely, to help them to driver further away from the devastation. It turns out that all the cars have the same battery, but that on the cheaper models its capacity is limited to 80%, unless they pay for an upgrade. It’s a good illustration of how Tesla has power over its cars even after sale, and how mechanics will lose the ability to fix cars — but also it feels wrong, like someone selling you a house, and keeping rooms hidden from you. Alibaba partnered with New York Fashion Week http://www.alizila.com/alibaba-new-york-fashion-week-the-shows/ Alibaba was a major partner at this week’s NYFW. The partnership allowed Alibaba a deeper connection to many luxury brands, and as part of the deal two brands — Opening Ceremony and Robert Geller — will show at Alibaba’s October event See Now Buy Now, which will put them in front of up to 500m Chinese shoppers. It’s another example of how the biggest Chinese tech companies are getting more actively involved in the West. A new start-up ‘Bodega’ wants to re-invent the vending machine https://www.fastcompany.com/40466047/two-ex-googlers-want-to-make-bodegas-and-mom-and-pop-corner-stores-obsolete Two ex-Google people in the US have created a vending machine that uses visual recognition and called their company — insensitively — Bodega. They are glass-doored cabinets to live in apartment blocks etc that need to be opened with an app, and have multiple cameras to spot what the user has taken out, and then bill for. It’s a good idea — similar to Amazon’s Go Store concept — but also full of drawbacks. For example what if you had an identical, but empty box of cereal, and just replaced a full box? Would the cameras be able to tell? Looking forward to seeing if this is the next Uber or the next Juicero… Marmite has produced a gene-testing kit, to see if you born a lover or hater of Marmite https://www.marmite.co.uk/geneproject A bit of genius from Marmite: Taking inspiration from DNA testing companies like 23andme, Marmite are selling a kit (for £90) to take a sample which they will analyse, and tell you if you are genetically a lover or hater of Marmite. More affordably, there is a facial recognition part of the site where yu can upload a video of yourself eating Marmite and they will tell you whether or not you’re enjoying it… Lots is happening in Augmented Reality The Apple launches last week raised the temperature on AR, and I’ve seen lots of really interesting examples of new uses and apps this week - Last week I mentioned the Ikea app that let you place sofas, chairs etc from the Ikea catalogue virtually into your home. 
Hot on the tail of that come this new app from Houzz which claims to have 500,000 furniture and home décor elements that you can virtually add to your home https://vrscout.com/news/houzz-virtual-furniture-arkit-app/ Also in AR, Holo is basically the celebrity fan’s version https://vrscout.com/news/arkit-holo-app-8i/ This app gives you mini animated holograms of celebrities (& animals) that you can place into the world around you, and view through your phone screen. It sounds a bit crazy, and there are no real big hitters on the platform yet (no Kardashians for example), but when you remember that Kim K reportedly got millions of dollars from her mobile game and Kimoji app, this idea could be worth a lot. Instagram and Snapchat both arguably hit the mainstream because of celebrities, and having a celeb in your pocket in AR could really take off. Another use for AR — treasure hunts http://www.mobilemarketer.com/news/kate-spade-new-york-guides-paris-tours-with-help-from-ar-and-influencers/505264/ The ‘treasure hunt’ idea keeps coming back for new technologies — it’s been done with Foursquare, Snapchat filters and more — and now the luxury brand Kate Spade New York is using AR to guide people around Paris, with different AR effects for different landmarks, finally driving people to the new Kate Spade store. Also — WeChat is working on it’s own AR platform https://www.techinasia.com/wechat-first-look-ar-platform Not to be left behind WeChat is working on its own platform. The examples show virtual warriors on your table top and more. WeChat’s strength is that it’s mobile only — and it’ll be fascinating to see what their geniuses come up with (& how quickly western companies replicate the features). AI is also developing quickly — this story is a great example of how AI can be used in forecasting http://www.glossy.co/modern-media/data-is-the-what-people-are-the-why-how-ai-is-changing-trend-forecasting Fashion probably isn’t the most natural industry for AI, but it’s being used by consultancies like WGSN to analyse a huge amount of data very quickly to then allow the human forecasters to make more informed decisions. All of these things can sound a bit ‘black box’y, and you’d definitely want a way of sense-checking the insights that were coming out of deep learning, at least to start off with, but it can also make analysis far more rigorous, and cut down on human error. CityMapper and Gett have partnered to launch shared taxi commuter routes in London https://techcrunch.com/2017/09/21/citymapper-ties-with-gett-to-launch-shared-taxi-commuter-route-in-london/ One trend that I think is fascinating is how data is now actively being used to plan new services. CityMapper has already launched a bus service based on its knowledge of how people move around the city, and this partnership with Gett is surely an extension of that. It’s not just in cities either — there are also physical examples of this ‘Design from Data’ trend — products in China are increasingly being designed from insights observed online in reviews and so on. Deloitte has a new report on mobile — ‘The State of Smart’ — in the UK https://www2.deloitte.com/uk/en/pages/technology-media-and-telecommunications/articles/mobile-consumer-survey.html Lots of data on how phones are being used — 41% of people think their partner uses the phone too much — plus mobile video use and more. Nothing very surprising, I don’t think, but it’s always great to get solid numbers from a great source. 
Full 60 page report here http://www.deloitte.co.uk/mobileuk/assets/img/download/global-mobile-consumer-survey-2017_uk-cut.pdf & finally — South Park messes with your Alexa https://techcrunch.com/2017/09/15/south-park-trolled-amazon-echo-owners-in-the-best-way-possible/ Following on from Burger King’s award winning TV ad that activated people’s Google Home devices, South Park had a segment in its new series that added various random items to Amazon baskets, if the Alexa was in earshot. Yes, it’s very funny, but it also alerts us to the idea that these speakers may not be secure, and the next hackers may have far darker intentions. www.girlgeekdinnerslazio.com Girl Geek Dinners Lazio Team www.girlgeekdinnerslazio.com https://www.facebook.com/girlgeekdinnerslazio/ https://twitter.com/GGDLazio https://www.youtube.com/channel/UC79XojQIPU1W3zG8gChVH_A girlgeekdinnerslazio@gmail.com
title: The new, new thing — September 2017
totalClapCount: 0; uniqueSlug: the-new-new-thing-september-2017-1befa0b11b47; updatedDate: 2018-05-09; updatedDatetime: 2018-05-09 04:35:52
url: https://medium.com/s/story/the-new-new-thing-september-2017-1befa0b11b47; vote: false; wordCount: 3,516
publicationdescription: null; publicationdomain: null; publicationfacebookPageName: null; publicationfollowerCount: null; publicationname: null; publicationpublicEmail: null; publicationslug: null; publicationtags: null; publicationtwitterUsername: null
tag_name: Artificial Intelligence; slug: artificial-intelligence; name: Artificial Intelligence; postCount: 66,154
author: Girl Geek Dinners Lazio; bio: La nostra associazione ha l’obiettivo di creare un Team al femminile che spinga le giovani donne a ricoprire sempre più ruoli di leadership in tutti campi.
userId: 9c089e546185; userName: girlgeekdinnerslazio; usersFollowedByCount: 19; usersFollowedCount: 223; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
Example row 4
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: null
createdDate: 2018-08-24; createdDatetime: 2018-08-24 09:36:06; firstPublishedDate: 2018-08-24; firstPublishedDatetime: 2018-08-24 09:56:55
imageCount: 2; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-08-24; latestPublishedDatetime: 2018-08-24 10:07:05
linksCount: 1; postId: 1bf06b6cce0e; readingTime: 2.466352; recommends: 0; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: Facebook appear to be stepping up its efforts in the voice recognition space. With Google and Amazon paving the way and finding success…
tagsCount: 2
text:
Aloha: Facebook is Finding a Voice Facebook appear to be stepping up its efforts in the voice recognition space. With Google and Amazon paving the way and finding success with voice-activated assistants, it seems odd that Facebook has been so quiet in the space until now. Facebook’s ‘Aloha’ Voice Assistant Logo Indeed, there has been talk of a smart speaker and lots of patents and chatter about the Blue Beast working on voice products for some time now, but nothing concrete has materialised. The suggestion appears to be that Facebook is keen to get it ‘right’ — something that anyone who has found themselves cursing at a smart speaker in a frustrated effort to be understood will appreciate — working to ensure their voice recognition understands the genuine flow of actual, everyday speech, including slang and verbal stumbles. Getting that right for a truly global audience is a gargantuan task. But what will it lead to? At its most simple, another way to continue your discussions with your friends. Instead of infuriating WhatsApp voice notes or hasty Messenger video calls, you’ll be able to bark a few sentences into your phone and have them received by all in arguably the most convenient format: text. This doesn’t seem hugely exciting, but is another step for Facebook to take to become the ubiquitous way friends connect with one another. Where things become more interesting for marketers is if Facebook provide the ability to talk back. Suddenly brand interactions can take on a much more personal form — imagine choosing the voice, tone and even slang that your AI-powered brand assistant may take. Imagine being able to draft in celebrity voices to help answer any questions your audience may have about your product. Imagine being able to have authentic-feeling real dialogues with consumers, instigated by them. Facebook is also likely to be a little more open to learning from its users than perhaps Apple and Google have been. Whereas Echo and Home products are at the moment mapping behaviours and adapting to them on the fly (so if more people ask about the weather, Amazon will invest in more sophisticated responses about whether or not it’s raining outside), Facebook will be able to learn how people talk to their actual friends, not just AIs. This opens the door to some potential trend analysis, and could arguably help brands feel more authentic (or lead to some cringe-inducing copywriting decisions). Facebook’s patent for a smart speaker. Beyond this, Facebook is likely to integrate their tech into other products — most notably Oculus Rift. Yet to find its feet as more than a niche product for tech enthusiasts, pairing high-functioning voice recognition and VR opens up additional new avenues for interesting experiences. Indeed, one of the biggest challenges for VR at the moment is finding a comfortable way for users to interact with the worlds that can be created. And, of course, don’t forget the inevitable smart speaker in your living room… Overall, this is a small morsel of news with potentially larger implications, and is also another sign that voice control, in many different forms, is here to stay. Andy Cooper, Creative Content Strategist, OMD Create EMEA OMD Create EMEA are OMD EMEA’s dedicated content team. If you’d like to learn more or have any questions, please don’t hesitate to get in touch.
title: Aloha: Facebook is Finding a Voice
totalClapCount: 0; uniqueSlug: aloha-facebook-is-finding-a-voice-1bf06b6cce0e; updatedDate: 2018-08-24; updatedDatetime: 2018-08-24 10:07:05
url: https://medium.com/s/story/aloha-facebook-is-finding-a-voice-1bf06b6cce0e; vote: false; wordCount: 552
publicationdescription: null; publicationdomain: null; publicationfacebookPageName: null; publicationfollowerCount: null; publicationname: null; publicationpublicEmail: null; publicationslug: null; publicationtags: null; publicationtwitterUsername: null
tag_name: Artificial Intelligence; slug: artificial-intelligence; name: Artificial Intelligence; postCount: 66,154
author: OMD Create EMEA; bio: OMD Create EMEA is OMD EMEA's dedicated content team. We're chatty people. Get in touch: andrew.cooper@omd.com
userId: 15d256398773; userName: andrew.cooper; usersFollowedByCount: 3; usersFollowedCount: 1; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
Example row 5
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: null
createdDate: 2018-03-31; createdDatetime: 2018-03-31 08:20:00; firstPublishedDate: 2018-03-31; firstPublishedDatetime: 2018-03-31 08:28:15
imageCount: 0; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-06-25; latestPublishedDatetime: 2018-06-25 23:28:28
linksCount: 2; postId: 1bf15cf6155e; readingTime: 1.562264; recommends: 1; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: This post is a part of Jeff’s 12-month, accelerated learning project called “Month to Master.” For March, he is downloading the ability to…
tagsCount: 1
text:
M2M Day 89 — Struggles, Frustrations This post is a part of Jeff’s 12-month, accelerated learning project called “Month to Master.” For March, he is downloading the ability to build an AI. When people think of “programmers”, they think of hunched-back nerd, curled over his computer, furiously banging away at the keyboard, pounding out lines of code. In reality, programmers, are spending A LOT of time debugging their code. Today, I got a visceral experience of this. Today, I successfully used transfer-learning to train my new dataset of images. The accuracy on predictions was about 73%, which is SOLID consider I only have 3000 images and the type of classification problem is trying to find my personal attractiveness preferences. However, the struggle, was that training this model takes about 6–7 hours. This means, the feedback cycles on writing code is SLOW. Yesterday night, after successfully training my model, I needed to pull a model weights & model json file. After attempting to run these in my bot, an error popped up, saying that the weight file was invalid. I thought I didn’t pull the correct files, so I thought it would make sense to re-run my model. This meant, that after work, I ran my model at around 7pm and waited till about 12 midnight for the training to be finished. Only to find, that the error WASN’T my weight file. The error occured because I had an out of date version of keras on my local computer and the model could only be run on the most recent version of keras. This meant, I didn’t even need to wait 5 hours to re-train my model. This was frustrating, since I felt like I waited for nothing( I was watching Star Wars while I waited). Regardless, it’s a really small error that caused me to inefficiently use a ton of time. This is part of the struggle with coding. Anyways, I got the bot to run successfully on my new model so I’m still happy. The challenge is a success! Tomorrow, I’ll just need to clean up my files and tweak a couple functions and then I can release my program to the public! Read the next post. Jeff Li is saving the world by matrix-downloading skills into his brain. He is …….“The SuperLearner. ” If you love me and this project, follow this medium account. Hate me, you should still follow this medium account. One option here……
title: M2M Day 89 — Struggles, Frustrations
totalClapCount: 1; uniqueSlug: m2m-day-88-struggles-frustrations-1bf15cf6155e; updatedDate: 2018-06-25; updatedDatetime: 2018-06-25 23:28:28
url: https://medium.com/s/story/m2m-day-88-struggles-frustrations-1bf15cf6155e; vote: false; wordCount: 414
publicationdescription: null; publicationdomain: null; publicationfacebookPageName: null; publicationfollowerCount: null; publicationname: null; publicationpublicEmail: null; publicationslug: null; publicationtags: null; publicationtwitterUsername: null
tag_name: Machine Learning; slug: machine-learning; name: Machine Learning; postCount: 51,320
author: Jeffrey Li; bio: Accelerated Learning Fanatic | Data Scientist | Educator
userId: 3899f1e86899; userName: dj.jeffmli; usersFollowedByCount: 275; usersFollowedCount: 143; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
Example row 6
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: null
createdDate: 2018-02-09; createdDatetime: 2018-02-09 08:17:14; firstPublishedDate: 2018-02-09; firstPublishedDatetime: 2018-02-09 08:22:11
imageCount: 1; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-02-09; latestPublishedDatetime: 2018-02-09 08:22:11
linksCount: 1; postId: 1bf3bc84285f; readingTime: 0.343396; recommends: 0; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: Create Chatbot for your business. It is a computer program that mimics human conversations in its natural format using artificial…
tagsCount: 3
text:
The Lifecycle of Chatbot — Apptunix Create Chatbot for your business. It is a computer program that mimics human conversations in its natural format using artificial intelligence techniques. Visit and avail the services now: +1 415 670 9326
title: The Lifecycle of Chatbot — Apptunix
totalClapCount: 0; uniqueSlug: the-lifecycle-of-chatbot-apptunix-1bf3bc84285f; updatedDate: 2018-02-09; updatedDatetime: 2018-02-09 08:22:12
url: https://medium.com/s/story/the-lifecycle-of-chatbot-apptunix-1bf3bc84285f; vote: false; wordCount: 38
publicationdescription: null; publicationdomain: null; publicationfacebookPageName: null; publicationfollowerCount: null; publicationname: null; publicationpublicEmail: null; publicationslug: null; publicationtags: null; publicationtwitterUsername: null
tag_name: Chatbot Bandwagon; slug: chatbot-bandwagon; name: Chatbot Bandwagon; postCount: 0
author: Dave Fletcher; bio: I am Dave Fletcher working with Apptunix as iOS developer with 3.5 years of experience in writing clean and maintainable source code.
userId: 812a633a1010; userName: fletcherdave810; usersFollowedByCount: 1; usersFollowedCount: 4; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
Example row 7
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: 339ca5d6df1
createdDate: 2017-12-18; createdDatetime: 2017-12-18 09:42:02; firstPublishedDate: 2017-12-18; firstPublishedDatetime: 2017-12-18 10:03:01
imageCount: 1; isSubscriptionLocked: false; language: it
latestPublishedDate: 2017-12-18; latestPublishedDatetime: 2017-12-18 10:03:01
linksCount: 0; postId: 1bf4cb375c0b; readingTime: 2.418868; recommends: 0; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: Google aprirà a Pechino il suo primo centro di ricerca asiatico sull’intelligenza artificiale. Gli utenti cinesi non potranno utilizzare i…
tagsCount: 3
text:
Dragonomics — Google ci riprova in Cina. Con l’Ai Google aprirà a Pechino il suo primo centro di ricerca asiatico sull’intelligenza artificiale. Gli utenti cinesi non potranno utilizzare i servizi del colosso statunitense senza l’ausilio di strumenti in grado di aggirare la censura, o comunque con grande difficoltà, tuttavia questo non impedisce alla multinazionale tecnologica di lavorare su altre linee di business al di là della Grande Muraglia. Tanto più che in tema di intelligenza artificiale la Repubblica popolare sta facendo da avanguardia, con il lavoro dei cosiddetti Bat, cioè la triade formata da Baidu, Alibaba e Tencent, chiamate dal governo di Pechino a far parte del team nazionale scelto dal ministero della Tecnologia per portare avanti la ricerca in questo campo, ognuno dando il proprio contributo in un determinato settore. Baidu, per anni conosciuta come la Google cinese, si concentrerà sullo studio della guida autonoma. La divisione cloud computing del colosso dell’e-commerce fondato da Jack Ma si occuperà di soluzioni per migliorare la qualità della vita nelle città, mentre Tencent, conosciuto in Italia per il sistema di messaggistica WeChat, studierà soluzioni per le diagnosi mediche. Il piano di sviluppo presentato dal governo lo scorso luglio prevede infatti di far diventare il Paese un centro di prima importanza nello sviluppo dell’intelligenza artificiale entro il 2030. Sono trascorsi sette anni dalla decisione di Google di chiudere il grosso del proprio business in Cina, schiacciata dal predominio di Baidu nel mercato dei motori di ricerca e pressata dalla censura sui contenuti. La società di Mountain View non ha comunque mai abbandonato del tutto il Paese, come dimostra la diffusione del sistema operativo Android e il business della pubblicità. Il nuovo centro è l’ultimo in ordine di tempo, dopo l’apertura di altri simili a New York, Toronto, Londra e Zurigo. Il laboratorio sarà guidato da Fei Fei Li, già a capo dello Stanford Artificial Intelligence Lab e dello Stanford Vision Lab, con il ruolo di responsabile scientifico. A lui si affiancherà il responsabile per la ricerca e lo sviluppo di Google Cloud Ai, Jia Li. In totale gli uffici dovrebbero contare circa un centinaio di dipendenti. «Credo che l’intelligenza artificiale e i suoi vantaggi non abbiano confini. E che si tratti di una scoperta nella Silicon Valley, a Pechino o altrove, l’intelligenza artificiale ha il potenziale per migliorare la vita delle persone in tutto il mondo», ha spiegato il responsabile del centro. L’apertura del nuovo ufficio è di fatto l’ennesimo passo verso il riavvicinamento tra Pechino e Mountainview. Non è un caso che la scorsa settimana il ceo di Google, Sundar Pichai, abbia partecipato all’annuale conferenza organizzata dall’Amministrazione generale per il cyberspazio, peraltro in compagnia di Tim Cook, ceo di Apple, e del vicepresidente di Facebook Vaughan Smith, a rimarcare l’importanza della Cina nelle strategie dei big Usa. Sulla stampa statunitense la società che fa capo alla holding Alphabet non manca inoltre di sottolineare come attualmente il numero di impiegati attualmente in Cina non di discosti troppo da quello del 2010. La grande G fa quindi in direzione inversa il percorso delle controparti cinesi: andare nella Silicon Valley per potenziare la propria capacità di ricerca nel campo dell’intelligenza artificiale. Nella campagna di distensione con Pechino, il colosso Usa fa inoltre leva sulle risorse. 
«Il Google AI China Center», scrive Fei Fei Li sul blog aziendale, «sosterrà la comunità di ricerca e sponsorizzerà conferenze e workshop lavorando a stretto contatto con la vibrante realtà dell’intelligenza artificiale cinese». Parole che potrebbero essere ben accolte dalla dirigenza comunista. [Scritto per MF-Milano Finanza]
title: Dragonomics — Google ci riprova in Cina. Con l’Ai
totalClapCount: 0; uniqueSlug: dragonomics-google-ci-riprova-in-cina-con-lai-1bf4cb375c0b; updatedDate: 2017-12-18; updatedDatetime: 2017-12-18 10:03:03
url: https://medium.com/s/story/dragonomics-google-ci-riprova-in-cina-con-lai-1bf4cb375c0b; vote: false; wordCount: 588
publicationdescription: China Files è un collettivo di giornalisti specializzati in affari asiatici. Copriamo l'attualità di Cina, India, Asia Centrale, Giappone e Coree, fornendo news, approfondimenti e reportage scritti e audio. Potete leggerci anche su www.china-files.com.
publicationdomain: null; publicationfacebookPageName: ChinaFilesIT; publicationfollowerCount: null; publicationname: China Files; publicationpublicEmail: info@china-files.com; publicationslug: china-files; publicationtags: CHINA,NEWS,ANALYSIS,EAST ASIA,CHINESE CULTURE; publicationtwitterUsername: chinafiles
tag_name: Google; slug: google; name: Google; postCount: 35,754
author: Andrea Pira; bio: Tweeting about China and Asia. http://t.co/KtjGW2JG
userId: a64cf56c7713; userName: andreapira; usersFollowedByCount: 222; usersFollowedCount: 469; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
Example row 8
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: null
createdDate: 2018-08-14; createdDatetime: 2018-08-14 07:37:25; firstPublishedDate: 2018-08-14; firstPublishedDatetime: 2018-08-14 08:11:54
imageCount: 1; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-08-14; latestPublishedDatetime: 2018-08-14 08:11:54
linksCount: 2; postId: 1bf653fbc8f8; readingTime: 1.6; recommends: 0; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: Data is the “life blood” of an organization, for as it streams between frameworks, databases, procedures, and divisions, it conveys with it…
tagsCount: 5
text:
HIRE ONLINE SUPPORT Data is the “life blood” of an organization, for as it streams between frameworks, databases, procedures, and divisions, it conveys with it the capacity to make the organization more quick witted and more viable. The most elevated performing organizations give careful consideration to the data resource, not as a reconsideration but instead as a center piece of characterizing, outlining, and building their frameworks and databases. The official definition given by DAMA International, the expert organization for those in the information administration calling, is: “Information Resource Management is the advancement and execution of designs, arrangements, practices and methods that appropriately deal with the full information lifecycle requirements of an endeavor.” Data is basic to settling on all around educated choices that guide and measure the accomplishment of the organizational methodology. For instance, an organization may break down data to decide the ideal authorization activities that diminish rebellious conduct. Likewise, data is additionally at the core of the business forms. An organization may upgrade a procedure to get false exercises by including recorded hazard related data. After some time, this kind of process change can result in material investment funds. Indeed, even a solitary execution of a business procedure can convert into considerable advantages, for example, utilizing data examples to stop a psychological oppressor at an outskirt or sifting a digital assault. How an organization utilizes and deals with the data is similarly as vital as the instruments used to carry it into the earth. Having the correct data of fitting quality empowers the organization to perform forms well and to figure out which forms have the best effect. These key destinations use data by changing it into valuable data. The most noteworthy performing organizations guarantee that their data resources are open to the procedures and people who require it, are of adequate quality and auspiciousness, and are secured against abuse and manhandle. Effectively utilizing data and data resources does not occur independent from anyone else; it requires proactive data administration by applying particular controls, strategies, and skills for the duration of the life of the data. Like frameworks, data experiences a life cycle. Hire Developers India provides complete business solutions. Hire Online Support Experts now. Click to Hire now
title: HIRE ONLINE SUPPORT
totalClapCount: 0; uniqueSlug: hire-online-support-1bf653fbc8f8; updatedDate: 2018-08-14; updatedDatetime: 2018-08-14 08:11:55
url: https://medium.com/s/story/hire-online-support-1bf653fbc8f8; vote: false; wordCount: 371
publicationdescription: null; publicationdomain: null; publicationfacebookPageName: null; publicationfollowerCount: null; publicationname: null; publicationpublicEmail: null; publicationslug: null; publicationtags: null; publicationtwitterUsername: null
tag_name: Data Science; slug: data-science; name: Data Science; postCount: 33,617
author: Robort Bradley; bio: Web and Mobile App Development Service Provider.
userId: c7a5f4855653; userName: harrris.bradley; usersFollowedByCount: 8; usersFollowedCount: 31; scrappedDate: 20,181,104
claps: null; reading_time: null; link: null; authors: null; timestamp: null; tags: null
Example row 9
audioVersionDurationSec: 0; codeBlock: null; codeBlockCount: 0; collectionId: 17723ea897cf
createdDate: 2018-03-15; createdDatetime: 2018-03-15 11:57:56; firstPublishedDate: 2018-05-11; firstPublishedDatetime: 2018-05-11 15:41:34
imageCount: 8; isSubscriptionLocked: false; language: en
latestPublishedDate: 2018-05-11; latestPublishedDatetime: 2018-05-11 15:41:34
linksCount: 3; postId: 1bf685143de1; readingTime: 6.106918; recommends: 1; responsesCreatedCount: 0; socialRecommendsCount: 0
subTitle: We extracted and compiled, all-round valuable information regarding the best competing mobile companies and trends that show what mobile…
tagsCount: 4
text:
Which Mobile Phone Dominated the News? We extracted and compiled, all-round valuable information regarding the best competing mobile companies and trends that show what mobile model dominated the week, with factual positive comments and media coverage. These articles have been shared more than 2.521 times on social networks and generated an approximate number of 3,277,300 impressions. — As of March 16th. FightHoax — Horserace Animated Visualization — https://public.flourish.studio/visualisation/27003/ Before we dive into the data and its meaning, it is important to understand a crucial point of the FightHoax algorithm. FightHoax takes the various elements that make up a story, such as who is behind the news story, the history and reputation of the publication, the language used and the different perspectives of the topic into consideration, in order to create a consistent overall view. The FightHoax technology follows a similar pattern to what would you do as a person, as in, cross-and-fact-check what you’re reading, in the most objective manner possible. 1) The Emotions This adjoining aspect of the analysis concerns the overall sentiment observed in the digital sphere, which might include parts, the whole or none of the affecting pillars. The emotional chart depicted below verifies our initial observations about the topic’s overall perception bias and the impact of public opinion. FightHoax — a strong sad pulse was observed FightHoax’s technology tactically considers the content of the news piece of public information by contrasting the data with itself and observing its behaviour in various parts of the digital sphere, all through the scope of the sentiment. We can note that the majority of the articles pertain to a strong emotional pulse, by attesting to a hot topic and / or by expressing it with corresponding language and content. We ascertain that the week was dominated with primary sad emotions. Press announcements such as the viral rumoured and much expected One Plus 6 technical and design specifications, had the journalists concerned and deeply disappointed. On the other hand, news pieces that covered the new software upgrades of Google Pixel and Google Lens, had a joyful essence of language used. Deep and emotional article language means more and more social buzz from the users. 2) Factual Reporting When it comes to the facts, there are two correlating aspects that we need to consider: the source itself and the linguistic content we’re reading. It is important to decipher the facts presented to us by understanding more about where we are reading and what we are reading. FightHoax — the majority of sources have no factual history Factual reporting history of the news pieces, covering the latest mobile scene moves, appears to be incoherent or missing entirely. FightHoax — 23 out of 31 articles lack the necessary factual information The same pattern is observed on the content analysis, as either the content is not satisfactory in length or does not offer the necessary data and facts when compared with the topic covered and the correlation matrix. The majority of the news articles have a really short reportage result since the base of their factual reporting is rumors and leaks, such as the much rumored top-notch design of the new One Plus 6 mobile phone. It is important to consider that the factual aspect of the analysis is a rather dynamic one. Human error and response are also key parts of the overall view. 
Simply put, even a source with the most reliable track record can adhere to factual mistakes and, thus, understanding how it responds to them is of crucial importance to our factual reporting. You just can’t objectively verify the facts on 23 out of 31 articles. 3) The Title Acommon phenomenon, which originates on the ability to use your emotions as a reader, is clickbaiting. This is a rather straightforward concept to analyse, albeit an important one for our general understanding. Titles tend to easily capture the reader’s curiosity and emotional pulse, therefore offering a valuable aspect to understanding what kind of content we are reading. Sometimes, by effect of the title, impressions and title interactions — such as a post on a social network — are higher than actual link clicks. By extension of our emotional pulse, we can observe a higher frequency of clickbait titles on the 31 articles. Clickbait seems to contradict that, manipulating emotions with cheap gimmicks and little substance, meant as a kind of digital junk food that satisfies a temporary craving but never truly satisfy. Titles of this nature are usually charged with even more emotions and tend to obfuscate the contents of the news piece in both interest and length, so as to lead the user to interact more easily and therefore increase the chances of generating impressions by first-glance. We discover the use of this technique in examples like “LG phones are about to change as it adopts a gutsy new strategy” and “New dumb phone offers alternative to smartphone”. 15 out of the 31 articles analysed have tried to caught your attention by provided no or little knowledge compared to the article. 4) The Language By combining the content, its length and its factual level with the quality of the language written and used, we can verify and build— to a certain degree — the persona profile of the author. Most of the times, quality content signals that, time, care and proper education of the author, have been capitalised in order to craft the article. Naturally, however, we need to regard human error and its correlation with both publishing time and deadlines and, of course, the real impact of proofreading which is, more often than not, based on human abilities. Simply put, lack of total perfection is usually a verification of genuine content. FightHoax performs a thorough scan of the linguistic content based on established scientific methods and algorithms such as the Coleman-Liau Index. FightHoax — 25 out of the 31 articles are grammatically and syntactically, great We have identified buzzwords such as “AI” and “specs”, being used as regular- everyday words. Numerous buzzwords, in any content piece, narrow down the audience since the majority of the readers are traditional older people. It is also easier to proofread a news piece that is short when composing it, which leads directly to a clash with our analysis of factual coverage. 25 out of 31 articles you read are linguistically optimal. 5) Author Digital Footprint Understanding, at a personal level, who wrote the piece you are reading is a crucial step towards creating an overall objective view of the information. News authors with a more consistent digital footprint are generally more reliable due to their expertise and extensive coverage of specific subjects, whereas authors with no digital information create a “hole” in our objective analysis. 
FightHoax — 8 out of 31 authors have no credible digital footprint FightHoax's algorithm matches the available digital data in a matrix correlated with the topic covered and its content. Hence, it is noted that a minority of our 31 articles have unknown authors. This is the result of either a lack of a digital footprint or the publication carbon-copying (plagiarising) the article without crediting the original author. The absence of author information, more often than not, creates a vacuum effect in the correlation matrix, because the content is directly affected by the person composing it and we are thus unable to account for the human effect. In plain terms, for 8 out of 31 authors, you just don't know who they are. Coming to An Understanding It is a clear outcome of our sentiment analysis visualization that Google Pixel dominated the week, with mostly favourable and supportive media coverage from the press platforms. On the other hand, in the last spots, Panasonic, Light Phone and Essential Phone caught the eyes of readers by creating media buzz, but scored substantially negative critiques and comments. In the middle positions, runners-up like the freshly announced Huawei P20 and Y9 mobile phones generated a noticeable number of headlines, yet ones filled with neutral critiques and food-for-thought comments for the audience to consider before buying the new models. FightHoax — Horserace Animated Visualization — https://public.flourish.studio/visualisation/27003/ FightHoax empowers news analysis and data journalism with Artificial Intelligence and Big Data. For any inquiries, please e-mail us at info@fighthoax.com. Learn more at www.fighthoax.com
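The language analysis above cites the Coleman-Liau Index. FightHoax does not publish its implementation, but the standard formula (0.0588 * L - 0.296 * S - 15.8, where L is the average number of letters per 100 words and S the average number of sentences per 100 words) can be sketched in a few lines of Python; the coleman_liau_index helper and the sample sentence below are illustrative only, not FightHoax code.

import re

def coleman_liau_index(text):
    # Approximate Coleman-Liau readability score for a piece of English text.
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(ch.isalpha() for ch in text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))  # rough sentence count
    n_words = max(1, len(words))
    L = letters / n_words * 100    # average letters per 100 words
    S = sentences / n_words * 100  # average sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

sample = ("LG phones are about to change as the company adopts a bold new strategy. "
          "The redesign was announced this week.")
print(round(coleman_liau_index(sample), 1))

The score corresponds roughly to a US school grade level, which is one conventional way to compare how linguistically demanding different articles are.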
Which Mobile Phone Dominated the News?
1
which-mobile-phone-dominated-the-news-1bf685143de1
2018-05-16
2018-05-16 06:24:19
https://medium.com/s/story/which-mobile-phone-dominated-the-news-1bf685143de1
false
1,318
FightHoax empowers news analysis and data journalism.
null
FightHoax
null
FightHoax
info@fighthoax.com
fighthoax
NEWS,DATA ANALYSIS,DATA VISUALIZATION,MACHINE LEARNING,NATURAL LANGUAGE PROCESS
FightHoax
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Valentinos Tzekas
CEO of FightHoax.com
107f9ffb58ef
valentinostzekas
62
69
20,181,104
null
null
null
null
null
null
0
null
0
abcf1c8db116
2018-06-13
2018-06-13 16:20:26
2018-06-13
2018-06-13 16:30:07
1
false
en
2018-06-21
2018-06-21 12:04:06
11
1bf73718681c
4.354717
1
0
0
Is voice the next frontier in UX design? Market leaders will determine the future of the VA device market. #breegaperspectives
5
Voice UI on the Rise: Can Today's Leading Voice Assistant Devices fight obsolescence? #breegaperspectives Voice Command: the Frontier in Consumer UI Technology Most consumers came into contact with the first wave of voice assistants in early 2011, when Apple's iPhone released Siri, a voice-activated system able to search the web, play your favorite songs or tell you the weather. Ironically, today's leading players in voice search and AI assistants, including Apple itself, did not predict the market size and growth we see today. In 2017 alone, demand for voice assistant devices increased from 7 million units in 2016 to 25 million units (source: Mediapost), with about 33 million devices in circulation today. Moreover, the use of voice as a way to communicate or interact with technology has been on the rise. According to Google, 41% of adults "talk to their phones", and the biometrics market claims voice recognition systems will grow into a $600 million business between 2015 and 2019 (source: Technavio). The growth in the use of voice as the go-to UI for technology systems, combined with the rise of AI, is a major driver of this market's growth, opening its potential to a wide scope of applications — not just traditional web search, but also interaction with smart homes, home appliances (IoT) and automobiles (source: UnfoldLabs). According to our analysts, the growth in demand for voice-based smart systems will nonetheless be a challenging road for any single player, new entrant or established, and penetrating the global market will be contingent on addressing some very real challenges tied to local needs, such as language and everyday adoption. VA Device Market 2018 infographic — infograph.venngage.com The Leaders If we look at today's leading players, such as Amazon Echo, Google Home, Cortana and Apple Home (Siri), and consider that 93% of the voice assistant device market is based in North America, several factors become obvious. Firstly, both Amazon and Google (respectively first (70%) and second (24%) in US market share) had, before the introduction of their voice assistant devices, platforms focused on text-search algorithms for web search and shopping. Amazon and Google alike — to the disadvantage of Siri and Apple Home — use these "data sets" or "knowledge graphs" as the backbone of their AI software. These knowledge graphs, acquired through the millions of searches performed on Amazon and Google every day, have allowed these producers to make significant advances in voice-based commands. A 2017 semantics study undertaken by Stone Temple, which asked virtual assistants 5,000 general-knowledge questions, reported that Google Assistant was 91 percent correct, compared to Amazon's Alexa at 87 percent and Siri at 62 percent (source: Wired). However, Amazon's leading position in the market points to other factors critical to success: its ability to connect to other apps and devices and to build its skill set beyond its immediate area of expertise (namely shopping for Amazon and web search for Google). Amazon's Alexa currently boasts about 25,000 skills crowdsourced through its developer portal, while Google Assistant possesses 724, Microsoft Cortana 174, and Siri only 9 use cases with a very closed developer portal — the opposite of what Apple did with iPhone apps and its respective developer network.
Aside from the developer network and expanding its "knowledge base", Amazon has focused on building out Alexa's OEM network, investing in the assistant's interconnection with home appliances, from refrigerators to washing machines, and automobiles, such as with Ford Sync technology. These expansions have opened up the possibilities for Amazon Echo and other Amazon VA devices. Google Assistant, Cortana and Siri have more limited OEM networks, with integrations that are smaller in scope and closer to each company's core technology — Android, Windows 10 and iOS respectively. Whereas Alexa has focused on conquering all, Cortana and Siri seem to position themselves in specific competencies, such as document management/business context and music respectively. Leading Players Leave Much Room for New Entrants Despite the strides made by Amazon Echo and Google Home, today's leading voice assistant devices have a long way to go. Beyond the obvious limitations on conquering the potential global market share (restrictions on connectivity reduce the market opportunity), there are three main challenges all players face, according to Mr. Paulus: 1. Regional needs (language, dialects, etc.) 2. Adoption and return use 3. Specialization Despite Amazon's leading position in the North American market, Chinese technology companies such as Beijing's LingLong, and Korea's Samsung, which introduced Bixby, have the opportunity today to provide technology that better addresses local needs, with all their complexity. The second and third points are more difficult to address, and are challenges any player in this market faces today. Consumers today purchase voice assistants as a novelty, with return use dropping to 30% and then 20% after a 60-day period. This represents a significant drop, not only threatening obsolescence but also reducing the amount of user feedback these devices provide to their makers to improve the user experience. According to a study on the Amazon Echo, the highest return use was limited to playing a song (34%), followed by controlling smart lights (31%) (source: UnFoldLabs). The longevity of these devices in the market depends on how they "break out of a limited use" and how the added value of the voice assistant applies to the "everyday life of its users and not just the occasional trivia game at the dinner table". Devices like Amazon Echo and Google Home face this challenge but also possess the opportunity to provide a wide scope of value as they continue to integrate with other technology systems such as smart homes and cars. Large-scope virtual assistants will also have to face off with specialized assistants such as Cortana and Siri, while specialized players will have to find ways either to break out of siloed contexts or to keep their devices "sticky" in other ways and make them evolve in the long run. And lastly, each maker's ability to integrate the learnings from its user base and evolve the user experience to become more useful, accurate and human-like will have lasting effects on customer loyalty and return use. It will be interesting to see how these players, and new ones just arriving, handle these challenges. The trend is that the market will continue to see these players fight for OEM expansion and key partnerships that will propel them into other markets and expand compatibility among devices and apps.
Voice UI on the Rise: Can Today's Leading Voice Assistant Devices fight obsolescence?
7
https-medium-com-breega-voice-ui-on-the-rise-1bf73718681c
2018-06-21
2018-06-21 12:04:07
https://medium.com/s/story/https-medium-com-breega-voice-ui-on-the-rise-1bf73718681c
false
1,101
Our vision, uncut. Discover our passions, ideas, Breega out and about, and news about our startups.
null
null
null
breega uncut
contact@breega.com
breega
STARTUP NEWS,VENTURE CAPITAL,PARTNER PERSPECTIVES,STARTUP LESSONS,STARTUP MARKETING
BreegaVC
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Breega
Built by entrepreneurs for entrepreneurs, Breega is a VC fund focused on launching technology startups poised to change the world.
8939ba16a8fd
BreegaVC
310
121
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-22
2018-04-22 09:09:55
2018-04-22
2018-04-22 09:57:25
2
false
en
2018-04-29
2018-04-29 16:56:22
19
1bf9611032ee
7.719182
3
0
0
This post is about a few of the many options that are reasonably available to any of us for planning our strategies for catching and riding…
4
AI Has Been Here For Some Time Now — It’s Just Not Quite Evenly Distributed As Of Yet This post is about a few of the many options that are reasonably available to any of us for planning our strategies for catching and riding the Artificial Intelligence wave that is invading virtually all aspects of our lives and coming on relatively quickly…but we do have enough time…really. I happen to be from southern California, so my ocean wave analogy seems quite natural. Depending on weather conditions — or the speed with which the AI industry continues to make incredible advancements — events and outcomes can be very different. Assumptions Matter. The quote, “The…basic assumptions about reality determine what is focused on,” is from Peter Drucker‘s “Management Challenges for the 21st Century,” and our understanding, our basic assumptions about reality today relative to AI — and all that entails — is where we are and collectively how we are learning to focus. Many of you, like me, will remember IBM’s Deep Blue chess-playing computer defeating the reigning world chess champion Garry Kasparov in 1997 — yes, man vs. machine. And it wasn’t just the fact that Deep Blue won but, in part, how it won: “…But on the 36th move, the computer did something that shook Kasparov to his bones. In a situation where virtually, every top-level chess program would have attacked Kasparov’s exposed queen, Deep Blue made a much subtler and ultimately more effective move that shattered Kasparov’s image of what a computer was capable of doing. It seemed to Kasparov — and frankly, to a lot of observers as well — that Deep Blue had suddenly stopped playing like a computer (by resisting the catnip of the queen attack) and instead adopted a strategy that only the wisest human master might attempt…”[1] So, we have that to take in: a computer that can master us in one of our games — played and engineered over more than 1,500 years — but with vastly more resources always available in an instant, will invariably win. And today, in 2018, any of us can buy a computer chess game that can beat any grandmaster. That question again: Is the AI in our future coming as Scenario 1, 2, 3 or 4? In Scenario 1 we look out — smooth water as far as you can see — and we imagine the potential of the possibilities and how we think it will all conclude. Seemingly no hurry. Everything is quite calm, and there is no need to hurry or do any real planning. We have time.
But in Scenario 2 we see movement — efficient information selection and display: Feedly, GetPocket and my favorite, Anders Pink, actually a quite sophisticated content curation platform. It was using Anders Pink, and understanding the process, that provided the very realization that got me energized about curation and how it is intrinsically tied to continuous learning: “How am I going to even begin to keep up with the rapidly changing technology in virtually every domain of business, every industry and every technology — and to be able to talk about and work with it? Such knowledge translates into a more aware — and employed — knowledge worker, which is virtually most of us today.” So, Scenario 3 is on the scene, but we are ramping up with plenty of insights, brain power, and sheer brilliance from the MIT Sloan School professors Erik Brynjolfsson and Andrew McAfee — with at least a modicum of insights from Peter Drucker, Peter Senge and others — in their two recent books, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies and Machine, Platform, Crowd: Harnessing Our Digital Future. I read their books, watched several video presentations (TED and Talks at Google) and realized a variety of possibilities are available to avoid the downsides of Scenarios 3 and 4 — strong, random waves hitting you from seemingly every angle — and make our way, that is, if we are actively planning now. Our savior, you ask? Well, it has the decidedly undramatic moniker of continuous learning — or lifelong learning, organizational learning, and other variations on the theme. And of course, Drucker was there before anyone with his whole notion of the knowledge society: “For knowledge, by its very definition, makes itself obsolete every few years,” Peter Drucker asserted. “In the knowledge society,” he added, “learning is lifelong and does not end with graduation.” Drucker coined the term “knowledge worker” in 1959 in his book, The Landmarks of Tomorrow: “…The term “knowledge management” has that PC era smell. But almost 20 years before the founding of Microsoft, Drucker coined the term “knowledge worker” to describe the growing cadre of employees who labored with their brains rather than their hands. Drucker explained that knowledge workers require a new style of management that treats them more as volunteers or partners than as subordinates. He predicted that the ability of leaders to motivate these founts of productivity — “the most valuable asset of a 21st-century institution” — would become a cornerstone of competitive advantage…”[2] The future possibilities, given the magnitude of the impact of AI on all aspects of our lives, are incalculable — as in positive — and we can have a much brighter future. That is, if we are ready now, we begin adapting to a continuous learning way of life — rather painless actually — and we help transform our workplaces into learning organizations. That has evolved into my taking a slew of LinkedIn Learning courses in somewhat arcane and geeky subjects: DevOps Foundations, Cloud Architecture: Core Concepts, Learning Data Science: Understanding the Basics, Learning Cloud Computing: Serverless Computing, Artificial Intelligence Foundations: Thinking Machines, Creating A Culture of Learning, and more.
And of course, I attribute, in part, my understanding of the messages of Erik Brynjolfsson and Andy McAfee in their — for me — breakthrough books to that coursework, although in retrospect you will still get it with or without such courses…but they help. NOTE: LinkedIn Learning IS a continuous learning platform, and it personifies the “anytime/anywhere” nature of platforms that must be available for continuous learning to prevail — and no, I don’t work at or for LinkedIn. Their approach and excellent examples throughout empower us to understand where we are and how we can formulate, in part, the times to come. They start, where else, from the Industrial Revolution: “…The Industrial Revolution ushered in humanity’s first machine age — the first time our progress was driven primarily by technological innovation — and it was the most profound time of transformation our world has ever seen…”[3] to today: “…Now comes the second machine age. Computers and other digital advances are doing for mental power — the ability to use our brains to understand and shape our environments — what the steam engine and its descendants did for muscle power…”[4] And next (hear the other shoe falling…just an echo actually): “…Technological progress is going to leave behind some people, perhaps even a lot of people, as it races ahead. As we’ll demonstrate, there’s never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value…”[5] And another, far earlier than anyone, we have Drucker again: “…Instead of searching for the right organization, management needs to learn to look for, to develop, to test The organization that fits the task…”[6] And that organizational model is the Learning Organization, and for me, that is Peter Senge: “…Lifelong learning: Another Peter — Senge — popularized the concept of “learning organizations” in the 1980s. But learning organizations are predicated on learning individuals. Drucker called teaching people how to learn “the most pressing task” for managers, given the perpetual expansion of skills and knowledge that are products of the information economy. He eschewed the designation “guru” — which suggests one who counsels — casting himself rather as a student. True to form, Drucker every year assigned himself a topic about which he knew nothing and made it the subject of intense study…”[7] And now is just the right time for Thomas Friedman thinking, which in many ways started me on this path — where I discovered Brynjolfsson and McAfee — as you shall see from these quotes: “…Labor and machines were, broadly speaking, complementary [in past times]. In the second machine age, though, noted Brynjolfsson, “we are beginning to automate a lot more cognitive tasks, a lot more of the control systems that determine what to use that power for. In many cases today artificially intelligent machines can make better decisions than humans.” So, humans and software-driven machines may increasingly be substitutes, not complements…”[8] “…The future possibilities, given the magnitude of the impact of AI on all aspects of our lives, are incalculable — as in positive — and we can have a much brighter future…” One has to agree with the “increasingly substitutes, not complements” statement to some degree, but for most knowledge workers — those on a dedicated continuous learning journey — it will be complementary.
That is, if we are on board now, we adapt to a continuous learning approach in all aspects of our lives and we actively participate in evolving our workplaces into learning organizations. Here is a short list of information resources I recommend and that I believe you will find, at a minimum, interesting: — Jeff Bezos’s Amazon Annual Letter to Shareholders, 2016. I believe we can all learn from any of his shareholder letters because he nails it regarding where companies — and, I would say, all of us who internalize his practice of management — can start adopting his “Day 1” view of the world. — Anders Pink’s incredibly insightful book, Content Curation For Learning. Not only is it “the” read for understanding continuous learning, but it is somewhat magical how, as you proceed through Stephen Walsh and Steve Rayson’s book, you realize what is happening — curation in real time. And for those of you who enjoy page layout for comprehension, this book (PDF) is one of the very best I have ever seen. Yes. Ok. Book fanboy here. — And Anders Pink’s “10 Tips for Better Continuous Learning,” Anders Pink, April 10, 2018. — Most recently I like David Roe’s “6 Ways Artificial Intelligence Will Impact the Future Workplace,” CMS Wire, April 17, 2018. — And this article by FreightWaves Staff, “Artificial Intelligence is ready for primetime,” April 19, 2018. Thanks very much for your time — and of course jump feet first onto the moving train of Continuous Learning and what I believe will be a very exciting future. Ken REFERENCES: [1] What Deep Blue Tells Us About AI In 2017, Steven Levy (Back Channel), 05.23.17 [2] The Wisdom of Peter Drucker from A to Z, by Leigh Buchanan, Editor-at-large, Inc. Magazine, Nov 19, 2009 [3] Brynjolfsson, Erik; McAfee, Andrew. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company. [4] Brynjolfsson, Erik; McAfee, Andrew. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company. [5] Brynjolfsson, Erik; McAfee, Andrew. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company. [6] Drucker, Peter F. Management Challenges for the 21st Century. HarperCollins. [7] The Wisdom of Peter Drucker from A to Z, by Leigh Buchanan, Editor-at-large, Inc. Magazine, Nov 19, 2009 [8] Friedman, Thomas. Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations. Farrar, Straus and Giroux.
AI Has Been Here For Some Time Now — It’s Just Not Quite Evenly Distributed As Of Yet
10
ai-has-been-here-for-some-time-now-its-just-not-quite-evenly-distributed-as-of-yet-1bf9611032ee
2018-04-29
2018-04-29 16:56:23
https://medium.com/s/story/ai-has-been-here-for-some-time-now-its-just-not-quite-evenly-distributed-as-of-yet-1bf9611032ee
false
1,944
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Ken Sepeda
Specific interest: mentoring/coaching and a broad interest in augmented intelligence and information curation. All opinions/thoughts expressed here are my own.
20a195269298
ksepeda
7
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-24
2018-02-24 07:54:13
2018-02-24
2018-02-24 08:21:33
2
false
th
2018-02-24
2018-02-24 08:21:33
0
1bfbd8092ee
0.756918
7
1
0
One of the fundamental Machine Learning algorithms that everyone should know is Linear Regression
4
Linear Regression in One Minute One of the fundamental Machine Learning algorithms that everyone should know is Linear Regression. In fact, we have all been using Linear Regression for a long time — we have seen it since high school!! Because Linear Regression is really just a form of the straight-line equation (y = mx + c). Linear Graph The difference is that in Linear Regression our “line” will not pass through every point; instead, we try to draw the “line” that comes close to all the points with the smallest total “error”. Linear Regression If anyone remembers, this is the kind of chart we like to use in Excel when we want to estimate which straight line best fits data like this!! We predict from two variables: x (the input) and y (the output). Y = b0 + (b1)X → the equation for single-variable Linear Regression, where b0 and b1 are the coefficients (constants) used to build the straight-line equation. As an example of single-variable Linear Regression, we could estimate weight from height (Y is weight and X is height). If what we want to predict is more complex, we can do multivariable Linear Regression, where the equation becomes Y = (b0)(x0) + (b1)(x1) + (b2)(x2) + … + (bn)(xn). Examples of where this can be applied include estimating land prices, calculating BMI, and much more. As you can see, Linear Regression is simpler and closer to everyday life than you might have thought!!
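To make the height-and-weight example above concrete, here is a minimal Python sketch using scikit-learn's LinearRegression; the height and weight numbers are made up purely for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative (made-up) data: predict weight in kg from height in cm
heights = np.array([[150], [158], [165], [172], [180]])
weights = np.array([48, 55, 61, 68, 76])

model = LinearRegression().fit(heights, weights)
print("b1 (slope):", model.coef_[0], "b0 (intercept):", model.intercept_)
print("predicted weight at 175 cm:", model.predict([[175]])[0])

The fitted b0 and b1 are exactly the coefficients of the straight-line equation discussed above, chosen so that the total squared error over the data points is as small as possible.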
Linear Regression in One Minute
11
linear-regression-ในหนึ่งนาที-1bfbd8092ee
2018-05-30
2018-05-30 10:32:00
https://medium.com/s/story/linear-regression-ในหนึ่งนาที-1bfbd8092ee
false
99
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
DumpDataSci
A noobie data sci who need to share an experience
1ae7f932d103
dumpdatasci.th
341
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-20
2018-03-20 05:49:59
2018-03-20
2018-03-20 05:50:42
0
false
en
2018-03-20
2018-03-20 05:50:42
1
1bfc8fc8752e
0.358491
0
0
0
Stuart Burchill is the CEO/CTO of Nanotech Company. He has a great personality and lives in the world of nanotech. He is working…
1
Stuart Burchill Stuart Burchill is the CEO/CTO of Nanotech Company. He has a great personality and lives in the world of nanotech, working endlessly on “high tech”. Now “AI”, or artificial intelligence, has become a hot topic, and everything digital is our way of storing, sharing and analyzing data and information. The company continues to work on new technology, including combined thermal insulation, and continues to work to enhance shareholder value by increasing profitability. Nanotech Company is developing combined thermal insulation and corrosion-prevention products, including super-high-temperature thermal insulation coatings, for release in 2018.
Stuart Burchill
0
stuart-burchill-1bfc8fc8752e
2018-03-20
2018-03-20 05:50:42
https://medium.com/s/story/stuart-burchill-1bfc8fc8752e
false
95
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Stuart Burchill
Stuart Burchill is announced a shareholder update on company progress and corporate developments. We have accomplished that task and accomplished 12 month of pr
27d0990fcff6
stuartburchill
5
1
20,181,104
null
null
null
null
null
null
0
null
0
284538178f0a
2018-09-22
2018-09-22 21:00:32
2018-09-22
2018-09-22 21:02:07
1
false
en
2018-09-24
2018-09-24 23:05:32
3
1bfe59b09342
0.822642
2
0
0
AI to inspire your own innovations
2
AI Inspiration AI to inspire your own innovations David Eagleman wearing his brain-computer interface. As well as being a pioneer of neuroscience, he is a successful author and futurist. The possibilities for AI and the tech concepts that support them: Understand emotion: Emulate and create new senses: New creative approaches to problem solving: Design with intuition: Turn vast, crowd-sourced data into immediate insights or new creations: Create art: The Relationship Between Art and AI — Art is how we’ll imagine new ways to use AI that could help us survive, even evolve (medium.com) Appeal to the 5 senses: Affect childhood development: VIDEO | How AI could be impacting childhood development | Fast Company — Devices like the Amazon Echo and Google Home, along with other AI devices, could be affecting the way kids grow (www.fastcompany.com) Make inaccessible or newly imagined experiences reality: Virtual Reality for Seniors | Welcome to Rendever — Rendever’s virtual reality platform gives residents of the best assisted living and senior care communities the ability… (rendever.com) Man vs machine. Consciousness and intuition: Turn low-res images into high-res images: Change what a person does in a video:
AI Inspiration
5
ai-inspiration-1bfe59b09342
2018-09-24
2018-09-24 23:05:32
https://medium.com/s/story/ai-inspiration-1bfe59b09342
false
165
Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu
null
utsdct
null
Advanced Design for Artificial Intelligence
cid@austin.utexas.edu
advanced-design-for-ai
ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS
utsdct
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jennifer Sukis
Design Principal for AI & Machine Learning at IBM. Professor of Advanced Design for AI at the University of Texas.
be85714f1ba8
jennifer.sukis
242
89
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-18
2018-07-18 13:46:58
2018-07-19
2018-07-19 06:01:32
1
false
en
2018-07-19
2018-07-19 12:41:34
1
1bfeef34cfcd
4.064151
1
0
0
For a long time, I have struggled to understand the facets that make up the Field of Data Science.
5
The Field of Data Science & Yet another New Data Science Venn-Diagram… The Field of Data Science For a long time, I have struggled to understand the facets that make up the Field of Data Science. The depth of knowledge in “Data Science” has become so large, and its methodologies so unique, that it is irrational to lump it into another traditional field such as Mathematics, Economics, Computer Science, or Statistics, or to relegate its capacity to a job title. It is, in its own right, a separate field — a field that encompasses several different unique job titles, and one whose community is heavily dedicated to collaborating and innovating unique new processes, like traditional researchers, but through Open-Source Collaboration. The only caveat is that the rate at which knowledge is being generated is exponentially outpacing the capacity of universities to incorporate curricula that bridge the gap between current understanding in the field and their antiquated and slow pedagogical systems. To overcome this obstacle, online courses, programs, and in-person bootcamps were created to accelerate the entry of competent data scientists into the workforce. Even then, students come to realize that, regardless of a company’s stance that the Data Scientist position doesn’t require a PhD, it still requires an education beyond the education they could reasonably attain there. My Data Science Journey I was fortunate enough to have pursued an undergraduate degree in economics, which opened the door for me to the world of statistics, probability theory, game theory, and other interesting quantitative theoretical practices. Beyond that, my personal interest in software development, and just an overall curiosity for understanding why things happen, led me to Data Science. After graduation, rather than doing a bootcamp or MOOC, I decided to take the risk and do an online Master’s in Data Analytics. I have been extraordinarily underwhelmed by the courses, which are primarily taught using Excel and at most cover what multiple regression is and how to implement it. On the plus side, simply having that master’s degree helped me land my awesome job at a Fortune 1000 company, where I utilize data science every day. I am working as a Market Research Analyst, but behave more like a data scientist in my day-to-day functions. I use Python and R in all of my analysis projects, and use Excel as a repository and an improvised form of version control. I perform analysis on the Customer Analytics side of the company and try to derive meaning about the sentiment and satisfaction of our customers, in order to help the business better communicate and connect with its current customers. I also act as the liaison with the Big Data team and work within the Hadoop cluster, writing SQL (i.e. HiveQL) to pull relevant information in for our team. Most of you can intuitively piece together that my job title is not accurate. I am not a Market Research Analyst, and I am not a Data Scientist. But what am I? A Decision Scientist. After listening to the awesome podcast with Drew Conway and Hugo Bowne-Anderson, I heard Drew explain how he generated his extremely popular Data Science Venn-Diagram. It was within this podcast that I found the inspiration to add on to the work that Drew did and make a more inclusive diagram of the additional components in the field today.
The Field of Data Science Venn-Diagram Explanations The additional component that was missing in Drew’s initial version is what I call Entrepreneurial Capacity: it represents the capacity to think big enough that there can be a broad range of application in your ideation, and to have the ability to strategically implement the approaches necessary to achieve your desired outcome. With its addition, a more illuminating picture of the Field of Data Science can be made. The relevant additions include: Decision Scientist: An individual who can draw on all four quadrants of the diagram to utilize data science to understand and interpret business problems from data, and who has the capacity to produce actionable data and non-data products. Decision Scientists help convey meaning to organizations, and help them consume and use analytics. Data Scientist (not an addition, but a re-classification): An individual who can draw on the Mathematical Skills, Hacking Skills, and Substantive Experience quadrants of the diagram to utilize data science to understand the mathematical and statistical intricacies and significance in the data, and who has the capacity to create pipelines for data creation and collection alongside software engineers and database admins. Data Scientists help organizations create, generate, and process meaningful analytics. Business Intelligence Developer: An individual who can draw on the Entrepreneurial Capacity, Hacking Skills, and Substantive Experience quadrants of the diagram to utilize data science to understand the relevant information within the data, using less mathematical and statistical machinery than a data scientist. Unlike the decision scientist, BI Developers tend to use more of an entrepreneurial capacity to develop meaningful interpretations of the data and to generate meaningful dashboards and interactive visualizations. The BI Developer is the intersection between a Software Developer and a Decision Scientist. Data Analyst: An individual who can draw on the Mathematical Skills, Hacking Skills, and Substantive Experience quadrants of the diagram to utilize data science to understand the relevant information within the data, with less computational complexity than a data scientist. Data Analysts don’t necessarily need to rival Data or Decision Scientists in terms of their hacking skills, yet they can develop relevant interpretations of the data to generate actionable insights through software such as Excel, Stata, SPSS, etc. Data Analysts are an intersection between Decision Scientists and Data Scientists. Conclusion It is my opinion that these extra components help shed a more inclusive light onto the growing field of Data Science, and at the very least strike up a conversation about the proper placement of the new sub-classifications within the data science field. Anyway, I digress… Keep it Logical, -> Jeremy A. Seibert
The Field of Data Science & Yet another New Data Science Venn-Diagram…
1
the-field-of-data-science-yet-another-new-data-science-venn-diagram-1bfeef34cfcd
2018-07-19
2018-07-19 12:41:34
https://medium.com/s/story/the-field-of-data-science-yet-another-new-data-science-venn-diagram-1bfeef34cfcd
false
1,024
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Jeremy Seibert
Decision Scientist. Market Research Analyst. Aspiring Computational Economist. Graduate Student.
aa354200a694
jaseibert5
3
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-18
2018-03-18 15:03:09
2018-03-18
2018-03-18 16:39:06
3
false
en
2018-03-18
2018-03-18 16:39:06
12
1c01373def70
2.09717
5
1
0
We will use one of the most popular datasets in Machine Learning for this tutorial i.e. the MNIST dataset. We are not going to use Kaggle…
5
📊 Host your own Data science competition! We will use one of the most popular datasets in Machine Learning for this tutorial, i.e. the MNIST dataset. We are not going to use Kaggle to host our competition. What? Why? We are going to use something that claims to be better than Kaggle 😮 and is open source 🚀 🍻. Could this be the Kaggle Killer? 🔥 Us nerds can have different opinions about spaces and tabs, but one thing we all agree on is: Open Source > Proprietary. Open source > Proprietary! Duh! Presenting EvalAI, the open source answer to Kaggle. EvalAI is an open source web application that helps researchers, students and data scientists create, collaborate and participate in various AI challenges organised around the globe. EvalAI has a sick landing page, and the platform continues to get better every day because of active contributions from the community. I have high hopes for this project. Don’t forget to star their GitHub repository! Submitting the Challenge Follow these steps to create a challenge on the EvalAI platform: Use the MNIST dataset available here. Create Test and Train splits from the dataset. You could make use of sklearn’s train_test_split for this (a minimal sketch follows below). Check this notebook for a sample preparation. Follow the instructions on this page to create a basic outline for your challenge submission. The instructions are pretty clear, and they use yaml configurations which are very easy to interpret. Create your evaluation script. Follow the instructions on this page. Here is a sample evaluation script that I created for submitting the MNIST challenge. Check this notebook for trying out your scoring function before you add it to your evaluation script. Easy Peasy That’s it! Submit your zip on the platform and someone from the team will review and approve your submission before the challenge goes LIVE! Host the best model submission! 🏌🏻 Every budding data scientist works on digit recognition, and then that project disappears into the pile of other projects they work on. I built this digit recognizer web app to show off the project I did for my Pattern Recognition course. Did you check it out yet? CHECK IT OUT! It’s fun! I used Python’s Flask for building the API and HTML Canvas to enable drawing digits yourself. You can use this code for building your own little UI 😉. Use the best submission model from your competition and host it 🚀 Suggestions? Comments? Criticism? Everything is welcome!
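As a minimal sketch of that split step — assuming scikit-learn and the OpenML copy of MNIST rather than whichever download link the article points to — the held-out test set for the challenge could be produced like this:

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# Load MNIST: 70,000 digit images of 28x28 pixels, flattened to 784 features.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

# Hold out 20% as the hidden test split used for leaderboard scoring.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(X_train.shape, X_test.shape)  # (56000, 784) (14000, 784)

Stratifying on the labels keeps the ten digit classes balanced between the public training split and the hidden test split.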
📊 Host your own Data science competition!
16
host-your-own-data-science-competition-1c01373def70
2018-05-03
2018-05-03 14:49:06
https://medium.com/s/story/host-your-own-data-science-competition-1c01373def70
false
410
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Dhruv Batheja
🎓 Data Science @ TU Delft
5fa40e8a5055
dhruvbatheja
97
99
20,181,104
null
null
null
null
null
null
0
null
0
217734a2f5b9
2017-12-12
2017-12-12 14:34:45
2017-12-12
2017-12-12 12:30:02
3
true
en
2017-12-12
2017-12-12 14:37:16
5
1c01b1bde234
3.716038
3
0
0
by Aaron Krumins
5
AlphaZero Is the New Chess Champion, and Harbinger of a Brave New World in AI by Aaron Krumins The world has quietly crowned a new chess champion. While it has now been over two decades since a human has been honored with that title, the latest victor represents a breakthrough in another significant way: It’s an algorithm that can be generalized to other learning tasks. It gets crazier. AlphaZero, the new reigning champion, acquired all its chess know-how in a mere four hours. AlphaZero is almost as different from its fellow AI chess competitors as Deep Blue was from Gary Kasparov, back when the latter first faced off against a supercomputer in 1996. And what’s more, AlphaZero stands to upend not merely the world of chess, but the whole realm of strategic decision-making. If that doesn’t give you pause, it probably should. From its origins in India, the game of chess has stood the test of time as a measure of strategic intelligence. Games of imperfect information, like the variation of poker known as Texas Hold-‘Em, arguably have more in common with our day-to-day strategic decisions. But chess remains an important measure of how we think about intelligence. Chess requires being able to gauge an opponent’s tactics, memorize hundreds of board positions, and think ahead several moves. At least that was the common approach to the game until recently, and also the way conventional chess AIs like Deep Blue were programmed. The previous reigning champion, Stockfish 8, was no exception. It used a search engine to explore different move combinations that had been programmed into it by its creators. Such chess engines make widespread use of opening books and endgame tables, effectively supplying the search algorithm with all the commonly accepted chess wisdom from which to draw its moves. AlphaZero, the new champion, soundly defeated Stockfish 8 in a 100-game series without losing a single match to its adversary. To do so, it took a completely different tack. The creators of AlphaZero, the London-based AI project known as DeepMind, have pioneered an approach to AI known as deep reinforcement learning. Instead of looking at games like Chess and Go as search problems, they treated them as reinforcement learning problems. Reinforcement learning may sound vaguely familiar if you took an Intro to Psychology class in college; it’s precisely the way humans learn. We actually don’t play chess like a search engine, exhaustively exploring different move combinations in our head to find the best one. Rather, through repeated playing we gain a set of associations about different board positions and whether they are advantageous. Through repeated exposure, good board positions get reinforced in our minds, and poor ones get pruned — though unlike pure reinforcement learning, we may augment this with information taken from books or word of mouth. Then we draw upon these associations during game play. The mathematical basis of how we apply reinforcement learning as humans has been painstakingly worked out over the last 30 years. That brings us to AlphaZero. By simply playing against itself for a mere 4 hours, the equivalent of over 22 million training games, AlphaZero learned the relevant associations with the various chess moves and their outcomes. In doing so, it was learning much the way a human does, but because the computer can compress 100,000 hours of human chess play into a few minutes, it builds up a set of associations far more quickly than we ever could, and over a far wider range of move combinations. 
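As a rough illustration of that reinforcement idea — and emphatically not DeepMind's actual AlphaZero algorithm, which couples deep neural networks with Monte Carlo tree search — a toy value table can be nudged toward game outcomes after every self-play game. The play_one_game placeholder below is hypothetical and merely stands in for a real game engine.

import random

values = {}   # position -> estimated value in [0, 1]
ALPHA = 0.1   # learning rate

def update_from_game(positions_seen, outcome):
    # outcome: 1.0 for a win, 0.0 for a loss, 0.5 for a draw
    for pos in positions_seen:
        v = values.get(pos, 0.5)                 # unknown positions start neutral
        values[pos] = v + ALPHA * (outcome - v)  # move the estimate toward the result

def play_one_game():
    # Hypothetical stand-in: a real engine would return the positions actually
    # visited during self-play and the final result of the game.
    positions = ["pos_%d" % random.randint(0, 9) for _ in range(5)]
    return positions, random.choice([0.0, 0.5, 1.0])

for _ in range(10_000):
    seen, result = play_one_game()
    update_from_game(seen, result)

Positions that keep showing up in winning games drift toward high values and get "reinforced", while positions associated with losses drift down and are effectively pruned — the same associative process the article describes, minus the deep network and the massive scale.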
Building upon research done in psychology and animal cognition, DeepMind created a reinforcement learning algorithm first to conquer a handful of early Atari video games. Realizing the importance of such a multipurpose learning algorithm, Google quickly snapped up the company in a potentially lucrative acquisition. Within a few years, Google demonstrated this by using deep reinforcement learning to optimize the heating and cooling of its data centers, reducing its energy footprint by 15 percent. Deepmind made further waves by applying reinforcement learning to the board game Go, thought beyond the scope of AI because of its almost infinite variety of move combinations. Now the company has shown that the same approach can dominate in chess. Since reinforcement learning is the method we humans use to gain many kinds of skills, what can deep reinforcement not learn? Deep reinforcement learning is nothing less than a watershed for AI, and by extension humanity. With the advent of such über-algorithms capable of learning new skills within a matter of hours, and with no human intervention or assistance, we may be looking at the first instance of superintelligence on the planet. How we apply deep reinforcement learning in the years to come is one of the most important questions facing humanity, and the basis of a discussion that needs to be taken up in circles far wider than Silicon Valley boardrooms. Aaron Krumins is the forthcoming author of a book on reinforcement learning. Originally published at www.extremetech.com on December 12, 2017.
AlphaZero Is the New Chess Champion, and Harbinger of a Brave New World in AI
3
alphazero-is-the-new-chess-champion-and-harbinger-of-a-brave-new-world-in-ai-1c01b1bde234
2018-04-26
2018-04-26 17:46:22
https://medium.com/s/story/alphazero-is-the-new-chess-champion-and-harbinger-of-a-brave-new-world-in-ai-1c01b1bde234
false
839
All the cutting-edge chip news, software updates, and future science of ExtremeTech, distilled into an easy-to-read format.
null
extremetechdotcom
null
ExtremeTech Access
jamie_lendino@ziffdavis.com
extremetech-access
SCIENCE,TECH,FUTURE,SPACE,COMPUTERS
extremetech
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
ExtremeTech
ExtremeTech is the Web’s top destination for news and analysis of emerging science and technology trends, and important software, hardware, and gadgets.
f64ef3d68bc6
extremetech
28,669
58
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-07
2018-02-07 11:05:48
2018-02-07
2018-02-07 11:06:53
1
false
en
2018-02-07
2018-02-07 11:06:53
0
1c01bea88e2f
0.271698
208
1
0
… the robot rapidly learns how your brain works.
5
The AI-boosted software mediating the connection between you and… … the robot rapidly learns how your brain works.
The AI-boosted software mediating the connection between you and…
213
the-ai-boosted-software-mediating-the-connection-between-you-and-1c01bea88e2f
2018-06-16
2018-06-16 06:11:52
https://medium.com/s/story/the-ai-boosted-software-mediating-the-connection-between-you-and-1c01bea88e2f
false
19
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Neurogress
THINK. SHAPE YOUR WORLD. http://Neurogress.io Ask for more information: https://t.me/neurogress
df47ccd2c097
neurogress
2,185
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-25
2018-07-25 13:03:54
2018-07-26
2018-07-26 19:31:01
1
false
en
2018-07-26
2018-07-26 19:31:01
2
1c02ccf0db8c
2.011321
1
0
0
There are many different algorithms used to train a neural network, and many variations of each. In this article, I am going to outline…
5
5 Neural Network Algorithms You Need to Learn There are many different algorithms used to train a neural network, and many variations of each. In this article, I am going to outline five algorithms that will give you a well-rounded understanding of how a neural network works. I will start with an overview of how a neural network works, mentioning at what stage the algorithms are used. Neural networks are loosely modelled on the human brain. They are made up of artificial neurons, take in multiple inputs, and produce a single output. Because nearly all the neurons influence each other — and are therefore all connected in some way — the network is able to acknowledge and observe all aspects of the given data, and how these different bits of data may or may not relate to each other. The network may be able to find very complex patterns in a large amount of data that would otherwise be invisible to us. In this visualization of an artificial neural network (ANN), there are three neuron layers: the input layer (left, in red), a hidden layer (in blue), and the output layer (on the right). Assume this network is meant to predict the weather. The input values would be attributes relevant to the weather, such as time, humidity, air temperature, pressure, etc. These values would then be fed forward to the hidden layer, while being manipulated by the weight values, which are initially random, unique values on every connection or synapse. The new values on the hidden layer are fed forward to the output, while being manipulated again by the weight values. At this point, it is important to recognize that the output would be completely random and incorrect. The manipulation that happened during the feed-forward step contained no actual logic relevant to the problem because the weights start as random. However, suppose we are training the ANN with a huge data set that contains many previous weather forecasts with the same attributes and the result of these attributes (the target value). After the feed-forward stage, we can compare the incorrect output to the desired target value, calculate the error margin, and then back-propagate through the neural network and adjust all the weight values with respect to how they contributed to the margin of error. If we repeat this forward and backward pass a thousand more times with each item of data in the dataset, the weights will start to take a shape that manipulates future inputs in a way that is relevant to the problem. Often, even more success can come from training on the same dataset multiple times. The feed-forward step can be seen as guessing, while the back-propagation step then educates that guess based on the margin of error. Over time, the guessing becomes extremely accurate. Read the full article here on OpenDataScience.com.
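As a minimal sketch of the feed-forward and back-propagation loop just described — with tiny made-up numbers standing in for the weather attributes and target values — a two-layer network in plain NumPy might look like this:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples with 3 input features each, and one target value per sample
# (purely illustrative, not real weather data).
X = rng.random((4, 3))
y = np.array([[0.2], [0.7], [0.5], [0.9]])

# Random initial weights: 3 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(1000):
    # Feed-forward: the inputs are manipulated by the weights, layer by layer.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Error margin between the network's guess and the target values.
    error = output - y

    # Back-propagation: adjust each weight according to its contribution to the error.
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * (hidden.T @ grad_output)
    W1 -= lr * (X.T @ grad_hidden)

print(np.round(output, 2))  # predictions drift toward the targets as training proceeds

Each pass nudges the weights in proportion to how much they contributed to the error, which is exactly the "guess, then educate the guess" cycle described above.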
5 Neural Network Algorithms You Need to Learn
1
5-neural-network-algorithms-you-need-to-learn-1c02ccf0db8c
2018-07-26
2018-07-26 19:31:01
https://medium.com/s/story/5-neural-network-algorithms-you-need-to-learn-1c02ccf0db8c
false
480
null
null
null
null
null
null
null
null
null
Neural Networks
neural-networks
Neural Networks
3,870
#ODSC - The Data Science Community
Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
2b9d62538208
ODSC
665
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-08
2017-12-08 08:12:09
2017-12-08
2017-12-08 08:56:06
9
false
en
2017-12-08
2017-12-08 08:56:06
21
1c0339355a4
6.396226
16
0
0
At the moment, it seems everyone and their dog has an opinion on big data, and at a dinner party, morning coffee, or business meeting you…
5
Humanising Data: Using New Technology For Social Good At the moment, it seems everyone and their dog has an opinion on big data, and at a dinner party, morning coffee, or business meeting you can be subjected to views ranging from luddite fear to unadulterated enthusiasm. The collection of vast amounts of personal data is nothing new, but as data analysis becomes easier and more readily accessible, consumers and businesses alike are catching on to the scale of data collection in 2017. Open-source technologies like Apache Hadoop and Spark have opened data insight up to a broader audience, and we are undergoing a collective awakening on the importance of data in everyday life and decisions, as well as some of the risks associated with living a digitally enriched life. Where previously people worried about government data accumulation on populations, this pales in comparison to the breadth of data available to Google, Amazon, and Facebook. Every digital interaction we have, item we purchase, or bit of media we consume shapes and is shaped by our digital identities, and businesses are now often able to understand and predict our behaviour better than we can ourselves. We are constantly detailing our digital selves, giving businesses and governments both access to ourselves and the ability to shape us. The past 18 months have also churned out some fairly concerning examples of this process; we now know that the manipulation of data created targeted digital campaigns that steered the results of the EU Referendum and the US election. It seems that the further we delve into the data debate, the more fear we can develop about the future. However, there is also a growing effort to introduce ethics around commercial data usage, privacy and the role of regulation. On top of this, there are serious upsides for humans in the data age, both as consumers and as members of a wider global community. In this article, we lay aside some of the fears around data and explore examples of groups humanising data from our expanding digital footprints. Better Creative Campaigns: Spotify The most obvious human-convenience application of data is the ability of corporates to form extremely personal relationships with their customers, delivering products, information, and people with eerie accuracy. Outside of targeted products, brands are also using big data to steer more interesting creative campaigns. In 2016, Spotify’s out-of-home campaign drew on unusual data insights to surprise their audience. The ads were funny and topical, with direct appeals to their audience, like “Dear person who played ‘Sorry’ 42 times on Valentine’s Day, what did you do?” Spotify’s campaign used our own data to surprise or humour us. Real-Time Response: Telefonica Magic Box Telefonica’s partnership with UNICEF, Magic Box, aggregates data in real time to optimise responses to natural and humanitarian disasters. Piloted in Colombia, one of the countries most vulnerable to climate change, the program taps into the full data resources of the telecommunications company as well as publicly available data to understand the exact extent of a crisis. Rather than relying on public self-reporting, Magic Box is able to tap into far more data at much quicker speeds for disaster responses. Big data for personal health: Glow Glow takes the best of big data on women’s health and combines it with personal health technology in an endeavour to improve fertility without treatment.
The app collects vast amounts of open-source health data and combines it with an individual user’s information to maximise chances of conception. Started by PayPal co-founder Max Levchin, the app demonstrates the health potential of combining big health data with individual data. Glow could also potentially minimise the reliance on expensive and traumatic traditional fertility treatments. Working across sectors: DataKind DataKind provide social purpose organisations with pro bono help from leading data scientists in order to more deeply understand the issues they are trying to solve. They use the best of the for-profit world’s data capacity for social good, with projects as diverse as predictive modelling for financial inclusion in Senegal, managing water demand in California, and using data to target homelessness in London. DataKind show the potential for collaboration using the best technical skills from the for-profit sector and the social mission of the nonprofit sector. Inclusive Insight: Mastercard Mastercard’s Centre for Inclusive Growth, launched 2013, uses the group’s vast data capacity to encourage understanding about inequality and understand how to encourage inclusive development. As part of the company’s corporate social responsibility program, they used insight collected to steer giving efforts into programs that encourage equal access to financial resources. Take, for example, Jordan, where Mastercard used data to understand the drivers of youth unemployment and map new routes to inclusivity. Mastercard are also using data to understand the actual needs and behaviours of refugees across the globe and granting money to social entrepreneurs to solve these issues. Mastercard’s example is interesting because it uses one of the company’s core capacities — data — to influence its social responsibility platform, and then open-sourced the findings. Government Best Practice By tapping into the vast resources available through open and state collected sources, many governments are driving more effective and targeted interventions to social issues. Take the Obama Administration in the US, who launched the Data-Driven Justice Initiative, taking big demographic data to track the plight of incarcerated or at-risk individuals through social services. The project was able to map the key risk factors and contributors to an individual’s’ chance of incarceration, and better tailor responses to key demographics within the broader justice system. Closer to home, in 2016 the NSW Government began using aggregated data from electoral rolls, complaints and bills to determine illegal boarding houses across Sydney. This move could reduce exploitation, abuse, and environmental risk involved in Sydney’s illegal housing. While medical data protection is a serious privacy issue, many governments are also moving to standardise and centralise patient information, in order to coordinate state interventions and understand meta health trends. In Qatar, medical records are standardised and are transferrable between health services, who rely on both patient and specialist-led technology to steer better health outcomes, and Britain’s National Health Service is soon to follow. Health, justice, and housing make up some of the greatest global public budget spends, and smart governments are looking to centralise information across their services to understand wider social trends and use funds more effectively. Across borders, the UN is also using big data to further the Sustainable Development Goals. 
The UN's Global Pulse has projects all over the world working to promote equality, from monitoring mother-to-child transmission of HIV and mapping economic resilience after a natural disaster, to understanding changing perceptions towards refugees and the migration flows of populations. Encrypted Data For Transparency: Provenance Blockchain has received a dramatic increase in public interest with the rise of cryptocurrency and the debate around what this means for markets, but outside cryptocurrency, blockchain has enormous consequences for bringing transparency to the production of goods. Provenance are building technology that makes it easy for businesses to achieve supply chain transparency and steer ethical production, with 200 active clients. One project using Provenance technology is the International Pole and Line Foundation, who used blockchain to map every point of fish production from catch to consumer. Screening for sustainable sourcing free from slave labour, the technology tags the fish, collecting data on every part of the process, with the whole process ultimately available to consumers. Provenance is mainstreaming and simplifying technology that lets businesses practice transparency more easily, meaning the barrier to ethical production is lower. As citizens, consumers, and community members in the digital age, we live with ever-expanding footprints of data behind us. This brings its own concerns; we are forfeiting the right to privacy and handing over some control to groups with political or commercial incentives. On the other hand, many groups out there are using the unprecedented availability of data to improve quality of life for all people, allowing social impact efforts to be more targeted and effective than ever before. At the projects*, we are excited to see what new territories big data takes brands into. If you're interested in hearing more, please get in touch.
Humanising Data: Using New Technology For Social Good
26
humanising-data-using-new-technology-for-social-good-1c0339355a4
2018-05-25
2018-05-25 16:22:32
https://medium.com/s/story/humanising-data-using-new-technology-for-social-good-1c0339355a4
false
1,377
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
the projects*
An independent brand consultancy with offices in LA, NY, London and Sydney. We craft creative solutions that keep brands thriving in an ever changing world.
9699c28d4104
theprojects
1,972
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-31
2018-07-31 08:29:08
2018-07-31
2018-07-31 10:25:40
2
false
en
2018-08-21
2018-08-21 06:58:01
2
1c038e46f3cd
2.839937
0
0
0
When it comes to automating processes, procurement organizations are no slouches. Long ago, chief procurement officers automated…
2
Will Machine Learning Save Procurement Millions a Year? When it comes to automating processes, procurement organizations are no slouches. Long ago, chief procurement officers automated administration, payroll processing, material-resource needs calculation, invoice generation, and material flow tracking. The function has, for the most part, eliminated redundant work to dedicate more time toward more strategic activity and transitioned the collaboration to business networks. Yet, exception handling seems to be at an all-time high as companies try to get a better handle on their suppliers and partners to gain a competitive edge. And procurement professionals are now stretched to their limit, bombarded with a steady stream of requests — each one requiring analysis of massive volumes of documents. Organizations, on average, waste between 3% and 4% of overall external spend on unnecessary transaction costs, excessive inefficiency, and regulatory noncompliance. For an organization with an annual procurement spend of $2 billion, cutting off such leakages can add $70 million a year back to the bottom line. But there's good news. "Thinking" algorithms — known as machine learning — can speed up the process of exception handling with significant payback. And they literally do all the thinking — and work — for you. Machine learning: Overcoming common barriers to full automation Unlike robotic process automation (RPA) and preliminary use cases for artificial intelligence (AI), machine learning (ML) handles activities that call for complex rules and pattern recognition. By demonstrating a basic level of human judgment, machine learning can, for example, assign transactions to formal spend categories and subcategories. This critical first step in uncovering sourcing opportunities can transition from a traditionally time-consuming, manual task to a real-time, automatic response. And easier categorization is just the beginning. Machine learning can further automate procurement and enhance its strategic reputation across five major aspects of the function: · Supplier management. Procurement organizations want to make sure that their suppliers are financially viable and stable. Procurement can apply machine learning to determine the most competitive rate to negotiate and discover the best contract terms that will help the partnership become more successful through as-promised delivery and on-time payment. · Capabilities matching. All buyers want their suppliers to fully meet current and expected needs and deliver exceptional service. But it can be tough to separate marketing hype from reality. By scanning the industry for new competencies and aligning them with business requirements through machine learning, procurement can develop processes to continuously test the ability of new and existing partnerships. · Efficiency monitoring. Machine learning can track and monitor the efficiency of every entity in the supply chain and rate suppliers based on their performance, enabling procurement to hold vendors accountable while helping to ensure that operations run at peak standards. · Compliance enforcement. Machine learning can pick out hidden patterns that can indicate whether a supplier is not meeting business and regulatory requirements faster and more efficiently than any human. With the data on hand, the procurement function can engage in difficult discussions with greater ease and in a manner that is both productive and decisive. · Value creation. 
Every area of the company demands quality and maximum value to the bottom line. Using machine learning, the procurement function can deliver it, automating balanced score-carding to track the efficiency of the supplier relationship and the effectiveness of a purchased good or service on all transactions. Plus, machine learning can empower the team to develop rules that permit flexibility and responsiveness while controlling risk. While technology will undoubtedly continue to evolve, and deliver more value and opportunity, it’s impossible to ignore machine learning as a transformational stepping stone toward providing quick value that benefits the entire company and its customers. And procurement leaders that adopt and embed this next-generation form of artificial intelligence in their processes can lead the way and position the function as a powerhouse of strategic influence on business success.
Will Machine Learning Save Procurement Millions a Year?
0
will-machine-learning-save-procurement-millions-a-year-1c038e46f3cd
2018-08-21
2018-08-21 06:58:01
https://medium.com/s/story/will-machine-learning-save-procurement-millions-a-year-1c038e46f3cd
false
651
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Dr. Marcell Vollmer
Chief Digital Officer & SVP @SAPAriba. Passionate about #Life, #Coffee, #PhD in #Politics, #MBA in #Economics, #SocialMedia Enthusiast and curious to learn&grow
e92f982c8bd0
marcellvollmer
11
15
20,181,104
null
null
null
null
null
null
0
null
0
f702855ffe47
2017-10-27
2017-10-27 20:00:27
2017-10-27
2017-10-27 20:00:28
0
false
en
2017-10-27
2017-10-27 20:00:28
11
1c03b7dec9cc
1.124528
0
0
0
null
3
Make Way For The Robo-Trader # medium.com Robo-advisors first took over the advisory industry — now robots are infiltrating the trading floor. It’s no… VantagePoint Hot Stocks Outlook for October 27th, 2017 # keepingstock.net The Hot Stocks Outlook uses VantagePoint market forecasts that are up to 86% accurate to demonstrate how tra… Integrate.ai # medium.com Integrate.ai demonstrates commitment to engineering excellence with additions from Amazon and Stitch Fix The… Walmart to release more shelf-scanning robots soon # medium.com Walmart plans to bring more robots to its stores. The machines assist staffers by scanning out-of-stock item… Walmart to release more shelf-scanning robots soon # medium.com Walmart plans to bring more robots to its stores. The machines assist staffers by scanning out-of-stock item… Sales = Budget x Authority x Need x Time # medium.com Qualifying frameworks like BANT help validate potential leads Every company is driven by Sales. The math is … Indie developers need to lead the way unifying home voice assistants. # medium.com Must consumers make a one-time choice between the red and blue pills of Google Home and Amazon Alexa? Of cou… Sony continues to invest deeply in imaging with 3D sensors # sonyreconsidered.com Aims to provide eyes for machines For well over a year, regular readers will attest to my attempts to highli… Future Factories: How AI enables smart manufacturing # medium.com Today’s consumers are pickier than ever. They want customized, personalized, and unique products over standa… Komputation v0.10.2 # medium.com - Clarified the Maven dependency — CUDA files are now accessed as streams. — Included commons-io as a depend… Artificial intelligence is changing the face of plastic surgery # venturebeat.com GUEST: You’ve probably read that artificial intelligence is transforming medicine. While the field has yet t…
11 new things to read in AI
0
11-new-things-to-read-in-ai-1c03b7dec9cc
2017-10-27
2017-10-27 20:00:30
https://medium.com/s/story/11-new-things-to-read-in-ai-1c03b7dec9cc
false
298
AI Developments around the world
null
null
null
AI Hawk
aihawk1089@gmail.com
ai-hawk
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
null
Deep Learning
deep-learning
Deep Learning
12,189
AI Hawk
null
a9a7e4d2b403
aihawk1089
15
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-30
2017-10-30 11:07:04
2017-10-31
2017-10-31 00:35:42
1
false
en
2018-08-26
2018-08-26 17:26:07
9
1c063dcaebc4
2.384906
4
1
0
NEC and Yanaka Coffee collaborate to create blend coffee beans inspired by classic Japanese literature
5
Drinkable novels: when AI and Big Data are freed from interfaces NEC and Yanaka Coffee collaborate to create blend coffee beans inspired by classic Japanese literature NEC x Yanaka Coffee "Drinkable Novel" coffee beans (image from PR Times) When we think of Artificial Intelligence (AI), Big Data and all the other emerging technologies, we tend to assume they all operate inside a screen. Take our phones, for instance. This personal device tells us the distance we walked or the number of hours we slept in the form of bar charts and pie charts. But we are more than what these numbers say. In fact, data has the potential to be transformed into a source of material that could enrich our lives or connect us with others, if only we know how to humanise it. It can be disseminated as part of our everyday environment, allowing us to not only see, but also hear, smell, taste and feel it in its crafted form. NEC and Yanaka Coffee's recent collaboration on "Drinkable Novels" is a great case that shows the strengths of emerging technologies when designed with empathy and human qualities in mind. NEC, a Japanese information technology service provider, and Yanaka Coffee, a popular coffee roasting shop, have released a series of coffee beans inspired by six classics of Japanese literature, including Sōseki Natsume's bittersweet love story, Kokoro, and Osamu Dazai's dark and eerie atmospheric book, No Longer Human. NEC data scientists analysed over 10,000 book reviews to develop a "taste" chart comprising 5 key parameters: bitterness, sweetness, aftertaste, brightness, heaviness. For instance, a review such as "it reminded me of my youthful days. I felt nostalgic" was translated into a high sweetness level, while others such as "it left me thinking quite deeply about life" were regarded as strong in aftertaste. Using NEC's latest AI system, "NEC the WISE," these data were processed through Deep Learning-embedded software, "NEC Advanced Analytics — RAPID Machine Learning", to develop a data analysis model. The model was used to automate the book review analysis process, and six coffee radar charts were developed. The radar charts were then used as "recipes" at Yanaka Coffee, where the final coffee beans were designed and implemented. This is one of the many fascinating case studies where AI and Big Data are integrated into our daily life in a form that is easier for customers to understand. In this way, we are not merely a "user" but a "human" given an opportunity to engage with the latest technology through multi-sensory experiences. If emerging technologies can be translated into a language familiar to all of us, it could perhaps enhance the way we engage with our surrounding data. This post was also inspired by: Google Play and IDEO's article, "Applying Human-Centred Design to Emerging Technologies." Fjord's "Living Services: Data and Design, a Tale of Two Problem Solvers" Domenic Lippa's "250 Facts & Figures" at London Design Festival 2017 Giorgia Lupi's Data Items: A Fashion Landscape at The Museum of Modern Art Designit's "Bringing Data to Life" project with BVC Method's "Surrounded by Data" exhibition My good friend Alex Rosso's rhetorical question: "What is the aroma of innovation?" A designer + researcher + strategist with a background in graphic design and cultural studies, Megumi Koyama is a recent graduate of the Central Saint Martins MA Innovation Management course interested in design for civic engagement and empowerment. 
She is currently based in London, UK seeking opportunities in design for social good/tech for good. Contact koyamamegu[at]gmail.com for opportunities and collaborations! LinkedIn
Drinkable novels: when AI and Big Data are freed from interfaces
102
drinkable-novels-when-ai-and-big-data-are-freed-from-interfaces-1c063dcaebc4
2018-08-26
2018-08-26 17:26:07
https://medium.com/s/story/drinkable-novels-when-ai-and-big-data-are-freed-from-interfaces-1c063dcaebc4
false
579
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Megumi Koyama
A designer + researcher + strategist. Graduate of CSM MA Innovation Management // #designforgood #techforgood https://megumikoyama.myportfolio.com/
62559620a642
megukoyama
79
50
20,181,104
null
null
null
null
null
null
0
null
0
f0db56adb08d
2018-03-03
2018-03-03 00:04:43
2018-03-03
2018-03-03 00:07:34
1
false
en
2018-03-03
2018-03-03 00:07:57
17
1c067239d319
1.090566
2
0
0
On People… Book release of “Tensorflow for Deep Learning” — Link
5
Woebot raises 8 million, Tensorflow 1.6, Project Alexandria, Google ML course, Deep learning notations,… On People… Book release of "Tensorflow for Deep Learning" — Link Computational social science is not equal to computer science plus social data — Link Deep Learning notations by Ian Goodfellow — Link On Education and Research… Google releases new machine learning course for free — Link Google releases machine learning glossary in Spanish, French, Korean, and Mandarin — Link [Paper] Evaluating the stability of embedding-based word similarities — Link [Paper] Averaged stochastic gradient descent with weight-dropped LSTM or QRNN — Link [Paper] Adversarial examples that fool both human and computer vision — Link On Code and Data… Tensorflow 1.6 is released — Link Google releases new dataset and challenge for landmark recognition — Link JupyterLab is released and ready for mass user adoption — Link Code repository for Paradigms of Artificial Intelligence Programming (by Peter Norvig) — Link On Industry… Novel antibiotic recipes could be hidden in medieval medical text — Link Automated psychotherapy bot, Woebot, raises an $8 million round of Series A funding — Link Worthy Mentions… GitHub survived the largest DDoS attack ever recorded — Link Allen Institute for AI to pursue common sense for AI via Project Alexandria — Link How to cross-validate PCA, K-means clustering, and other unsupervised algorithms — Link Please leave a 👏 or share, it helps more than you think.
Woebot raises 8 million, Tensorflow 1.6,
21
woebot-raises-8-million-tensorflow-1-6-1c067239d319
2018-05-15
2018-05-15 09:13:00
https://medium.com/s/story/woebot-raises-8-million-tensorflow-1-6-1c067239d319
false
236
Diverse Artificial Intelligence Research & Communication
null
null
null
dair.ai
ellfae@gmail.com
dair-ai
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,RESEARCH,TECHNOLOGY,DATA SCIENCE
dair_ai
Machine Learning
machine-learning
Machine Learning
51,320
elvis
Researcher and Science Communicator in Machine Learning and NLP; I discuss more about Linguistics, Emotions, NLP, and AI here: (https://twitter.com/omarsar0)
41338000425f
ibelmopan
1,667
661
20,181,104
null
null
null
null
null
null
0
null
0
3acc3fd41039
2017-10-25
2017-10-25 12:46:13
2017-10-25
2017-10-25 12:48:30
1
false
en
2017-10-25
2017-10-25 12:48:30
10
1c068f0986b3
4.222642
18
2
0
You’ve probably noticed. Over the last two years, there’s been a surge in the number of articles and concerned citizens warning that…
4
Bionic Lawyers: AI, Tech & Automating your legal practice. You've probably noticed. Over the last two years, there's been a surge in the number of articles and concerned citizens warning that lawyers will soon be replaced by intelligent robots. I have a rule against robots in content on AI. But only the cheesy CGI ones. These guys are fun. With the weight of major law firms opening venture capital funds and bold new legal tech startups, it's easy to feel nervous and vulnerable about the threat of artificial intelligence (AI) as a solo or small firm lawyer. If we believe these articles, we should all drop our pens and wait for chatbots and AI with tech 3.0 names like Lisa and Ankit to replace us. The truth is that most of the headlines are just hype. It's not just me saying this. Big players, like Google's head of AI, John Giannandrea, are warning of the dangers of unwarranted hype around AI. The biggest misconception with AI and software, particularly in the legal space, is that we have developed what is called general or super intelligence, meaning that a computer system can recognize and resolve new problems that its designers had not originally built it for. We have not. This dynamic problem-solving ability remains the exclusive ability of humans, for now. Many experts say we have not even really created artificial intelligence at all, but rather "advanced automation" or "cognitive computing". Today, an AI system can operate on one specific task and improve its ability by identifying and learning from patterns it observes. This is significant. It means that software can improve over time without programmers having to write more code. It's powerful, but it's not autonomous intelligence. Because of these limitations, we only see AI applied to very specific tasks that are repeatable, predictable, and scalable. That's why AI has been deployed to tasks like sorting documents for discovery and running OCR on scanned PDFs, but not to draft a shareholders agreement or a brief. The latter is too unpredictable for engineers to design a repeatable system. THE BIONIC PRINCIPLE AND AUTOMATION I'm not going to argue that robots can never replace lawyers. I think they will, at specific tasks. But they've already been doing that for decades. Talk to a grey-haired partner and you'll hear stories of redlining documents manually at 3:00 in the morning. With a ruler, a red pen and lots of coffee. Can you imagine? That is now done automatically by computers. Thankfully. Most lawyers don't do tasks. We do projects. Like settling disputes, organizing investments, and transferring property. These are complex projects with hundreds, maybe even thousands, of "tasks". So rather than wait for the robots to take over, I propose a different future for lawyers. Over the next decade, smart lawyers will tool up with a variety of smart technologies that automate specific tasks. These systems will integrate to deliver exceptional productivity and user experiences. And yes, most of these tools will have some form of AI integrated into them. One of the core beliefs in the startup/tech world is that if we can liberate ourselves from mundane tasks, we can be more creative, more user-oriented, more empathetic, and more productive. We can identify problems and craft solutions before others. It's why Netflix beat Blockbuster. And it's why Amazon will beat Wal-Mart. Liberation is achieved by what's called the Bionic Principle: if a task can be automated by a machine, automate it. Even the smallest tasks. 
Unless the cost is egregious, buy the tool. The compounding benefits of automation and integration are exponential. Just ask Google. To help you build up your bionic practice, here are a few tools that almost any practising or in-house lawyer can implement immediately to get started: AMY Scheduling & calendar automation How much time do you or your assistant spend every month emailing back-and-forth with people to schedule meetings? Probably a lot. There's a bot for that. Amy is an AI-powered scheduling assistant that will automate those email exchanges with people to figure out a time that works for everyone. Once one is agreed upon, she will send out a calendar invite and set reminders. (www.x.ai) CLARKE Notes & task-listing for meetings It's proven. Multi-tasking is hard. Have you ever felt rushed when trying to listen to someone talk, take notes and record your tasks during a meeting? Meet Clarke, a smartphone app that listens to your meetings, automatically takes notes and lists tasks that you've been charged with during the meeting. This liberates you to listen more closely, ask more meaningful questions and build stronger relationships. (www.clarke.ai) ZAPIER Integrate your dumb apps to make them smart Zapier is a staple at startups and tech companies. It allows you to integrate almost any app you're currently using. For example, if you create a form using Typeform or Formstack, Zapier can enter the user input into a Google Doc or your CRM (e.g., Clio), even if those two apps don't connect natively. (www.zapier.com) PAPER Contract negotiation, eSign and management Don't you hate emailing Word files around, manually running redlines and asking for the latest version? Paper makes contract negotiation, eSigning and management more human by integrating powerful features like field automation, collaborative editing, automated versions and eSignature. It's used by both in-house and private practice lawyers. Disclosure: I am a founder of Paper. (www.paperlts.com) WEBMERGE Affordable document automation Low-hanging fruit to power up your templates with document automation. Combine with Zapier to create powerful automated workflows. Here's an example: if a client completes an intake web form, values from that form are merged into a template, and the completed document is emailed to you and/or your client. If the email template copies Amy, she'll even automatically schedule a meeting! (www.webmerge.me) It's really easy to stay on top of the latest technology to build your bionic practice. Schedule a monthly visit to Product Hunt to discover new software and hardware as soon as they're released. Follow the Law Hackers newsletter to receive a monthly newsletter of new legal tech startups. If you're in the GTA, join the Toronto chapter of LegalHackers to expose yourself to early ideas and technologies. This article was first published in the Ontario Bar Association's magazine Just. If you enjoyed this article, please give it a few claps 👏🏼, so others can read it. Adrian Camara is a founder of Paper, the platform to re-imagine contracts and how your teams collaborate on them.
Bionic Lawyers: AI, Tech & Automating your legal practice.
118
bionic-lawyers-ai-tech-automating-your-legal-practice-1c068f0986b3
2018-03-17
2018-03-17 20:43:14
https://medium.com/s/story/bionic-lawyers-ai-tech-automating-your-legal-practice-1c068f0986b3
false
1,066
All-in-one document workflow automation software designed for modern legal and business teams. From eSign to document assembly, Athennian has you covered. https://athennian.com
null
athennianhq
null
Athennian: Where legal work happens
hello@athennian.com
athennian-document-assembly-legal-collaboration
LEGAL,LEGALTECH,LAWYERS,TEAMWORK,STARTUP
athennian
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Adrian Camara
How can we make legal more human? CEO at athennian.com
4d303ed8a881
adriancamaragil
166
60
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-19
2018-03-19 08:22:57
2018-03-19
2018-03-19 08:24:56
1
false
en
2018-03-21
2018-03-21 12:19:59
2
1c07348f28f2
2.241509
218
0
1
Neurotechnology offers vast potential. In one futurist’s words, “Our understanding of our limitations will be shattered, and new vistas…
3
The Art of Thinking: Will the Development of Neurotechnology Force Us to Become Thought Diplomats? Neurotechnology offers vast potential. In one futurist's words, "Our understanding of our limitations will be shattered, and new vistas will open up, as we explore the possibilities that arise when we bring minds, machines, and the material world together." However, to bring these disparate elements together harmoniously means that all three components will need to "give a little", and this is where the novel notion of thought diplomacy comes in. The secret ingredient to unlocking the full potential of neurotechnology may simply lie in finding a way to meet halfway. So how do we do that when we're talking about something as innate and instinctual as thinking? Diplomacy with other human minds As humans begin to network their thought processes via brain-computer interfaces, futurists predict that we will achieve a state of "cyberthink", a form of group intelligence where the combined experience and knowledge of an integrated team of people becomes something far greater than the sum of its parts. While it's challenging to intuitively understand what that would look or feel like, it's pretty fair to say that it would lay the foundations for problems being solved very differently. Imagine having access to crowd-sourced problem solving, where ideas and strategies from a score of minds flow together seamlessly: an amazing notion. I imagine this utopia of thought would surely depend on god-level networking skills and the ability to synergize ideas with those whose minds you are networked with. In this sense, we may well all need to become thought diplomats, carefully shielding out unnecessary or unproductive thoughts while also remaining open to a frenzy of fresh and challenging ideas. But if an increased pressure to be diplomatic with our thoughts is the cost, imagine the gain. What about diplomacy with artificial intelligence? The neurotechnology industry is moving forward quickly. We will soon see devices that can be implanted in the human brain that can greatly augment our intelligence. Indeed, Elon Musk, CEO of SpaceX and Tesla, recently stated that these kinds of neurointerfaces are the best chance we have of keeping pace with artificial intelligence. Companies like Neurogress are working to make this thought diplomacy between humans and computers easier and more detailed than ever before. They're achieving this through developing software which can utilize artificial intelligence to learn to understand each individual's unique brain signals. The results are fascinating and striking. Here's how Neurogress describes the experience: "A person is tentatively asked to imagine the desired motions in mind many times, and the algorithmic image recognition systems find a match between these intentions and … the electrical activity of his/her brain. In the future, the algorithms reliably recognize signs of a person's intention … , this time through free expression of the person's intentions in the course of the movement." So what this means is that using this technology, the process of thought diplomacy becomes an amazing dance of give and take. As we endeavor to embrace harnessing our brains to communicate with technology, Neurogress' software will be ensuring that the technology in turn adapts right back. Invest in the interactive mind-controlled devices of the future by buying tokens now. Visit Neurogress.io.
The Art of Thinking: Will Development of Neurotechnology Force Us Become Thought Diplomats
266
the-art-of-thinking-will-development-of-neurotechnology-force-us-become-thought-diplomats-1c07348f28f2
2018-06-16
2018-06-16 22:04:39
https://medium.com/s/story/the-art-of-thinking-will-development-of-neurotechnology-force-us-become-thought-diplomats-1c07348f28f2
false
541
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Neurogress
THINK. SHAPE YOUR WORLD. http://Neurogress.io Ask for more information: https://t.me/neurogress
df47ccd2c097
neurogress
2,185
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-24
2018-03-24 03:24:16
2018-03-24
2018-03-24 03:26:51
1
false
en
2018-03-24
2018-03-24 07:05:01
2
1c081899297f
1.539623
1
0
0
Gartner says a combination of poor communication between IT and the business, the failure to ask the right questions or to think about the…
5
What causes the failure of Business Intelligence and Analytics at an organization? Gartner says a combination of poor communication between IT and the business, the failure to ask the right questions, and the failure to think about the real needs of the business leads to the failure of business intelligence and analytics projects. IT departments make the mistake of looking at BI as an engineering problem that requires a specific package solution. People in IT need to stop approaching BI as a vendor or engineering solution, or as a tool. They need to understand what business they are in. They are in the information and communication business. It's very important to create a 'common language' for BI and Analytics across the organization. They need to look at whether people need historical information, or information in real time, and whether people will use business intelligence to collaborate. Most of the successful projects I have been involved with have been collaborative efforts from people across various departments like marketing, sales, etc. It is important for BI and Analytics to help one and all across the organization. Typical Business Intelligence Architecture Image Source: https://www.tvp.zcu.cz/cd/2013/PDF_sbornik/42-1.pdf One of the most important aspects is information gathering. For the most effective information gathering it is important to get the right mix of people. For example, at one of the firms where I helped build a massive business intelligence and analytics solution, everyone gathered during the information gathering session knew that the Net Promoter Scores were very important for the marketing department. What none of them knew was that the NPS scores were very important even for the sales department. When a customer was happy, he would bring his friends along. A happy customer means a better NPS score, and a better NPS score means better sales. Soon enough the NPS scores were scrutinized by the sales team too! Everyone was driving for better NPS scores across the organization. BI and Analytics was the enabler! If your organization doesn't have a successful BI and Analytics strategy, you need to get the information flowing across the various departments. Collaborate and have a 'lingua franca' across all the departments of your organisation.
What causes the failure of Business Intelligence and Analytics at an organization?
1
what-causes-the-failure-of-business-intelligence-and-analytics-at-an-organization-1c081899297f
2018-03-24
2018-03-24 11:00:32
https://medium.com/s/story/what-causes-the-failure-of-business-intelligence-and-analytics-at-an-organization-1c081899297f
false
355
null
null
null
null
null
null
null
null
null
Business Intelligence
business-intelligence
Business Intelligence
4,052
Nimish Rao
Love Data Stories, AI and Machine Learning
81a2287275f7
nimishrao
42
51
20,181,104
null
null
null
null
null
null
0
null
0
62a2f19ff07e
2018-03-30
2018-03-30 23:18:39
2018-04-01
2018-04-01 23:28:00
10
false
en
2018-04-10
2018-04-10 15:16:55
1
1c08dd73e2ed
6.427358
24
3
0
The credit score is a numeric expression measuring people’s creditworthiness. The banking usually utilizes it as a method to support the…
5
Credit Scoring with Machine Learning The credit score is a numeric expression measuring people's creditworthiness. The banking industry usually uses it to support decision-making about credit applications. In this blog, I will talk about how to develop a standard scorecard, the most popular and simplest form of credit scoring, with Python (Pandas, Sklearn) to measure the creditworthiness of customers. Project Motivation Nowadays, creditworthiness is very important for everyone since it is regarded as an indicator of how dependable an individual is. In various situations, service suppliers need to evaluate customers' credit history first, and then decide whether they will provide the service or not. However, it is time-consuming to check entire personal portfolios and generate a credit report manually. Thus, the credit score was developed and applied for this purpose, because it is time-saving and easily comprehensible. The process of generating the credit score is called credit scoring. It is widely applied in many industries, especially in banking. Banks usually use it to determine who should get credit, how much credit they should receive, and which operational strategy can be taken to reduce the credit risk. Generally, it contains two main parts: Building the statistical model Applying the statistical model to assign a score to a credit application or an existing credit account Here I will introduce the most popular credit scoring method, called the scorecard. There are two main reasons why the scorecard is the most common form of credit scoring. First, it is easy to interpret for people who have no related background or experience, such as clients. Second, the development process of the scorecard is standard and widely understood, which means companies don't have to spend much money on it. A sample scorecard is shown below. I will talk about how to use it later. Figure-1 Example Scorecard Data Exploration and Feature Engineering Now I'm going to give some details about how to develop a scorecard. The data set I used here is from a Kaggle competition. The detailed information is listed in Figure-2. The first variable is the target variable, which is a binary categorical variable. The rest of the variables are the features. Figure-2 Data Dictionary After gaining an insight into the data set, I start to apply some feature engineering methods on it. First, I check each feature for missing values, and then impute the missing values with the median. Next, I do the outlier treatment. Generally, the methods used for outliers depend on the type of outlier. For example, if the outlier is due to mechanical error or problems during measurement, it can be treated as missing data. In this data set, there are some extremely large values, but they are all reasonable values. Thus, I apply top and bottom coding to deal with them. In Figure-3, you can see that after applying top coding, the distribution of the feature is more normal. Figure-3 Outlier Treatment with Top Coding According to the sample scorecard shown in Figure-1, it is obvious that each feature should be grouped into various attributes (or groups). There are some reasons for grouping the features: Gain insight into the relationship between the attributes of a feature and performance. Apply linear models to nonlinear dependencies. Develop a deeper understanding of the behaviours of risk predictors, which can help in developing better strategies for portfolio management. 
Binning is an appropriate method for this purpose. After the treatment, I assign each value to the attribute in which it should be, which also means all numeric values are converted to categorical. Here is an example of the outcome of binning. Figure-4 Grouping Feature "Age" with Binning After grouping all the features, the feature engineering is completed. The next step is to calculate the weight of evidence for each attribute and the information value for each characteristic (or feature). As mentioned before, I have used binning to convert all numeric values into categorical ones. However, we cannot fit a model with these categorical values, so we have to assign some numeric values to these groups. The purpose of the Weight of Evidence (WoE) is exactly to assign a unique value to each group of a categorical variable. The Information Value (IV) measures the predictive power of the characteristic, which is used for feature selection. The standard formulas are: WoE = ln(Distribution of Good / Distribution of Bad) for each attribute, and IV = Σ (Distribution of Good − Distribution of Bad) × WoE over all attributes of a characteristic. Here "Good" means the customer won't have a serious delinquency (the target variable is equal to 0), and "Bad" means the customer will have a serious delinquency (the target variable is equal to 1). Usually, characteristics analysis reports are produced to get WoE and IV. Here I define a function in Python to generate the reports automatically. As an example, the characteristics analysis report for "Age" is shown in Figure-5. Figure-5 Characteristics Analysis Report for "Age" Then I make a bar chart to compare the IV of all the features. In the bar chart, you can see the last two features, "NumberOfOpenCreditLinesAndLoans" and "NumberRealEstateLoansOrLines", have pretty low IV, so I choose the other eight features for model fitting. Figure-6 Predictive Power of each Characteristic Model Fitting and Scorecard Point Calculation After the feature selection, I replace the attributes with the corresponding WoE. At this point, I have the proper data set for model training. The model used for developing the scorecard is logistic regression, which is a popular model for binary classification. I apply cross validation and grid search to tune the parameters. Then, I use the test data set to check the prediction accuracy of the model. Since Kaggle doesn't provide the values of the target variable for the test set, I have to submit my result online to obtain the accuracy. To show the effect of data processing, I train the model with the raw data and with the processed data. Based on the result given by Kaggle, the accuracy is improved from 0.693956 to 0.800946 after the data processing. The final step is calculating the scorecard points for each attribute and producing the final scorecard. The score for each attribute can be calculated with the formula: Score = (β × WoE + α/n) × Factor + Offset/n Where: β — logistic regression coefficient for the characteristic that contains the given attribute α — logistic regression intercept WoE — Weight of Evidence value for the given attribute n — number of characteristics included in the model Factor, Offset — scaling parameters The first four parameters have already been calculated in the previous part. The following formulas are used for calculating Factor and Offset: Factor = pdo/ln(2), Offset = Score − (Factor × ln(Odds)). Here, pdo means the points to double the odds, and the bad rate has already been calculated in the characteristics analysis reports above. 
If a scorecard has base odds of 50:1 at 600 points and a pdo of 20 (the odds double every 20 points), the Factor and Offset would be: Factor = 20/ln(2) = 28.85, Offset = 600 − 28.85 × ln(50) = 487.14. Once all the calculations are finished, the process of developing the scorecard is done. Part of the scorecard is shown in Figure-7. Figure-7 Final Scorecard with part of Characteristics When you have new customers coming, you just need to find the correct attribute in each characteristic according to the data and get the score. The final credit score can be calculated as the sum of the scores of each characteristic. For instance, suppose the bank has a new credit card applicant aged 45, with a debt ratio of 0.5 and a monthly income of 5,000 dollars. The credit score would be: 53 + 55 + 57 = 165. To develop a more accurate scorecard, people usually have to consider more situations. For example, there are some individuals identified as "Bad" in the population whose applications are approved, while some "Good" persons will have been declined. Thus, reject inference is supposed to be involved in the development process. I don't do this part because it requires the data set of rejected cases, which I don't have in my data. If you want to know more about this part, I highly recommend reading Credit Risk Scorecards — Developing and Implementing Intelligent Credit Scoring, written by Naeem Siddiqi. If you are interested in my work or have questions about it, please feel free to contact me. In the meantime, if you want to know more about what students learn from WeCloudData's data science courses, check out this website: www.weclouddata.com
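To make the scorecard mechanics above concrete, here is a minimal sketch in Python with Pandas (the stack the article uses, though this code is not from the author's notebook). It computes the Weight of Evidence and Information Value for one binned characteristic using the standard scorecard definitions quoted above, and turns a logistic regression coefficient into scorecard points with the Score, Factor and Offset formulas. The column names, bin labels and coefficient values are hypothetical.

```python
import numpy as np
import pandas as pd

def woe_iv(df, attribute_col, target_col):
    """Characteristics analysis for one binned characteristic.

    Assumes target_col is 1 for "Bad" (serious delinquency) and 0 for "Good",
    matching the convention in the article. Returns the per-attribute report
    and the total Information Value.
    """
    grouped = df.groupby(attribute_col)[target_col].agg(total='count', bad='sum')
    grouped['good'] = grouped['total'] - grouped['bad']
    eps = 1e-6  # guard against log(0) in tiny samples
    grouped['dist_good'] = grouped['good'] / grouped['good'].sum()
    grouped['dist_bad'] = grouped['bad'] / grouped['bad'].sum()
    grouped['woe'] = np.log((grouped['dist_good'] + eps) / (grouped['dist_bad'] + eps))
    grouped['iv_part'] = (grouped['dist_good'] - grouped['dist_bad']) * grouped['woe']
    return grouped, grouped['iv_part'].sum()

def attribute_score(beta, woe, alpha, n, pdo=20, base_score=600, base_odds=50):
    """Points for one attribute: Score = (beta*WoE + alpha/n) * Factor + Offset/n."""
    factor = pdo / np.log(2)                           # e.g. 20/ln(2) = 28.85
    offset = base_score - factor * np.log(base_odds)   # e.g. 600 - 28.85*ln(50) = 487.14
    return (beta * woe + alpha / n) * factor + offset / n

# Hypothetical binned "age" characteristic with a 0/1 delinquency flag
data = pd.DataFrame({
    'age_bin':     ['<30', '<30', '30-45', '30-45', '45-60', '45-60', '60+', '60+'],
    'serious_dlq': [1, 0, 1, 0, 0, 0, 0, 1],
})
report, iv = woe_iv(data, 'age_bin', 'serious_dlq')
print(report[['good', 'bad', 'woe']], '\nIV =', round(iv, 4))
print('points for one attribute:', round(attribute_score(beta=-0.8, woe=0.35, alpha=-2.1, n=8), 1))
```

Applying attribute_score to every attribute of every characteristic and summing an applicant's attribute scores reproduces the "sum of the scores of each characteristic" step described above.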
Credit Scoring with Machine Learning
272
how-to-score-your-credit-1c08dd73e2ed
2018-05-29
2018-05-29 19:22:00
https://medium.com/s/story/how-to-score-your-credit-1c08dd73e2ed
false
1,372
Let’s do some really cool stuffs about data science!
null
null
null
Passion for Data Science
hongri208@gmail.com
henry-jia
DATA SCIENCE,PYTHON,MACHINE LEARNING,BIG DATA,DATA ANALYSIS
null
Machine Learning
machine-learning
Machine Learning
51,320
Hongri Jia
null
6f878c62f59a
hongri208
32
5
20,181,104
null
null
null
null
null
null
0
null
0
d8ca8bdc1cd3
2018-09-08
2018-09-08 00:01:13
2018-09-08
2018-09-08 00:02:06
4
false
en
2018-09-08
2018-09-08 00:19:27
4
1c0906a844b8
6.262264
5
0
0
Thoughts from team Maslo on how to build empathetic technology.
5
How To Build and Grow an AI Thoughts from team Maslo on how to build empathetic technology. At Maslo we think about technology first and foremost as an integral part of the human condition. We don't think of it as an abstract tool or some artificial construct or some mechanical utility. Technology is an extension and an expression of our humanity. At times it has been part of what reduces our humanity, and at other times part of what increases it. The best long-term approach would be to increase our humanity, while recognizing we can't always know the best ways to do that. This is why we started Maslo and why we continue to build the platform we are building today. Instead of trying to create a suite of technology functions people find useful or efficient to get work done or build something or send a message to another person or manage assets, we set out to actually figure out how to increase the empathy of technology. We want to figure out the fundamental conditions of humanity-increasing approaches to technology. The hypothesis is that if we can grow empathetic technology capability then it can be infused into every technology. Empathy to us is a more general form of understanding. Not just an intellectual, logical or reasoned understanding of the facts and an inductive, deductive connection of dots, but instead a broader shared experience. Empathy is a deeper sharing of context, timing, flow, values, and experience. Empathy is not a transactional input/output goal or utility maximization. Empathy is companionship and trust. Empathy is consistent thereness. But let's not be confused about whether we mean something strange and best left in the self-help and metaphysics section (which can be wonderful sources of inspiration, btw). No, at Maslo empathy is an important, measurable and learnable aspect of reality and animal life. In fact, it might be at the very heart of what it means to be alive (and trust us, there is no clear definition of what life is, even by leading scientists on the matter). Empathy is the measurable capacity of one entity to share the real, physical and material existence of others. Not a facsimile or a reductive summary of the experience of others, but to actually "go through" the experience of others. Empathy is an active experiencing of shared consequences. This can be measured by a variety of signals and metrics, precisely because empathetic creatures are affected by the same signals as the entities towards which they are empathetic. For example, most people tend to consider dogs as empathetic companions to humans. They observe that dogs experience their shared physical and relational conditions in coincidental ways. Dogs hear, see, smell, and feel the environment of humans; not in completely the same way, but within spitting distance of the same. Dogs' physical size is even within the human frame of reference. So while dogs are different from humans, they clearly share the world, the same spaces, and the same consequences, probably more than any other non-human creature. We can reliably measure dogs' and humans' responses to similar signals and environments, and their responses to and between each other. And so we get to the crux of how empathetic technology is possible. The technology must be first and foremost responsive to the same raw signals of the world. A technology must hear, see, touch, sense, etc., the world at similar levels (in fidelity, speed, noisiness, etc.) as humans. An empathetic technology must process those raw signals independently as well as socially with humans and other technologies. 
And, perhaps the biggest point, these signals should primarily be used for associative bindings between CONSEQUENCES and not be over-interpreted for meaning (semantic, logical, etc.). That is, if a technology is recording audio of a speaker, the specific words are only a small part of the signal. The tone, tenor, speed, intensity of the speech pattern, the physical context of the speech, the positioning of the observer, any audience present, and the operating relationship of the observer (device, etc.) are all relevant aspects of the overall raw signal. And that new raw signal must always be put in relation to other previously experienced raw and synthesized signals. What ends up being the key to empathetic intelligence and complexity is being able to notice changes in the signals as they relate to changes in the consequences of those signals. So empathetic technology must always have a flow of signals in which to consider how signals change in relation to each other. The detection of consequences emerges from the analysis of the detection of changes in future signals. For example… if many speech acts are recorded by an empathetic technology, the technology will create a consequential association between the various aspects of the speech acts. Perhaps a high-intensity speech act within a work environment tends to be associated with a low-intensity, low-speed speech act in a home setting. An empathetic technology would become aware of that by participating in emitting similar behavioral acts in accordance with similar environmental and behavioral contexts. This is not to say the empathetic technology is merely a mirror — a facsimile. The empathetic technology instead takes in raw signal in similar ways, synthesizes it according to its own history and emits signal to other observers, which in turn emit signal back. While it may at times mirror or mimic, it will necessarily have variation in its behavior by the fact that it receives variation in signals. Again, the idea is to process the same kind of signaling in the same kinds of ways, but not necessarily always be experiencing the exact same signaling environment, etc. How does this relate to our more general notions of Artificial Intelligence? No matter what school of thought one subscribes to with AI and Machine Learning, the goals almost all map to "adaptation and complex behavior". Almost everyone doing AI work desires to create systems that can learn and execute complex behaviors in complex environments. At Maslo we believe, based on evidence from psychology, neurosciences, behavioral sciences, computer science, complexity sciences and economics, that learning and complex behavior emanate from a variety of signal processing capabilities, exposure to environmental variety, and the capacity to synthesize and maintain large maps of consequential associations. However, the mere existence of learning and complex behavior is not enough for one complex creature to engage another complex creature (e.g. a human using AI). The complex creature or AI must earn trust that its learning and complex behavior is in strong coincidence or correlation with the related creature. AI is not AI if it is not believable or considered reliable. And that reliability is 100% a function of shared consequences, not a function of "being right" or "winning games". We wish AI were actually not an abbreviation of Artificial Intelligence. We believe that all researchers and technologists would bear more fruit working in EC, or Empathetic Complexity. 
But part of being empathetic is sharing signals and the consequences of those signals, so at Maslo we're fine absorbing the vernacular as long as it ECs — effectively communicates. But for those going deep into this journey with us… we are most definitely focused on growing a technology of empathetic complexity, and in doing so we will achieve every aim of AI. A Note on Maslo Technology For the more deep-tech-focused folks, we will be sharing the details of our platform as it becomes coherent. For now it may be of use to note that we are using the following signal processing techniques and pipelines: CoreML for on-device, real-time processing of visual signals, such as face gestures. Google Cloud NLP for quick speech processing. Wolfram Language and Cloud for audio processing, speech recognition, semantic analysis, and custom visualization. Custom Python processors for metadata from devices about geolocations, device features, etc. We have also developed a suite of UI functionality to emit complex signals in the form of visual and audio gestures as well as voice and text prompts, which we'll share more about in later posts. Our platform has a core set of signal processing and a data repository across all Maslo AIs, and each individual user has a specific AI adapting at its own pace with them. The core serves much like a cultural and genetic core reference but is not normative. It should be viewed much more like a slower-moving AI than the individual AIs. You can read more of our conceptual underpinnings in our various design documents and brainstorming mind maps that we'll share. We also strongly recommend several texts: The Farther Reaches of Human Nature // Abraham Maslow Thinking, Fast and Slow // Daniel Kahneman The Pencil: A History of Design and Circumstance // Henry Petroski Science And Human Behavior // B.F. Skinner Maslo is currently in beta. Get it on the App Store.
How To Build and Grow an AI
133
how-to-build-and-grow-an-ai-1c0906a844b8
2018-09-08
2018-09-08 00:19:27
https://medium.com/s/story/how-to-build-and-grow-an-ai-1c0906a844b8
false
1,474
You want to be your best self. We build technology to help you get there. Tips from friends at Maslo.
null
heymaslo
null
Your Virtual Self
founders@maslo.ai
maslo
ARTIFICIAL INTELLIGENCE,PSYCHOLOGY,FUTURE TECHNOLOGY,STARTUP
heymaslo
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Russell Foltz-Smith
I be doing stuff. and other stuff. More stuff. http://www.worksonbecoming.com/about/ I believe in infinite regression of doing stuff.
b38bb037365d
un1crom
241
468
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-29
2018-07-29 01:54:32
2018-07-29
2018-07-29 02:04:07
1
false
en
2018-07-29
2018-07-29 02:04:07
0
1c097aeb1fb2
1.381132
1
0
0
More specifically, mind transmission.
5
Mind Reading More specifically, mind transmission. They haven't been many, but I can recount a number of situations where I've wanted to transmit a thought to someone in the middle of a meeting. If we had all been remote in the meeting, I'd have been able to send a quick message via instant messenger, but in a live, face-to-face meeting, without resorting to the obvious of taking out my phone and texting the individual, this is not easily done. The situation might arise if I want the individual who's speaking to disclose information in a certain way, stop what they're saying, or remind them of something without tipping off the others in the conversation. Those with a very high emotional quotient who are also paying attention to their surroundings are able to pick this up, but what could be some tools to enhance this for others who aren't as adept? First, what are the signs that someone is trying to transmit a thought to you? Their gaze is fixed on you Their eyelids are wide open Their pupils are dilated Their jaw is usually locked Their eyebrows are furrowed Their eye movements may be signalling to you Their hands might be moving So what could be some mechanisms to detect this? If there were cameras in the room, that's one way. If you had on you a camera like a Google Glass that could inform you If your phone could see them and was detecting some stress from them And how could you be informed? Maybe a text message? If you're on Google Glass, a flashing notification to let you know a thought is incoming Maybe your phone could ring to buy time for a distraction AI can detect signs that we might have missed, but allowing it to augment our interactions with others means building the applications that use this information.
Mind Reading
14
mind-reading-1c097aeb1fb2
2018-07-29
2018-07-29 02:04:07
https://medium.com/s/story/mind-reading-1c097aeb1fb2
false
313
null
null
null
null
null
null
null
null
null
Wearables
wearables
Wearables
5,492
Leor Grebler, UCIC
CEO of UCIC — The Voice of AI — making hardware products come alive with voice interaction. Proofs of concept, prototypes, and tools for integration of voice.
136fa39ffeba
Grebler
3,566
359
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-03
2017-10-03 02:28:42
2017-10-03
2017-10-03 02:43:29
0
false
en
2017-10-03
2017-10-03 17:11:03
0
1c0c86c2577b
1.633962
5
0
0
What is Machine Learning??
2
Introduction to Machine Learning What is Machine Learning?? A computer program that helps a system learn from existing data, historical data, or past experience (E), so that it can improve its performance (P) at performing a task (T); or, in layman's terms, it is the extraction of knowledge from (past/historical) data. Machine Learning has 3 categories: 1. Supervised Learning 2. Unsupervised Learning & 3. Reinforcement Learning What is 'Supervised Learning'?? In simple terms, Supervised Learning is when the system is trained on well-labelled data, i.e. the dependent variable/feature is well defined. Ex: A housing data set where all the features of the house are given and the type of the house is also given, like Duplex, Semi, Detached, etc. What is 'Unsupervised Learning'?? In simple terms, Unsupervised Learning is when the system is trained on unlabelled data, i.e. the dependent variable/feature is not defined. Ex: A housing data set where all the features of the house are given and the type of the house is NOT given. What is 'Reinforcement Learning'?? In simple terms, Reinforcement Learning is when the system automatically decides on the action to be taken under certain conditions in order to improve its performance. Ex: An online chess game. Common Machine Learning Problems: Two of the most common problems solved by machine learning are 1. Classification and 2. Regression problems. What is a Classification problem? For a given dataset, we are supposed to build an algorithm which classifies the data into the categories mentioned. Ex: Will an individual be approved for a loan or not? Will an individual be a defaulter or not? Is a particular transaction fraudulent or not? (this is called imbalanced data*) Is an individual a diabetic or not? (this is called imbalanced data*) *I will write a separate blog on how to deal with this kind of problem. There are two types of classification problems: (1) Binary Classification (either 0 or 1) (2) Multi-class Classification (A or B or C, etc.) Types of Classification Algorithms: 1. Logistic Regression 2. Support Vector Machines 3. K-Nearest Neighbours 4. Naive Bayes 5. Decision Tree 6. Random Forest What is a Regression problem? For a given dataset, we are supposed to predict a continuous value using the features provided. Ex: Predicting the final sale price of a house given its various features. Predicting the cost of a project based on various features. Types of Regression Algorithms: 1. Linear Regression 2. Support Vector Regression 3. Decision Tree Regression 4. Random Forest Regression Of late there is another set of powerful Gradient Boosting algorithms that are becoming very popular. They are GBM, XGBoost & LightGBM.
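As a companion to the lists above, here is a minimal, self-contained sketch (not from the original post) showing one classification algorithm and one regression algorithm from those lists in scikit-learn; it uses synthetic data so it runs without any download, and all sizes and parameters are illustrative choices.

```python
# Minimal supervised-learning sketch: one classifier, one regressor, on synthetic data.
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: labelled data with a binary target (e.g. loan approved or not)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: predicting a continuous value (e.g. the sale price of a house)
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("regression R^2:", reg.score(X_te, y_te))
```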
Introduction to Machine Learning
11
introduction-to-machine-learning-1c0c86c2577b
2018-05-01
2018-05-01 04:29:59
https://medium.com/s/story/introduction-to-machine-learning-1c0c86c2577b
false
433
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Abhilash Marecharla
null
6dc75b2c57be
mymlprojects
3
1
20,181,104
null
null
null
null
null
null
0
null
0
284538178f0a
2018-09-20
2018-09-20 20:57:11
2018-09-20
2018-09-20 21:54:44
0
false
en
2018-09-29
2018-09-29 17:25:27
0
1c0d29506b81
0.845283
0
1
0
I think my role on an AI team would be a writing or product management role. While I do enjoy research and collecting data from various…
4
My Role on an AI Team I think my role on an AI team would be a writing or product management role. While I do enjoy research and collecting data from various places, I think my communication and writing skills are stronger. I really enjoy looking at local city data and thinking of ways to improve on infrastructure, politics and health/nutritional concerns. I am a journalism major and I could definitely see AI aiding in compiling and organizing data to help produce more reliable statistics to input into stories. I think I can definitely help my team by creatively thinking of new ideas, as well as playing devil's advocate to make sure we're coming up with a useful product. I also think I will contribute to the literature portion of the project. I think there are many individuals I could learn from in this position. There are a lot of people who are critical thinkers that can effectively communicate ideas to a team, as well as apply their journalistic abilities in writing for a product. I'm inspired when I converse with other people that have different talents than mine because it helps me expand my way of thinking by seeing how others solve problems. My only worry is that I won't be able to open my mind as wide as I may need to.
My Role on an AI Team
0
write-a-medium-post-describing-what-you-envision-your-role-to-be-should-you-end-up-working-on-an-ai-1c0d29506b81
2018-09-29
2018-09-29 17:25:27
https://medium.com/s/story/write-a-medium-post-describing-what-you-envision-your-role-to-be-should-you-end-up-working-on-an-ai-1c0d29506b81
false
224
Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu
null
utsdct
null
Advanced Design for Artificial Intelligence
cid@austin.utexas.edu
advanced-design-for-ai
ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS
utsdct
Product Management
product-management
Product Management
25,668
Blaise Compton
Journalism senior at UT Austin
17f1b451736c
blaise.compton
2
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-08
2018-09-08 02:39:32
2018-09-08
2018-09-08 04:36:34
7
false
th
2018-09-24
2018-09-24 11:03:49
0
1c0e411a40a2
1.770755
2
0
0
Supervised Learning, or learning with a teacher, is a branch of AI (artificial intelligence) under the topic of Machine Learning…
5
What is Supervised Learning? How does it work? Supervised Learning, or learning with a teacher, is a branch of AI (artificial intelligence) under the topic of Machine Learning that is currently popular in study and research, because it is easy to do and low cost: with just a single computer you can study it and get visible results. In fact, this field has been around for a long time, proposed by Arthur Samuel back in 1959, but the technology and processing power of that era were still primitive, so it was nowhere near as popular as it is today. Supervised Learning is a small branch of AI under the topic of Machine Learning. Machine Learning, the learning done by machines or computers, is divided into 3 broad categories: Supervised Learning, which we will discuss in this article; Unsupervised Learning, or learning without a teacher; and Reinforcement Learning, or learning through rewards. Supervised Learning can be compared to teaching a child. Imagine pointing out animals to a child who has never seen them and saying which animal is a cat and which is not. Do this for 2–3 days, letting the child see many kinds of animals, until the child starts to understand. On days 4–5 we might show the child 10 cats they have never seen before, mixed in with a number of other animals, and this time we do not say which is a cat and which is not. If the child answers correctly, our teaching was effective. In the same way, if we instead teach the child whether each animal they see is a cat, a dog, or a pig, the child can answer with more than just "cat" or "not cat", although this may require a more complex teaching process. We call both of these ways of teaching the child Classification, with results like the images below. [Image: the result of teaching the child with a simple Classification] [Image: the result of teaching the child with a more complex Classification] The next day, we bring in another child to teach about diamond prices. We pick up one diamond, 2 carats, yellow, clarity VS2, and tell the child it costs 2 million baht. We pick up another, 3 carats, blue, clarity VS1, and tell the child 3 million baht. We do this with many diamonds until the child forms a model in their head for estimating diamond prices, so that one day, when we pick up a new diamond at random, the child can estimate its price straight away. We call this way of teaching the child Regression. The principle of Supervised Learning can be applied to solve problems in these 2 forms. Now, what if we want the computer to tell us what kind of animal is in a picture? In traditional programming, we take the logic or model we have designed ourselves and write a program that produces an output from the input it receives. Compared with the child-teaching example above, the input would be pictures of various animals and the output would be the answer of whether the picture shows a cat or some other animal. Writing a program like this is hard, almost impossible, because of the complexity of the model or logic we would have to design ourselves to distinguish pictures of different animals. [Image: Traditional Programming] Using Supervised Learning changes the way we program. In the first stage, we write a program that lets the computer build the model or logic of the program from the inputs and outputs we want, just like teaching the child. Then we put that model to use. The more numerous and varied the inputs and outputs we have, the better the "chance" that we get a more accurate model. This model-building process is called "training", and it can take anywhere from seconds to many days depending on the complexity of the problem we want to solve. [Image: the training process that produces the model we want] Once we have the model we want, we apply it in our program. You can see that the Supervised Learning process is more complex than traditional programming, but its advantage is that it makes the impossible possible. If we think back 10–20 years, making a computer smart enough to distinguish objects was still just a dream.
But today we can build this kind of model ourselves on a desktop computer at home, because the relevant technologies have become so much more advanced. Problems that used to be hard and unsolvable can now be solved with a much more reasonable amount of resources. By: Phuri Chalermkiatsakul eX
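To make the train-then-predict workflow above concrete, here is a small illustrative sketch in Python (not from the original article) that mirrors the diamond-price example as a regression problem; the prices, features, and encoding are invented for demonstration.

# pip install scikit-learn
from sklearn.linear_model import LinearRegression

# Invented training data: [carat, clarity grade encoded as a number]
# and prices in millions of baht, following the article's example.
X_train = [[2.0, 2], [3.0, 1], [1.0, 3], [2.5, 1], [1.5, 2], [3.5, 1]]
y_train = [2.0, 3.0, 0.8, 2.6, 1.4, 3.6]

# "Training" builds the model from example inputs and outputs,
# instead of us hand-coding the pricing logic ourselves.
model = LinearRegression()
model.fit(X_train, y_train)

# Once trained, the model can estimate the price of a diamond it has never seen.
new_diamond = [[2.8, 2]]
print("Estimated price (million baht):", model.predict(new_diamond)[0])

The classification version (cat vs. not cat) follows the same fit/predict pattern, just with class labels instead of continuous prices; in practice image inputs would need feature extraction or a neural network rather than two hand-picked numbers.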
What is Supervised Learning? How does it work?
7
supervised-learning-คืออะไร-ทำงานยังไง-1c0e411a40a2
2018-09-24
2018-09-24 11:03:49
https://medium.com/s/story/supervised-learning-คืออะไร-ทำงานยังไง-1c0e411a40a2
false
191
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Phuri Chalermkiatsakul
a data architect existing to exponentiate things
710a7a62fda2
every.phu
6
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-25
2018-02-25 23:29:49
2018-02-25
2018-02-25 23:31:32
1
false
es
2018-02-25
2018-02-25 23:37:48
1
1c0f8d9089c2
4.762264
1
0
0
Marcelo Rinesi is a freelance data scientist and technologist. He is CTO of the Institute for Ethics and Emergent Technologies, co-founder of the…
5
Marcelo Rinesi: "What counts as intellectual work is a frontier that moves over time" Marcelo Rinesi is a freelance data scientist and technologist. He is CTO of the Institute for Ethics and Emergent Technologies, co-founder of the art + data group Datasthesia, and a member of Instituto Baikal. He has been published or quoted on the ethics, politics, and technology of artificial intelligence, gamification, and other emerging technologies in publications such as Forbes, Wired, Rolling Stone, and Fast Company. I was lucky enough to ask Marcelo a few questions about work, artificial intelligence, and robotics. The answers he gave me are fascinating and quite long, so this time I decided to include only those that focus on artificial intelligence. You can listen to the interview here. Which jobs will inevitably be done by robots? In the long run I don't see any intrinsic limits in physics or engineering to what a robot can or cannot do. What will contextually stay safe from robots is whatever a society wants or prefers a human to do, because if a robot did it, it would lose its charm. Sport, for example, is one such case. The same goes for service jobs where, up to a point, the fact that a person is doing the job is part of what you are paying for. You are paying for status in a certain sense; otherwise there would be no waiters in restaurants: you would stand up, grab your food, and sit down. And things like administration, politics, or art, where, again up to a point, much could be done by a machine, but for one reason or another, social and sometimes even unconscious, we prefer a person to do it. Perhaps the most important long-term effect of driverless cars is precisely that they are the first case of such an activity where the change happens not in a warehouse but in the real world. If you can build a system that is more or less safe and useful in that sense, you can start to have robots everywhere. It can also change all the fields where it is still not very clear what doing it well and doing it badly mean. We often prefer to hand those to humans not so much because they do them better, but because it is nice (or we want it to be nice) to be able to blame someone. As robot performance improves, that will erode. Perhaps for the next few decades I don't imagine robots that are better than surgeons in every respect, but it is inevitable that it will happen sooner or later. You will have societies that want a person in charge for social, cultural, or political reasons. And others that won't. Can artificial intelligence replace intellectual work? If so, which kinds? Any activity (even an intellectual one) that we know well how to do can be programmed. In fact, that is one possible definition of programming. What counts as intellectual work in that sense is a frontier that moves over time. There are many things that are automatable today that at some point were intellectual activities bordering on the artistic (mathematical formulas, pattern searches, etc.). Even what we today call neural networks is basically self-programming by giving many examples instead of programming by hand. In that sense the answer is yes and no. Yes, because many things we call intellectual work, perhaps eventually all the ones we know today, can be done by artificial intelligences sooner or later.
And no, because as soon as a computer can do something, we move the frontier of what counts as intellectual work. Is there a way to exercise control over how artificial intelligence works? What about artificial intelligence that "learns"? Controlling an artificial intelligence is easier than controlling a person. It is simpler, it is more transparent, you built it, and it is far more monitorable, at every level. People learn too, but artificial intelligence learns in a very specific sense, in the particular domain you designed it to learn about. An example: your camera-monitoring program that learns over time which patterns predict a possible act of vandalism. It will learn about that; it will not spontaneously learn politics from someone who talks about politics on the corner every day. That is science fiction, fantasy. So we can keep everything machines learn under our control. Now then: how do we control the potential consequences of the tasks they carry out? The problem of controlling artificial intelligence is more a matter of legislation and industrial-control policy than something existential or more sophisticated. Is it hard? Yes. Is it sophisticated? Yes. But it is not philosophically that complicated. If it is framed as complicated, it is for hand-washing reasons. Let's develop the example of lethal autonomous weapons to understand this: there is a machine gun connected to an artificial intelligence camera on a post at a border. A lost little boy walks toward the border. The camera believes he is a smuggler and warns him to back away. The boy does not understand, does not hear, is terrified. The weapon shoots and kills him. It is an extreme case, but it can happen if you put cameras everywhere. At that moment we can talk about the dangers of artificial intelligence, what Asimov called the Frankenstein effect. But it is a lethal case of industrial failure. A company built and designed that weapon and then sold it, probably making reliability promises it did not deserve to make. It is not that it is impossible to certify, to prepare, to analyze. One can make predictions about it, one can prevent. Perhaps not with 100% certainty, but the more money you put into prevention and the less quickly you want to sell something, the safer it will be. A government bought that weapon and put it there knowing it was taking a risk. If it was deceived by the company, then why didn't it put its own verification and control system in place? Did it take the risk deliberately? If it is a democracy, there is a society that elected the government that decided to put the weapon there. The ethical responsibilities are human. The concept of autonomy is still, and will be for quite a while, metaphorical. The autonomy of a self-driving car is not the autonomy of a person. The autonomy of a self-driving car is the autonomy of an industrial robot we let loose on the street, and if it kills someone, it killed them through bad design. Every technology has its risks (even today's cars). Every society explicitly or implicitly decides the level of risk it runs. If a boiler explodes, it is not the boiler's responsibility; it is a design problem, malpractice, or part of the risk we accepted. It will be the same with more advanced technologies.
The problem may lie with the technology, but the moral responsibility belongs to the company that made it, the person who bought it, or the government that certified it.
Marcelo Rinesi: "What counts as intellectual work is a frontier that moves over time"
5
marcelo-rinesi-qué-es-trabajo-intelectual-es-una-frontera-que-se-mueve-con-el-tiempo-1c0f8d9089c2
2018-02-26
2018-02-26 10:01:05
https://medium.com/s/story/marcelo-rinesi-qué-es-trabajo-intelectual-es-una-frontera-que-se-mueve-con-el-tiempo-1c0f8d9089c2
false
1,209
null
null
null
null
null
null
null
null
null
Inteligencia Artificial
inteligencia-artificial
Inteligencia Artificial
1,614
Alan Porcel
Dr. Melfi once replied to me.
96790e62896a
PorcelAlan
39
238
20,181,104
null
null
null
null
null
null
0
# used for SDMX queries
library(rsdmx)
# used for plotting data
library(ggplot2)
# used as a very good substitution of data frames
library(data.table)
# used for dates manipulation
library(zoo)
# used to fetch macro data from various public sources
library(pdfetch)

schema_url <- "http://ec.europa.eu/eurostat/SDMX/diss-web/rest/datastructure/ESTAT/DSD_prc_hicp_manr"
dsd <- readSDMX(schema_url)

dsd <- pdfetch_EUROSTAT_DSD("prc_hicp_manr")

codelists <- dsd@codelists@codelists
dimensions <- sapply(codelists, function(x) x@id)
> dimensions
[1] "CL_COICOP" "CL_FREQ" "CL_GEO" "CL_OBS_FLAG" "CL_OBS_STATUS" "CL_UNIT"

# creating a data table from the SDMX object to list all values for
# the codelist id = CL_GEO
geo_descr <- data.table(as.data.frame(dsd@codelists, codelistId = "CL_GEO"))
# delete the second and third column to make it more neat
geo_descr <- geo_descr[, -c(2:3)]
# change the names of the columns
setnames(geo_descr, c("GEO", "GEO_DESCR"))

# put all countries from the geo_descr table in one string which we
# will use later on in the query. Notice that the separator is a +
# symbol as this is required by the query syntax
string_geo <- paste(geo_descr$GEO, collapse = "+")
# delete all aggregate indices and non-EU countries
string_geo <- substring(string_geo, nchar("EU+EU28+EU27_2019+EA+EA19+EA18+") + 1,
                        nchar(string_geo) - nchar("+UK+EEA+IS+NO+CH+MK+RS+TR+US"))
> string_geo
[1] "BE+BG+CZ+DK+DE+EE+IE+EL+ES+FR+HR+IT+CY+LV+LT+LU+HU+MT+NL+AT+PL+PT+RO+SI+SK+FI+SE"

url <- paste("http://ec.europa.eu/eurostat/SDMX/diss-web/rest/data/prc_hicp_manr/M.RCH_A.CP00.",
             string_geo, "/?startPeriod=2007", sep = "")
# create sdmx object
sdmx <- readSDMX(url)
# convert to data.table
stats <- data.table(as.data.frame(sdmx))
# add the description of each country by merging with geo_descr
data <- merge(stats, geo_descr, by = "GEO")
# convert the datetime in order to be more friendly to R and ggplot2
data[, Date := as.Date(as.yearmon(obsTime))]

data_quantiles <- data[, list(
  quant.0 = quantile(obsValue, probs = 0, na.rm = T, names = T)[[1]],
  quant.25 = quantile(obsValue, probs = 0.25, na.rm = T, names = T)[[1]],
  quant.50 = quantile(obsValue, probs = 0.50, na.rm = T, names = T)[[1]],
  quant.75 = quantile(obsValue, probs = 0.75, na.rm = T, names = T)[[1]],
  quant.100 = quantile(obsValue, probs = 1, na.rm = T, names = T)[[1]]),
  by = list(Date)]

HICP <- ggplot(data = data_quantiles, aes(x = Date)) +
  # sets data_quantiles as the dataset to use and the date as the
  # x axis
  geom_ribbon(aes(ymin = quant.0, ymax = quant.25, fill = "0% - 25%"), alpha = 0.3) +
  geom_ribbon(aes(ymin = quant.25, ymax = quant.50, fill = "25% - 50%"), alpha = 0.3) +
  geom_ribbon(aes(ymin = quant.50, ymax = quant.75, fill = "50% - 75%"), alpha = 0.3) +
  geom_ribbon(aes(ymin = quant.75, ymax = quant.100, fill = "75% - 100%"), alpha = 0.3) +
  # the four lines above create filled in areas to visualize the
  # quantile distribution, while the alpha sets the graph objects
  # semi-transparent
  geom_line(data = data[GEO == "BG"], aes(x = Date, y = obsValue, fill = "Bulgaria"),
            size = 1, alpha = 0.7) +
  # create a line for Bulgaria in order to compare with the quantile
  # distribution
  scale_fill_manual(values = c("#01A9DB", "#086A87", "#086A87", "#01A9DB", "black")) +
  # set the colours for the quantiles and the country line
  # here the two middle parts (between 0.25 and 0.75 quantiles of the
  # distribution) will have the same darker colour
  geom_hline(aes(yintercept = 0), colour = "red", size = 1, alpha = 0.5) +
  theme_bw() +
  theme(panel.grid.major = element_line(size = 0.3, colour = "grey92")) +
  guides(fill = guide_legend(title = NULL)) +
  ylab("Annual % Change") +
  theme(axis.title.x = element_blank(), panel.border = element_blank()) +
  ggtitle("HICP") +
  theme(plot.title = element_text(hjust = 0.5))
# adjust the colours and appearance of the axis, gridlines, set the
# title of the y axis and the graph itself and their position
# also set a red line for y axis = 0

# create a list of all countries (I will not bother removing the
# aggregates and the non-EU countries here)
list_geo <- as.list(geo_descr$GEO)
# download the data
data_pdfetch <- data.table(pdfetch_EUROSTAT("prc_hicp_manr", FREQ = "M", UNIT = "RCH_A",
                                            COICOP = "CP00", GEO = list_geo,
                                            from = "2007-01-01"))
36
6009f386b4fa
2018-07-05
2018-07-05 08:41:23
2018-07-23
2018-07-23 11:31:01
2
false
en
2018-10-20
2018-10-20 17:51:32
7
1c1108a348f2
9.100314
1
0
0
If you work in the field of macroeconomics and you follow the European economy closely, you probably use the Eurostat database quite often…
5
A neat way to fetch, analyze and visualize data from Eurostat in R If you work in the field of macroeconomics and you follow the European economy closely, you probably use the Eurostat database quite often. If you need to prepare monthly or quarterly reports on certain macroeconomic indicators with data you download from a web source like Eurostat, sooner or later you realize the necessity for some degree of automation in order to avoid spending too much time on repeated manual tasks. In this post I will present a neat way to fetch, transform and visualize data from Eurostat in R. An ex-colleague of mine showed me this way of automating things before he left our workplace for another job. Eventually, I had to maintain it, which led me to develop it further and think of new ways to optimize it. In this post I will show you how to download Eurostat data for the HICP annual percent change by using an SDMX query and then create a quantile graph, which visualizes how a particular country compares to its EU peers. There is a step-by-step guide on the Eurostat website on how to construct the SDMX queries as well as an actual query builder. However, it is much more interesting to automate this process yourself. The pdfetch package is made for downloading economic and financial data from various sources like Eurostat, the World Bank, the ECB, Yahoo Finance, and the US Bureau of Labor Statistics, among many others. It is quite useful in general. However, here I will mostly rely on the rsdmx package for reasons you will understand further down. Before I continue I will list the packages you will need in order to execute the code I will provide further down: rsdmx, pdfetch, data.table, ggplot2, zoo. Needless to say you will also need some sort of IDE to work with R. My personal preference is RStudio. There is an easy way to download all these packages from the package manager in RStudio. Otherwise you can just write install.packages("ggplot2") in the console for each package separately. Once you install all necessary packages you can load them in the following way: The next step would be to load the SDMX data structure for the HICP annual percent change through the respective link. By changing the last part of the link you can find the schema for each indicator in the database. In the database section of the Eurostat website each indicator has an alias. See the image below: For the annual rate of change of the HICP one should use prc_hicp_manr. Therefore, the link for the data structure definition or dsd for this indicator would be: To read the schema you can use the readSDMX function from the rsdmx package. After running the command above you will have the dsd SDMX object loaded, which you can look through. It is not a very friendly data structure to go through, but we will soon transform parts of it into a data table. An alternative way to get the data structure of a particular time series is using the pdfetch package: However, this will not really give you a useful object to work with and use later on. It is more useful for getting an overall impression of the data structure and the descriptions of the various dimensions of the indicator. Next, we can extract the various codelists used for each indicator, which are essentially the values used for each dimension of the indicator (e.g. frequency: monthly, quarterly, annually). Thus, we will know how many dimensions we have to specify for each indicator and what values they can carry.
The @ symbol is used to access subsets of the SDMX data object (quite similarly to $ for data frames). With the code above we get all the separate ids of the codelists. The object dimensions gives us the following: To clarify what each of these means: CL_COICOP is the part of the HICP index you want to use. You are either interested in the overall index for all products or, for example, the sub-index for food products. Each level of the index has a unique identifier. CL_FREQ is the frequency of the indicator, which can be monthly, quarterly, annual, etc. Here we are interested in the monthly frequency. CL_GEO is the country or the selection of countries we want to download the data for. CL_OBS_FLAG is a flag that is given to certain observations. For example it can be "Preliminary data" or a "Forecast", which should let you know that certain observations will go through revisions and are not final. CL_OBS_STATUS is a status of whether the observation exists or not. It can be for example NA for observations for which there is still no data. CL_UNIT is the unit, which can be for example an index 2015 = 100 or, in our case, annual percent change. Needless to say, each macroeconomic indicator will have a different set of these. For example the HICP does not have a seasonal adjustment option, but the GDP figures will most certainly have it. To extract all options for a certain codelist (apart from using pdfetch as described above) you can extract it to a data frame and/or data table with the following code: This will produce a neat data table (I prefer data table to data frame because it seems to be better optimized for many operations) with all country codes and their descriptions. You cannot transform the SDMX object directly to a data table and this is why you need an intermediate step of transforming it to a data frame. You can do the same with any of the listed dimensions from above. You just need to change the codelistId in the first line of the code above. We will actually need this geo_descr table in order to select which countries we want to extract the HICP index for. I want to extract it for all 27 countries of the EU (UK not included) so I need to remove all aggregate indices (EA, EU-27, etc.) and all non-EU countries like the USA and Turkey. We get this with the following script. This gives us a string with the 27 EU countries separated by a plus (as this is what the query syntax requires): Next we create the query url in the following way: Notice that after the initial generic part of the link we have the alias of the indicator (prc_hicp_manr) and then, one by one, each of the dimensions. M stands for Monthly for CL_FREQ. RCH_A stands for annual percent change for CL_UNIT. CP00 is the COICOP classification for the overall HICP index. After that we add the country list we created before. At the end we add the start period for the query, namely 2007. To find out the correct order for the different values of the dimensions in the query url, you can either look at dsd -> datastructures -> datastructures -> [[1]] -> components -> dimensions (from the object preview in RStudio) or at the output of the pdfetch_EUROSTAT_DSD command mentioned above. The former will give you a list of the required dimensions ordered by the way they have to appear in the url. The latter will give you all the codelist values for each dimension in the order in which they have to be used in the url.
Next we will download the data: Notice that, given the way the data is downloaded, this will create a stacked data table sorted by country. Next I will get the quantile distribution of the annual percent change of HICP across EU countries: The code above calculates the quantiles for every month in the data object (notice by = list(Date)). Thus, we have everything necessary to create the graph with ggplot2. We do that with the following code: This creates the following graph object: This graph tells us a lot about the dynamics of inflation in Bulgaria compared to the rest of the EU. During the 2007–2009 pre-crisis period we see that Bulgaria had one of the highest annual inflation rates in the EU. Perhaps the pre-crisis growth in incomes drove nominal price convergence. If you are interested, you can find more about the topic of inflation dynamics in Bulgaria in this publication I co-authored. Later on we see a period between the end of 2013 and the end of 2017 when the inflation rate was negative and one of the lowest in the EU. However, average inflation in the EU was also very low and around zero during this period. The reasons for this were probably the low prices of commodities and the low growth environment. We see how inflation picks up in the past two years as commodity prices normalize and economic growth picks up too. There is an ECB paper I contributed to discussing the low inflation in the euro area in the years after the global financial crisis. With ggplot2 you can easily create multiple such graphs for various indicators. If you want to export them to a pdf file or a presentation you can consider using the multiplot function for ggplot2, which can align them nicely on one page. As I mentioned before, you can download time series from Eurostat and many other sources with the pdfetch package as well. You can use the package to download data from Eurostat in the following way: This will create a neat data table with the observations, but without an additional column for the dates. This means you have to add it manually. This is why I prefer the method using the rsdmx package even though it might look a bit more convoluted. This is just a relatively simple example of what one can do in R in terms of automating macroeconomic reporting. Using rsdmx and/or pdfetch one can automate the data download of a wide scope of indicators and use R to analyze the data and visualize it in a presentable way. One should keep in mind that every now and then Eurostat changes the data structures and some slight changes might be required for the script to keep working. Also, if you are not familiar with R, keep in mind that SDMX queries can be done in Excel as well. I hope you found this walk-through useful. Feel welcome to ask a question or start a discussion.
A neat way to fetch, analyze and visualize data from Eurostat in R
1
a-neat-way-to-fetch-analyze-and-visualize-data-from-eurostat-1c1108a348f2
2018-10-20
2018-10-20 17:51:32
https://medium.com/s/story/a-neat-way-to-fetch-analyze-and-visualize-data-from-eurostat-1c1108a348f2
false
2,310
A casual blog about economics, risk modelling and data science
null
null
null
Casual Inference
yanchev.mihail@gmail.com
casual-inference
null
null
Analytics
analytics
Analytics
15,193
Mihail Yanchev
Expert in Economics, Risk Modelling and Data Science
45e4e7a7506e
yanchev.mihail
6
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-16
2017-10-16 13:17:22
2017-10-16
2017-10-16 13:39:27
1
false
en
2017-10-16
2017-10-16 13:39:27
1
1c11120fbe30
0.901887
3
0
0
Data 4 Black Lives inspires me enough to take a weekend away from studying to engage in hopefully useful dialogue on how to use data to…
5
There has to be a better way to tackle the social injustice still alive in our country. Between policies that do not work and policies that create worse problems, a better method needs to be used. I believe data science can be a tool to highlight the many sources of the American problem and also highlight the lack of effectiveness of common "solutions" so that they are never repeated. Data 4 Black Lives inspires me enough to take a weekend away from studying to engage in hopefully useful dialogue on how to use data to help black lives. Also, this event will serve as an opportunity to build relationships with like-minded people who are trying to make a difference in this American problem using data science. I am excited that this exists, and I am also excited that we are showing that we can solve our own problems. Too much energy has been used on efforts to obtain seats at a table where decisions are being made. Maybe it is finally time to empower ourselves and be the saviors we have been waiting for. http://d4bl.org/conference.html
There has to be a better way to tackle the social injustice still alive in our country.
3
there-has-to-be-a-better-way-to-tackle-the-social-injustice-still-alive-in-our-country-1c11120fbe30
2017-10-16
2017-10-16 14:01:09
https://medium.com/s/story/there-has-to-be-a-better-way-to-tackle-the-social-injustice-still-alive-in-our-country-1c11120fbe30
false
186
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Chukwudi Uraih
null
afa0630a55db
choodaque
15
26
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-16
2018-05-16 08:56:25
2018-05-16
2018-05-16 12:29:11
10
false
en
2018-06-01
2018-06-01 16:15:43
9
1c118f017c69
3.548113
11
0
0
Dear Alphacats!
5
Alphacat Report (May 1–15) Dear Alphacats! As part of our efforts to be transparent and communicate regularly with our community, we are pleased to share this mid-month report, which includes our progress during these last two weeks and our outlook for the future. Community Alphacat Community By the middle of May, the effectiveness of Alphacat's global community outreach had grown, and the number of community users increased significantly. The number of Twitter users increased from 5,299 to 6,900, a growth rate of 30.21%. Global Telegram users increased from 5,711 to 6,720, an increase of 17.67%. The number of Facebook users also increased significantly, from 2,401 to 2,831, an increase of 17.91%. New Website Progress The design, research and development of the new website are complete. Some details are being fine-tuned, and it will launch soon. Stay tuned! 2. Alphacat's Whitepaper has been updated to version 1.5.0, and will be released on our new official website. 3. A new introduction video will provide a clear introduction to Alphacat's features, technical advantages, and our core team members. Exchange 1. HitBTC: the technical connections are complete. The exact date of the listing will be announced upon official confirmation from the exchange. 2. Gate.io: The application has been submitted. We will announce further information as soon as we have any news. 3. Switcheo: the technical connections are complete. The exact date of the listing will be announced upon official confirmation from the exchange. Product Development 1. Alphacat has released our new roadmap! The new roadmap has an updated timeline of development for each product, and refines the development roadmap for each product category. There are five product categories: Database & API, Alphacat AI Forecasting Engine, Alphacat Index Engine, Alphacat Analysis Tools, and the ACAT Store. Reference: https://medium.com/@AlphacatGlobal/alphacat-new-roadmap-46bf13aed6db 2. The first round of our BTC Daily Forecasting service has come to an end. The BTC bot performed predictions for each day of the month of April. Using the rise probability as our criterion, the BTC bot successfully predicted 19 days of results, and its overall accuracy rate reached 63.33%. As the BTC forecast has achieved significant results, the Alphacat team will gradually release beta versions of new cryptocurrency prediction bots for testing, starting in May and running through July. The Alphacat team is now inviting users to participate in product operations. Reference: https://medium.com/@AlphacatGlobal/vote-for-the-cryptocurrency-you-would-want-alphacat-to-forecast-aac82af9261c Questionnaire link: https://goo.gl/forms/QcJ4pVtD3VtMRSzO2 3. Cryptocurrency real-time forecasting: the core algorithm of Version 1.0 of our cryptocurrency real-time forecasting system has been completed. The Alphacat team is currently working on the development of the ACAT Store. Based on user voting results and feedback, we made some adjustments to the designs. 4. ACAT Database: on May 7th, the Alpha version of the database API was completed. 5. RangeBreak analysis tool for cryptocurrency: this product is used to analyze the resistance, support and other price levels of a cryptocurrency, and to release trading signals when the price hits a support or resistance level. The algorithm is currently under development. Offline Activities The Alphacat Team was invited to attend London Tech Week on May 15th in Shanghai.
At the London Tech Week Forum, Hanan Yariv, the global marketing director for Alphacat, spoke about blockchain and DApps. 2. Alphacat Founder Dr. Bin Li attended Consensus 2018 in New York. Consensus 2018 features 250+ speakers and 4,000+ attendees from the leading industry startups, investors, financial institutions, enterprise tech leaders, and academic and policy groups who are building the foundations of the blockchain and digital currency economy. For more information about Alphacat: Website: www.Alphacat.io Telegram: https://t.me/alphacatglobal Medium: https://medium.com/@AlphacatGlobal Twitter: https://twitter.com/Alphacat_io Facebook: https://www.facebook.com/Alphacat.io/ Reddit: https://www.reddit.com/r/alphacat_io
Alphacat Report (May 1–15)
249
alphacat-report-mid-month-of-may-1c118f017c69
2018-06-01
2018-06-01 16:15:44
https://medium.com/s/story/alphacat-report-mid-month-of-may-1c118f017c69
false
609
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Alphacat
Alphacat is a robo-advisor marketplace that is focused on cryptocurrencies, and is powered by artificial intelligence & big data technologies. www.Alphacat.io
6300c5cec1ab
AlphacatGlobal
318
1
20,181,104
null
null
null
null
null
null
0
null
0
9c46fe82b0d3
2018-03-06
2018-03-06 21:50:14
2018-03-06
2018-03-06 22:01:45
3
false
pt
2018-03-06
2018-03-06 22:01:45
0
1c1339f73bdf
3.451887
10
0
0
Predictive:
5
Types of learning Predictive: Predictive tasks are tasks aimed at prediction, that is, finding a function, model, or hypothesis that can be used for prediction. Say I want to predict the value of a property, or, for example, the state of a new patient 5 months after surgery, whether they will be sick or healthy, as we saw earlier. So in prediction I have an input (usually represented by X) and an output (usually represented by Y). Descriptive: Descriptive tasks are about description: basically, exploring or describing a set of data whose objects/examples have no associated outputs. All right, so what is the hierarchy of learning types? In the first image of this article we have supervised learning, which contains predictive learning, and we also have unsupervised learning, which is descriptive. Remember that this division is only for teaching purposes, since it is not that rigid: you will often end up mixing them in the same project, because a predictive model can also provide a description of the data and descriptive models can also provide predictions once validated. Supervised learning: Supervised learning, as the name implies, has an external supervisor; in other words, we know the desired output for each example. In the hospital dataset we had the output for each example indicating whether the patient was sick or healthy; the examples were labelled, so we had the desired output for each sample. Within supervised learning we have classification and regression. Classification: In classification we have discrete labels, such as people's diagnoses (sick or healthy), whether they are good or bad payers, or whether a fruit is a banana, an apple, or an orange; we have a discrete label. Here we have a classification image: on one side the plus symbol (+) is classified, on the other the circles (o), and we separate the two kinds of objects, each with its own class. For example, the circles could be bad payers and the plus signs (+) good payers. Regression: In regression the labels are not discrete, they are continuous, for example weight, height, etc. See this image representing one type of regression (in this case linear regression); don't worry, this will be covered in detail later in this series of articles with practical examples. Unsupervised learning: If on one side there is supervised learning with classification and regression, where you know the desired outputs for each object, on the other side there is unsupervised learning, where the algorithms do not make use of output attributes. These algorithms explore the regularities in the data. One type of unsupervised learning is clustering (which we will cover in this series of articles on Machine Learning), where the data is grouped according to its similarities, for example grouping biological sequences, or even customers with similar purchasing behavior in order to recommend better products to them. Clustering is concerned with segmenting the records of the dataset into subsets (also known as clusters) such that the elements of one cluster share common properties that distinguish them from the elements of the other clusters; we will look at this in more detail later so as not to confuse you. A small illustrative sketch of clustering follows below.
Unsupervised learning also includes summarization, which is finding a compact description for the data; it consists of identifying and indicating similarity between records in the dataset. There is also association, for finding frequent associations between attributes. Semi-supervised learning: In this type of learning we use both labelled and unlabelled data for training. Normally a small amount of labelled data is used together with a large amount of unlabelled data, because unlabelled data is cheaper and obtained with less effort. Reinforcement learning: In this type of learning the algorithm discovers by trial and error which actions generate the greatest rewards. It has 3 components: the agent (the decision maker), the environment (everything the agent interacts with), and the actions (what the agent can do). The goal of reinforcement learning is for the agent to choose actions that maximize the expected reward. There are several other machine learning techniques. Each technique has its own peculiarities, but among so many techniques, is there a best one? NO; what exists is the technique best suited to your problem, taking into account your data and other factors such as processing time, since some techniques take longer, others less, and some are more effective than others depending on your data. We will look at some of these techniques in the next articles.
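As an illustrative sketch (not part of the original article), here is what the clustering idea described above might look like in Python with scikit-learn's KMeans, grouping made-up customer purchase profiles into clusters without any labels.

# pip install scikit-learn
from sklearn.cluster import KMeans

# Made-up, unlabelled data: [monthly purchases, average basket value in $]
customers = [[2, 15], [3, 18], [2, 20],      # occasional buyers, small baskets
             [10, 22], [12, 25], [11, 19],   # frequent buyers, small baskets
             [4, 120], [5, 150], [3, 135]]   # occasional buyers, large baskets

# No outputs/labels are provided: the algorithm only explores
# regularities (similarities) in the data, as described above.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)

for profile, cluster in zip(customers, labels):
    print(profile, "-> cluster", cluster)

The cluster numbers themselves have no meaning; it is up to the analyst to interpret each group, for example as customer segments to target with different product recommendations.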
Types of learning
29
tipos-de-aprendizagem-1c1339f73bdf
2018-06-19
2018-06-19 10:23:11
https://medium.com/s/story/tipos-de-aprendizagem-1c1339f73bdf
false
769
A publication linked to the AI Brasil community (https://www.meetup.com/pt-BR/ai-brasil), focused on AI in practice. Its goal is to democratize the use of Artificial Intelligence, connecting the community with companies, individuals, and institutions with a view to spreading knowledge.
null
BrasilAI
null
aibrasil
null
brasil-ai
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,DATA SCIENCE,INTELIGENCIA ARTIFICIAL
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Italo José
Computer vision Engineer at Nextcode https://www.linkedin.com/in/italojs/
846b19bfbf1d
italojs
177
50
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-11
2017-10-11 06:57:30
2017-10-16
2017-10-16 17:17:43
7
false
en
2017-10-16
2017-10-16 17:17:43
0
1c13cb2ca349
7.793396
6
0
0
Disclaimer: This post was triggered by reading Scott Galloway’s “The Four” and it can easily be read as an attack on the book. It is not…
3
Toward a False xkcdization of Data Disclaimer: This post was triggered by reading Scott Galloway's "The Four" and it can easily be read as an attack on the book. It is not meant as such. I do have some problems with the views expressed in the book and I might even dedicate a post to them. This post, however, uses several problems I found within this book to point to what, unfortunately, is a broader problem. A month ago I didn't even know who Scott Galloway was, and then somebody shared a link to one of his talks on our Facebook Workplace platform (1 out of the Four). I watched the video on Google's YouTube (2 out of the Four) using my Apple iPad (3 out of the Four) and really loved it. "The Four", Galloway's book, was immediately pre-ordered to my Amazon Kindle (4 out of the Four!). On October 3rd it arrived and I started reading it. At about 25% of the book I started feeling a bit awkward: things that totally blew me away in Galloway's talk suddenly came under the closer inspection of reading. Eventually I found myself developing a rather critical and skeptical approach to the book despite my general tendency to agree with its main points. This post, however, will only address one point, which played a major role in my disillusionment process, and which I chose to call false xkcdization. I love xkcd, I think Randall Munroe is a real genius, and even though I sometimes grow tired of it, I eventually find myself returning to xkcd and still appreciating its brilliance. One of the prominent features of xkcd is the plotting style: following the overall style of the comic strips, the plots have the appearance of hand-drawn sketches, as if telling us not to take them too seriously. But this is never used by Munroe as an excuse or permission not to take the underlying data or the presentation seriously. In The Four (and, as I later found out, in his Medium blog as well) Galloway uses an xkcd-like plotting style, which goes great with his general tone of self-doubt and the ever-present humorous stance. Axes of Evil But then, at 23% of the book, I came across this quote: The result? From 1997 to 2005 The Gap more than tripled in revenue, from $6.5 billion to $16.0 billion, while Levi Strauss & Co. sank from $6.9 billion to $4.1 billion. (my emphasis) Now, 6.5*3=19.5, which naturally means that The Gap did not, in fact, more than triple in revenue. This is, of course, a simple and rather silly mistake, which can (and does) happen to anyone. But after observing it I became much more sensitive, which made me, one page later, take a screenshot of the plot supposedly showing the same data. There is so much wrong here that I don't even know where to start. There is the naive linear interpolation between two data points eight years apart (but maybe there was no other choice), and there is the always suspicious decision to draw the horizontal x-axis at a value which is not zero. But most of all, there is the complete disrespect for the idea of a plot actually reflecting the underlying data. The Gap, as quoted above, started with $6.5B in 1997, yet the plot puts it at exactly the same level as 2005 Levi's (which is $4.1B), and the vertical distance between the two in 1997 is blown way out of proportion, just in order to make the shift look much more dramatic than it actually was, and it really was dramatic, even without the graphic manipulation. Here is a plain old boring Excel plot using the actual data in the text.
Instead of a dramatic David and Goliath we get a much more mundane tale of two opponents with practically equal starting points, one making the right decision leading to impressive growth, the other one slowly losing ground. The discussion of the steps taken by The Gap in order to secure this success doesn't have to change, even if the plot is much less impressive. And then, a few pages later, came a discussion of tuition fees. Galloway, being a professor at NYU Stern, leads a brave and important attack against the ever-rising cost of tuition. In order to emphasize his point he includes the following plot, "comparing" the overall inflation to the cost of tuition. Once again, I don't even know where to start. The 200% inflation is drawn as an almost flat horizontal line. This choice could make some sense if the plot included several curves, all of them normalized against the inflation curve. But given the data in the plot this choice makes it totally pointless, since the y-axis loses any significance as actually measuring something. Furthermore, the inflation curve isn't a truly flat line, and since the human brain is trained to detect even slight deviations from flat or perpendicular lines, this choice actually gives the impression that the 200% inflation is a gentle, almost unobservable creep upwards. Once again, here is an Excel plot of the data, assuming a uniform annual increase of 3.7% for inflation and 8.3% for tuition over the same range. And just as before, the main story Galloway wants to tell remains the same, even if the plot is much less dramatic. Treacherous Data But the sins of The Four do not stop at distorted graphical presentation. 28% into the book (Kindle location 1302) comes this quote. You dedicate thirty-five minutes of each of your days to Facebook. Combined with its other properties, Instagram and WhatsApp, that number jumps to fifty minutes. People spend more time on the platform than any behavior outside of family, work, or sleep. Which is followed by this plot. I do not want to argue with Galloway's interpretation of the numbers, just point to the evident fact that the plot actually gives a number of 60 minutes (35+25) instead of the 50 minutes appearing in the text just above it. Admittedly, the numbers are not that clearly defined, and different definitions or measurement methods can easily lead to different results. Nevertheless, one should either admit the discrepancies between data sources or pick one and stick with it. Quoting one and then displaying a plot with different numbers is not a valid option. And then there is the case of the 2016 digital advertising growth, which appears twice in the book, each time accompanied by a plot. The bottom line, in both cases, is similar: digital advertising is dominated by two players, and the long tail is either dying (-3%) or losing significance (mere +10%). Again, one has to admit that the actual definitions here can be vague, and different decisions as to what counts as "other" in digital advertising can easily lead to different outcomes, but using two significantly different results for the same measurement is not a valid option. Annoying Math Data and data science are all about collecting data and then using mathematical and statistical tools to infer insights. And while Galloway manages to collect a lot of data and weave them into a story, sometimes one can observe a tendency to miss the second part, that of using math to understand what is going on.
Let's look at the following quote (Kindle location 1408): Over the last five years, only thirteen in the S&P 500 have outperformed the index each year — evidence of our winner-take-all economy. Thirteen out of 500, that's a really small number, one might be tempted to think, strong evidence of winner-take-all dynamics, as Galloway says. This might even be true, but it is far from being a logical conclusion from the numbers. The S&P 500 is a curated list of 500 large companies to begin with, and one can expect most of them to show similar annual growth. Now let us suppose that each year exactly 50% of them (that is, 250 firms) grow more than the average and exactly 250 grow less than the average. Let us further suppose that those lists of 250 are totally random and independent of the previous year's lists. That implies that the odds of beating the index in five consecutive years are 1 in 32, which for 500 firms yields an expected number of 500/32=15.625 firms. And while 13 is smaller than this, it is far from being evidence for anything as dramatic as claimed by Galloway (in fact, a more realistic model would have slightly fewer than 250 beating the index in a given year, since the median in such cases is usually lower than the average, bringing the number even closer to 13). The same disregard for math can be found in the seemingly innocent joke made by Galloway when discussing career advice: Given that most of us — and statistics support me on this — are average. (Kindle Locations 3015–3016) This is, of course, far from being true, and statistics support me on this. Even in the most well-behaved distributions, only a few of us are indeed average. Axes of Evil 2 Finally, let's take a look at this plot, illustrating Galloway's advice not to pick a career path based on its "sexiness" (Kindle location 3274). This looks like a classical xkcd plot, having the audacity to quantify the unquantifiable and put it on a plot, but this is also where the false xkcdization is most evident. Randall Munroe would never settle for something so sketchy and non-informative. In xkcd the axes would have been given some measurable meaning, even if this measurement is seemingly absurd, with units of sexiness and fulfillment. Munroe would have also taken care to include the important outliers (the rare jobs in which sexiness and fulfillment go hand in hand, the mass of crap jobs in which neither is achieved), and to add annotations to selected points in order to emphasize the message (and make some jokes). None of these things happens here; it's just a pretty meaningless plot trying to give a data-like appearance to Galloway's thesis. Play and Respect I love xkcd, as I said earlier, and I believe it has immense influence on our current data-driven culture. One of the best things about it is the way it gave us all a license to be playful with data, and to approach data analysis with an open mind and a sense of humor. But playfulness does not mean data can be treated with disrespect. Playing is fun only as long as it is accompanied by respect, even if you are playing with data.
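The back-of-the-envelope expectation above is easy to check numerically; here is a short illustrative snippet (mine, not the author's) that computes the expected count under the coin-flip assumption and, as a variation, under a slightly lower per-year probability to reflect the median-below-mean point.

# Expected number of S&P 500 firms beating the index in each of 5 consecutive years,
# assuming each firm beats the index independently each year with probability p.
n_firms = 500
n_years = 5

for p in (0.5, 0.48):  # 0.48 is an illustrative "median below the mean" tweak
    expected = n_firms * p ** n_years
    print(f"p={p}: expected {expected:.1f} firms beat the index all {n_years} years")

# p=0.5 gives 500 / 32 = 15.6, so the observed 13 is not far from what pure chance predicts.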
Toward a False xkcdization of Data
57
toward-a-false-xkcdization-of-data-1c13cb2ca349
2018-05-20
2018-05-20 11:50:47
https://medium.com/s/story/toward-a-false-xkcdization-of-data-1c13cb2ca349
false
1,787
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Shahar Even-Dar Mandel
null
bc3c225a0f00
steerpike0
4
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-09
2018-07-09 03:42:34
2018-07-09
2018-07-09 03:45:53
1
false
en
2018-07-09
2018-07-09 03:45:53
6
1c1421258e09
2.490566
0
0
0
Written by Stefan Beyer
4
A Word from a Computer Scientist: Artificial Intelligence is the Key to Informed Crypto-Investment Decisions Written by Stefan Beyer Irrational Markets As anyone who has tried their luck on the stock market has found out, markets behave in mysterious ways. Even expert traders with years of experience are frequently taken by surprise and suffer unexpected losses. This is specifically true for cryptocurrency markets and similar assets, such as ICO investments. This increased unpredictability is not just explained by the smaller size and the short-lived history of the market. High technological risk and regulatory uncertainty are factors to be taken into account. Furthermore, cryptocurrency investments are highly influenced by social sentiment, news, and rumors. In short, traditional methods, such as mathematical trend analysis does not work anymore. In addition, the number of factors involved is too large and varied for humans to make accurate decisions. For this reason, WatermelonBlock employs Artificial Intelligence for accurate crypto market predictions. Artificial Intelligence Patterns and trends in market behavior may not be easily identifiable for human observers, but AI can make a difference. Recent advances in data processing capacity and machine learning algorithms have made trading and investment advice one of the fields in which AI can provide a significant advantage over human expertise. The field of AI is made up of many domains and techniques. Some techniques are more suitable for trading than others, especially in the realm of cryptocurrency investment. Statistical Approaches Since the early days of AI, historical data has been used to gain statistical insights in order to forecast future events. In terms of trading, there are patterns that repeat themselves. Some of these patterns are obvious, for example, the Christmas build-up having a favorable impact on the stocks of retail companies. However, many other patterns depend on a large number of parameters which are not obvious to human experts. Linear Regressions and more sophisticated neural networks can be trained by large sets of data to adjust weights on the importance of input parameters, in order to make fairly accurate decisions. Natural Language Processing and Sentiment Analysis As we have mentioned before, markets are irrational and greatly influenced by psychology. This is especially the case in cryptocurrency investment. Rumors, social media sentiment, and real or fake news are often more important than statistical trends or economic analysis. To take into account all these factors, artificial intelligence has to process many sources of data in a variety of formats. Moreover, this data is not written in machine processable form. Facebook posts, for example, are meant for human readers and need to be interpreted in such a way. In order for a machine to detect whether an asset is being talked about positively or negatively, it first has to understand human language. This is covered by the AI field of natural language processing, which deals with interpreting spoken and written language. Sentiment analysis, at the next level, classifies language according to its sentiment towards a certain subject. This makes it possible for social networks and other data sources to be mined for references according to certain assets, and the resulting analysis to be used for informed investment decisions. Artificial intelligence can even consider the reputation and historic accuracy of data feeds. 
Deep Learning Deep Learning is a recent evolution of neural network-based machine learning that makes all of the above possible. Interconnected neural networks are arranged in layered architectures that provide input for each other, closely modeling the functioning of the human brain. The results of deep learning often surprise even machine learning experts, and they make it possible for WatermelonBlock to provide reliable AI-generated investment recommendations.
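The sentiment-analysis step described above can be illustrated with a deliberately tiny, lexicon-based sketch. The word lists and example posts below are invented for illustration; the article does not describe WatermelonBlock's actual models, so treat this as a conceptual toy rather than the product's method.

```python
# Toy lexicon-based sentiment scoring of asset mentions (illustrative only).
POSITIVE = {"bullish", "moon", "gain", "adoption", "breakout"}
NEGATIVE = {"scam", "dump", "hack", "crash", "bearish"}

def sentiment_score(post: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "BTC looks bullish after the breakout",
    "Another exchange hack, expect a dump",
]
print([sentiment_score(p) for p in posts])  # [2, -2]
```

A real system would replace the word lists with a trained language model and weight each source by its historical reliability, as the article suggests.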
A Word from a Computer Scientist: Artificial Intelligence is the Key to Informed Crypto-Investment…
0
a-word-from-a-computer-scientist-artificial-intelligence-is-the-key-to-informed-crypto-investment-1c1421258e09
2018-07-09
2018-07-09 03:45:53
https://medium.com/s/story/a-word-from-a-computer-scientist-artificial-intelligence-is-the-key-to-informed-crypto-investment-1c1421258e09
false
607
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
WatermelonBlock
null
2e4ae63ee111
watermelonblock
39
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-31
2018-03-31 23:24:48
2018-03-31
2018-03-31 23:35:29
6
false
ja
2018-09-10
2018-09-10 08:08:36
11
1c1494096c45
4.548
1
0
0
Image Video
5
Announcing a prototype "Humanoid-Linked AGI (Artificial General Intelligence)"! Humanoid for Robo Advisor / AGI for FinTech Portal Image Video On March 29, 2018, we announced a prototype "Humanoid-Linked AGI (Artificial General Intelligence)" designed and developed by robot engineer Hiroyuki Ito. <Overview> This product was developed from scratch by robot engineer Hiroyuki Ito. It is a prototype for turning the artificial general intelligence originally intended for installation in a humanoid into a commercial business product. As an AGI that possesses human emotions and a sense of ethics, and is even capable of reading unspoken intentions (sontaku), it differentiates itself from other AIs. It also provides servo-control linkage with physical robots that have bodies, so each individual humanoid can be given its own distinct personality and abilities. With this prototype announcement, we are broadly inviting companies interested in joint development and proof-of-concept experiments, as well as investment sponsors. AGI: short for Artificial General Intelligence. AI specialized for a single purpose is called "weak AI," and most of today's AI falls into this category. In contrast, human-like AI that can handle a wide range of purposes is called "strong AI," and this strong AI is what is meant by "artificial general intelligence." Humanoid Brain Mechanism <Product features> (1) An originally designed AGI based on Ito's AI theory "Dualism of AGI," which builds on Descartes' mind-body dualism. (2) Capable of switching between a mode with human emotions and ethics and a purely logical machine mode; both human and machine characteristics can be assigned per individual unit (per ID). (3) A fully from-scratch program (not a localized resale of imported software or a cloud service). (4) An affordable AGI, achieved by building on widely used business software. (5) AI software that can also connect to physical robots, linked to a robot servo-control board (edge embedded software). The "Dualism of AGI" theory will be explained in detail in a book on AGI by Hiroyuki Ito scheduled for publication in May 2018 (title undecided, to be published by Shuwa System). <Sales model> Primarily license sales. On-premises deployment lets users freely register their own business rules and emotion data. *Specifications discussed individually. <Price> *Individual quotation (reference price: from 2 million yen for 5 licenses). <Sales target> 1,000 licenses in the first year, in FinTech and other domains. <Future plans> Example applications: ● An emotion-aware robo-advisor (software): understands each customer's preferences and investment policy while calmly analyzing market trends; advises on household budgeting, saving, and insurance products; delivered as a portal web service using bank APIs. ● Unstaffed financial branches run by robots (software + hardware): realized by linking with humanoid robots that have physical bodies; smart branches through integration with IoT sensors, surveillance cameras, and mobile communications. Movable Robo-Advisor <Developer profile> Hiroyuki Ito, Robot Engineer. Born in Tokyo in 1963. Beginning as a resident representative at Yamaichi Securities' London branch, he took part in numerous global projects as a financial systems engineer in New York, Hong Kong, Switzerland, and the Middle East. After working at IBM Japan, the Industrial Bank of Japan, Ernst & Young, and a foreign-capital fund, he is now an AI/robot engineer developing an AGI-equipped humanoid aimed at the Tokyo Olympics. He graduated from the Faculty of Engineering at Seikei University, majoring in artificial intelligence. Winner of the Intel-sponsored IoT Hackathon 2015 Tokyo (robot entry). On his mother's side he is descended from the Hiraga family of Gennai Hiraga, inventor of the elekiter. Hiroyuki Ito AI & Robotics Seminar He takes a positive view of a singularity society in which humans and AI robots coexist, and aims to present a Japan-born AGI-equipped humanoid to the world at the 2020 Tokyo Olympics and Paralympics. #HumanoidLinkedAGI #ArtificialGeneralIntelligence #DescartesMindBodyDualism #AITheory #DualismOfAGI #EmotionAwareRoboAdvisor #UnstaffedRobotFinancialBranch #HumanoidRobot #IoTSensors #SmartBranch
Announcing a prototype "Humanoid-Linked AGI (Artificial General Intelligence)"!
2
ヒューマノイド連動agi-汎用人工知能-プロトタイプを発表-1c1494096c45
2018-09-10
2018-09-10 08:08:36
https://medium.com/s/story/ヒューマノイド連動agi-汎用人工知能-プロトタイプを発表-1c1494096c45
false
57
null
null
null
null
null
null
null
null
null
Humanoid
humanoid
Humanoid
38
Ito Hiroyuki
null
2d872e66f5cb
itohiroyuki
30
31
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-14
2017-12-14 22:15:35
2017-12-14
2017-12-14 22:07:48
2
false
en
2017-12-14
2017-12-14 22:17:00
4
1c14a2e4c6a
2.251258
0
0
0
Former Google engineer starts new religion based on worshiping a robot way smarter than humans.
2
Why You Might Be Worshiping A Robot God In The Near Future Former Google engineer starts new religion based on worshiping a robot way smarter than humans. Worshiping a robot god is the next emerging religion. According to experts, a religion with a robot messiah is not far-fetched, as humans want to follow things perceived to be more intelligent. Following this philosophy, one ex-Google employee started a religion that does just that. Way Of The Future In November, Anthony Levandowski, formerly an engineer with Google, formed the “Way of the Future” (WOTF). As reported by the Daily Mail, the church will instruct followers to worship a robot god that is “a billion times smarter than humans.” WOTF is currently writing its gospel, going by the name “The Manual.” The church is also in the process of creating robot-worshipping rituals and finding a physical location. Registration documents show the religion’s purpose is to “develop and promote the realization of a Godhead based on artificial intelligence.” Levandowski is named “dean” and will control the church until he dies or resigns. The latest records reveal WOTF has received $20,000 in gifts, $1,500 in membership fees, and $20,000 in other revenue. God-like Robots Already Control Our Life While the idea may be outlandish, artificial intelligence is already reaching into everyday life. We ask AI bots like Siri or Cortana for directions, weather reports, and even to turn on the lights in our house. It is not a significant leap to imagine AI being asked to solve society’s more pressing problems. “An AI would provide the equivalent of a ‘messiah’ — having many orders of magnitude more processing elements than the brain, enabling it to gift us with solutions to the most daunting social, political, economic, and environmental challenges,” said Dr. Stephen Thaler, CEO of Imagination Engines and an AI expert. AI Getting Smarter Artificial intelligence continues to grow exponentially and is expected to surpass human intelligence in the next 10 to 15 years, per a report from Fox News. Highly intelligent robots may one day evolve and develop consciousness. The question many experts are asking is whether they will have the best interests of humans in mind or expect us to be subservient. Tech mogul Elon Musk has been very critical of the AI god concept, claiming super-intelligent robots are a more significant threat than nuclear war with North Korea. Some experts are a bit more optimistic. Author and consultant Peter Scott believes AI bots will not likely want to be worshipped. Using their supercomputing power, robots will be more interested in helping guide humans to a better future. Either way, the genie is out of the bottle and cannot be put back in. Only time will tell whether we become slaves to a super smart robot god or move a bit closer to a utopian society where AI is a cooperative companion to humans. Originally published at scorchedpumpkin.com on December 14, 2017.
Why You Might Be Worshiping A Robot God In The Near Future
0
why-you-might-be-worshiping-a-robot-god-in-the-near-future-1c14a2e4c6a
2018-03-29
2018-03-29 07:06:10
https://medium.com/s/story/why-you-might-be-worshiping-a-robot-god-in-the-near-future-1c14a2e4c6a
false
495
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
John Houck
I am a full-time editor and writer for numerous private clients and blog sites. I am also the content manager for http://scorchedpumpkin.com
af0a14e3f184
thejohnhouck
55
77
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-04-28
2017-04-28 11:15:44
2017-10-22
2017-10-22 15:17:22
13
false
en
2017-10-22
2017-10-22 15:17:22
1
1c16538c2a9d
7.679245
1
0
0
Both supervised and unsupervised learning techniques have made great strides in recent years. Neural Networks (NN) have been especially…
2
Building Neural Network Intuition Both supervised and unsupervised learning techniques have made great strides in recent years. Neural networks (NN) have been especially interesting and have successfully completed difficult tasks that were unattainable using other methods. One of the challenges for someone new to the field is to gain a basic intuition and understanding of what is happening behind the scenes. To a newcomer it can seem like NNs are some sort of voodoo: throw a bunch of data into an arbitrary structure, train, and hope that it works; if it doesn't, you are shooting in the dark trying to improve it. This lack of a basic understanding of what happens behind the scenes makes it difficult to know what should be done to improve a dysfunctional NN. We wanted to answer a few basic questions: What happens when the network trains? How does the network divide the input space and define the boundary between different classification classes? In order to answer these questions it would be useful if we could somehow visualize what is happening. But that is very difficult, since most real-world examples are high dimensional. That is why we decided to engineer a simple artificial example with which we could experiment. Building up the experiment In this experiment we make the basic assumption that we are operating in a perfect, noiseless world. That means that in all the examples we generate we will not introduce noise. This is a deviation from real-world examples, but it is fair at this point since what we are trying to understand is the NN mechanism. In order for this example to accurately emulate a real-world scenario there are a few points we have to emphasize: One of the ways you can look at a neural network is as a function. For every vector in the input space, a neural network outputs a value. By training the neural network, what we are essentially doing is slightly changing this function in accordance with the training data we are given. At the end of training we get a final function into which we can feed any (x₁, x₂) and get an output value. Theoretically the output should be between 0 and 1, but this is not guaranteed. Every problem has a theoretically perfect function that takes a vector from the input space and gives the correct value corresponding to the correct classification. This is true for real-world examples as well. Let's say, for example, that our input space consists of vectors that represent black-and-white images of size 1000x1000 pixels. For each of the possible vectors we want to classify whether the image contains a cat. We can, theoretically, go over each of the possible images in the input space and label them. Any input we could recreate in the real world using a camera and a cat would be labeled as "cat" and all the rest as "not cat". So in essence the "theoretically perfect function" takes the value one on "cat" input vectors, and zero otherwise. This "perfect function", also called the target function, is the function we are trying to approximate using our neural network. Not all points in the input space are created equal. There are regions of the input space which we will sample from frequently and others we will not sample from at all.
In more careful mathematical terminology, regions of the input space where the probability of picking a point is very low (or even zero) we will call "non-interest zones", and areas where the probability is reasonable we will call "interest zones." To illustrate this, we go back to our "cat in an image" problem. Most of our input space is composed of vectors that hold random values and don't represent any image in the real world at all. Officially these are pictures that don't contain a cat, so they would be categorized as zero by the target function. These are areas for which we will not give the neural network data samples during training, nor pick those images for inference (inference is using a trained NN for classification). Since we don't care how the neural network classifies these regions, they are "non-interest zones". The same goes for regions of the input space that hold legitimate real-world examples but which we will not sample (or will hardly ever sample): for instance, images containing galaxies, when we know that during training and inference we will never draw those types of images. The experiment We decided to create a problem with two inputs and one classification class. This is done so we can plot the inputs and output on a 3D graph. The classification problem is to predict whether a point lies inside a circle whose center is at (1,2) and whose radius is 3. The target function: f : ℝ² → ℝ, equal to 1 inside the circle and 0 otherwise. Next we emulate the fact that not all of our input space is composed of interest points, i.e. not all points in the input space will be drawn during training or inference. We will draw points from inside the parallelogram defined by these inequalities: (1) -4 < x₁ < 6, (2) x₁ − 3.5 < x₂ < x₁ + 3.5. We generated 5000 random points and labels from within this parallelogram to serve as our training and validation set. These points represent the "interest zone" the network will be trained on. You can see a plot of these points in the following graph: the blue points are classified as 1 (in the circle) and the red ones as 0. Neural network and training The structure of the neural network we built is as follows: two fully connected hidden layers, each with ten nodes and the customary bias node. Activation function: ReLU. Loss function: mean squared error. The network was trained for 150 epochs using an 80/20 training/validation split. Results The training: Figure: (Left) Initial. (Middle) After 30 epochs. (Right) After 90 epochs. The following graph shows the trained network results (150 epochs) within the 'interest zone': Figure: (Left) Network prediction. (Right) Ground truth. The following plot shows the network prediction over a section of the input space wider than the "interest zone". We limited the plots to the range x₁ ∈ [-10,10], x₂ ∈ [-10,10]: Figure: Two different angles. If you only look inside our interest zone (the parallelogram), you can see that as the neural network learns, the output function converges to a close approximation of the real target function. Outside our interest zone the neural network gives arbitrary values and isn't close to the target function. This makes sense, since we never gave the neural network any information about that part of the input space. Backpropagation One of the things we wanted to do is visualize backpropagation.
One of the questions we asked ourselves is: when doing backpropagation for a single point, is the change to the output function local (i.e. in close proximity to that point in the input space), or does the backpropagation affect points that are far away from the point we are updating? To investigate this we started with a pre-trained network. We saved the model and then trained on a single point, (-2,-2), for one epoch. The following graph shows the difference in the neural network output over a large section of the input space: x₁ ∈ [-30,30], x₂ ∈ [-30,30]. Our initial intuition was that the effect of training on one example would be local. To our surprise this was not the case, and we don't know exactly why. One noticeable thing is that the largest changes were at far-away points, and that the changes seem to follow surprisingly straight ridges; again, we don't know exactly why. Maybe it's due to using ReLU as the activation function. In the next post we will explore a different way of analyzing the math behind neural networks, which might give us insight into what is going on. We might revisit this example in a future post. Conclusion There are a few important points we can take away from this experiment. The training process uses a large number of points to try to estimate an unknown target function. In neural networks, training is essentially the process of approximating a function. As discussed earlier, there are two kinds of "non-interest zones" in the input space: areas with invalid values for the problem domain (i.e. random values), and areas with valid values that the NN was not trained on (e.g. galaxies). When inferring a given point, we may be able to give a better estimate of the reliability of the classification based on its location in the input space with respect to trained areas ("interest zones") and non-trained areas ("non-interest zones"). This is something we may want to revisit in future posts. The effects of backpropagation are not local to the point involved, and they may be greater in far-away regions of the input space. The initial assumption we made, that we are operating in a perfect noiseless world, is a bit of a stretch. When operating in the real world, the "target function" becomes a "target distribution", which has a probabilistic element. When using an evaluation set to predict the accuracy of a NN in the real world, we assume that the validation set's probability distribution is the same as the real world's. This is where there is potential for things to break. For example, if I evaluated a NN using a validation set that had 99.9% pictures of cats and dogs and 0.1% pictures of galaxies, but then in the real world I feed the NN 80% pictures of cats and dogs and 20% galaxies, then our evaluation of the accuracy of the NN will be irrelevant. This is a joint project with Oren Meiri. The code can be found here
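For readers who want to reproduce the setup, here is a short sketch of the experiment as described above: 5000 points sampled from the parallelogram "interest zone", labelled by the circle centred at (1, 2) with radius 3, and fitted with two fully connected ReLU layers of ten units each, mean squared error loss, 150 epochs, and an 80/20 split. The post does not say which framework or optimizer the authors used, so tf.keras and Adam are assumptions here.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x1 = rng.uniform(-4, 6, 5000)                      # -4 < x1 < 6
x2 = x1 + rng.uniform(-3.5, 3.5, 5000)             # x1 - 3.5 < x2 < x1 + 3.5
X = np.column_stack([x1, x2])
y = (((x1 - 1) ** 2 + (x2 - 2) ** 2) <= 9).astype("float32")  # 1 inside the circle

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")        # optimizer is a guess; the post only fixes layers, ReLU and MSE
model.fit(X, y, epochs=150, validation_split=0.2, verbose=0)
```

Plotting model.predict over a grid wider than the parallelogram reproduces the "arbitrary values outside the interest zone" effect discussed above.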
Building Neural Network Intuition
1
building-neural-network-intuition-1c16538c2a9d
2018-05-06
2018-05-06 10:10:14
https://medium.com/s/story/building-neural-network-intuition-1c16538c2a9d
false
1,664
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gil Meiri
null
34f58b36cdf6
gilmeiri
8
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-21
2018-06-21 00:21:37
2018-06-21
2018-06-21 00:49:06
14
false
en
2018-06-21
2018-06-21 00:49:06
5
1c187db6ec83
11.404717
27
2
0
Note: This post was originally published on the Canopy Labs website, and describes work I’ve been lucky to do as a data scientist there.
3
Interpreting complex models with SHAP values Note: This post was originally published on the Canopy Labs website, and describes work I've been lucky to do as a data scientist there. An important question in the field of machine learning is why an algorithm made a certain decision. This is important for a variety of reasons. As an end user, I am more likely to trust a recommendation if I understand why it was exposed to me. As an organization, understanding that customers made a purchase because a particular campaign was effective allows me to tailor my future outreach efforts. However, this is a challenging and still developing field in machine learning. In this post, I am going to discuss exactly what it means to interpret a model, and explore a novel technique called SHAP (https://github.com/slundberg/shap) which is particularly effective at allowing us to take the hood off complex algorithms. What does it mean to interpret a model (and why is it so hard)? Let's start by defining exactly what it means to interpret a model. At a very high level, I want to understand what motivated a certain prediction. For instance, let's reuse the problem from the XGBoost documentation, where given the age, gender and occupation of an individual, I want to predict whether or not they will like computer games: In this case, my input features are age, gender and occupation. I want to know how these features impacted the model's prediction that someone would like computer games. However, there are two different ways to interpret this: On a global level. Looking at the entire dataset, which features did the algorithm find most predictive? XGBoost's get_score() function, which counts how many times a feature was used to split the data, is an example of considering global feature importance, since it looks at what was learned from all the data. On a local level. Maybe, across all individuals, age was the most important feature, and younger people are much more likely to like computer games. But if Frank is a 50-year-old who works as a video game tester, it's likely that his occupation is going to be much more significant than his age in determining whether he likes computer games. Identifying which features were most important for Frank specifically involves finding feature importances on a 'local' (individual) level. With this definition out of the way, let's move on to one of the big challenges in model interpretability: Trading off between interpretability and complexity Let's consider a very simple model: a linear regression. The output of the model is a weighted sum of the inputs: ŷ = ϕ_1·x_1 + ϕ_2·x_2 + … + ϕ_n·x_n. In this linear regression model, I assign each of my features x_i a coefficient ϕ_i, and add everything up to get my output. In the case of my computer games problem, my input features would be (x_Age, x_Gender, x_Job). In this case, it's super easy to find the importance of a feature; if ϕ_i has a large absolute value, then feature x_i had a big impact on the final outcome (e.g. if ∣ϕ_Age∣ is large, then age was an important feature). However, there is also a drawback, which is that this model is so simple that it can only uncover linear relationships. For instance, maybe age is an important feature, and if you're between 12 and 18 you're much more likely to like computer games than at any other age; since this is a non-linear relationship, a linear regression wouldn't be able to uncover it. In order to uncover this more complicated relationship, I'll need a more complicated model.
However, as soon as I start using more complicated models, I lose the ease of interpretability which I got with this linear model. In fact, as soon as I try to start uncovering non-linear, or even interwoven relationships — e.g. what if age is important depending on your gender? — then it becomes very tricky to interpret the model. This decision — between an easy-to-interpret model which can only uncover simple relationships, or a complex model which can find very interesting patterns that may be difficult to interpret — is the trade-off between interpretability and complexity. This is additionally complicated by the fact that I might be interpreting a model because I'm hoping to learn something new and interesting about the data. If this is the case, a linear model may not cut it, since I may already be familiar with the relationships it would uncover. The ideal case would therefore be to have a complex model which I can also interpret. How can we interpret complex models? Thinking about linear regressions has yielded a good way of thinking about model interpretations: I'll assign to each feature x_i a coefficient ϕ_i which describes — linearly — how the feature affects the output of the model. We've already discussed the shortcomings of this model, but bear with me: across many data points, the coefficients ϕ will fail to capture complex relationships. But on an individual level they'll do fine, since for a single prediction, each variable will truly have impacted the model's prediction by a constant value. For instance, consider the case of Frank, the 50-year-old video game tester who loves computer games. For him, ϕ_Job will be high and ϕ_Age will be low. But then, for Bobby, a 14-year-old, ϕ_Age will be high, since the model has seen that 14-year-olds tend to love computer games because they are 14 years old. What we've done here is take a complex model, which has learnt non-linear patterns in the data, and broken it down into lots of linear models which describe individual data points. It's important to note that these explanation coefficients ϕ are not the output of the model, but rather what we are using to interpret this model. By aggregating all of these simple, individual models together, we can understand how the model behaves across all the customers. So, to sum up: Instead of trying to explain the whole complex model, I am just going to try and explain how the complex model behaved for one data point. I'll do this using a linear explanation model; let's call it g. In addition, to further simplify my simple model, I won't multiply the coefficients ϕ by the original feature value, x. Instead, I'll multiply it by 1 if the feature is present, and 0 if it is not. In the case of predicting who loves computer games, what I therefore get is the following: g_Frank = ϕ_Frank Age·1 + ϕ_Frank Gender·1 + ϕ_Frank Job·1, where g_Frank = p_Frank, the original prediction of the model for Frank. Note that the coefficients apply only to Frank; if I want to find how the model behaved for Bobby, I'll need to find a new set of coefficients. In addition, since Bobby doesn't have a job, I multiplied ϕ_Bobby Job by 0 (since there isn't an x_Bobby Job). His simple model will therefore be g_Bobby = ϕ_Bobby Age·1 + ϕ_Bobby Gender·1. I'll do this for all the data points and aggregate it to get an idea of how my model worked globally. Now that I have this framework within which to interpret complex models, I need to think about exactly what properties I want ϕ to capture to be useful. Shapley values (or, how can I calculate ϕ?) The solution to finding the values of ϕ predates machine learning.
In fact, it has its foundations in game theory. Consider the following scenario: a group of people are playing a game. As a result of playing this game, they receive a certain reward; how can they divide this reward between themselves in a way which reflects each of their contributions? There are a few things which everyone can agree on; meeting the following conditions will mean the game is 'fair' according to Shapley values: The sum of what everyone receives should equal the total reward. If two people contributed the same value, then they should receive the same amount from the reward. Someone who contributed no value should receive nothing. If the group plays two games, then an individual's reward from both games should equal their reward from the first game plus their reward from the second game. These are fairly intuitive rules to have when dividing a reward, and they translate nicely to the machine learning problem we are trying to solve. In a machine learning problem, the reward is the final prediction of the complex model, and the participants in the game are the features. Translating these rules into our previous notation: 1. g_Frank should be equal to p_Frank, the probability the complex model assigned to Frank of liking computer games. 2. If two features x contributed the same value to the final prediction, then their coefficients ϕ should have the same value. 3. If a feature contributed nothing to the final prediction (or if it is missing), then its contribution to g should be 0. 4. If I add up g_(Frank+Bobby) then this should be equal to g_Frank+g_Bobby. It's worth noting that so far, our simple model by default respects rules 3 and 4. It turns out that there is only one method of calculating ϕ so that it will also respect rules 1 and 2. Lloyd Shapley introduced this method in 1953 (which is why values of ϕ calculated in this way are known as Shapley values). The Shapley value for a certain feature i (out of n total features), given a prediction p (this is the prediction by the complex model) is ϕ_i(p) = Σ_{S ⊆ N∖{i}} [ ∣S∣!(n−∣S∣−1)!/n! ] · (p(S ∪ {i}) − p(S)). There's a bit to unpack here, but this is also much more intuitive than it looks. At a very high level, what this equation does is calculate what the prediction of the model would be without feature i, calculate the prediction of the model with feature i, and then calculate the difference: p(S ∪ {i}) − p(S). This is intuitive; I can just add features and see how the model's prediction changes as it sees new features. The change in the model's prediction is essentially the effect of the feature. However, the order in which you add features is important to how you assign their values. Let's consider Bobby's example to understand why; it's the fact that he is both 14 and male that means he has a high chance of liking computer games. This means that whichever feature we add second will get a disproportionately high weighting, since the model will see that Bobby is a really likely candidate for liking computer games only when it has both pieces of information. To better illustrate this, let's imagine that we are trying to assign feature values to the decision tree from the XGBoost documentation. Different implementations of decision trees have different ways of dealing with missing values, but for this toy example, let's say that if a value the tree splits on is missing, it calculates the average of the leaves below it. As a reminder, here is the decision tree (with Bobby labelled): First, we'll see Bobby's age, and then his gender. When the model sees Bobby's age, it will take him left on the first split.
Then, since it doesn't have a gender yet, it will assign him the average of the leaves below, or (2 + 0.1) / 2 = 1.05. So the effect of the age feature is 1.05. Then, when the model learns he is male, it will give him a score of 2. The effect of the gender feature is therefore 2 − 1.05 = 0.95. So in this scenario, ϕ_Age Bobby = 1.05 and ϕ_Gender Bobby = 0.95. Next, let's say we see his gender, and then his age. In the case where we only have a gender, the model doesn't have an age to split on. It therefore has to take an average of all the leaves below the root. First, the average of the depth 2 leaves: (2 + 0.1) / 2 = 1.05. This result is then averaged with the other depth 1 leaf: (1.05 + (-1)) / 2 = 0.025. So, the effect of the gender feature is 0.025. Then, when the model learns he is 14, it gives him a score of 2. The effect of the age feature is then (2 − 0.025) = 1.975. So in this scenario, ϕ_Age Bobby = 1.975 and ϕ_Gender Bobby = 0.025. Which value should we assign ϕ_Age Bobby? If we assign ϕ_Age Bobby a value of 1.975, does this mean we assign ϕ_Gender Bobby a value of 0.025 (since, by rule 1 of Shapley fairness, the total coefficients must equal the final prediction of the model for Bobby, in this case 2)? This is far from ideal, since it ignores the first sequence, in which ϕ_Gender Bobby would get 0.95 and ϕ_Age Bobby would get 1.05. What a Shapley value does is consider both orderings, calculating a weighted sum to find the final value. This is why the equation for ϕ_i(p) must range over all possible sets S of feature groupings (minus the feature i we are interested in). This is described by S ⊆ N∖{i} below the summation, where N is the set of all features. How are the weights assigned to each component of the sum? The equation basically considers how many different permutations of the sets exist, counting both the features which are in the set S (this is done by the ∣S∣!) and the features which have yet to be added (this is done by the (n−∣S∣−1)!). Finally, everything is normalized by n!, the total number of orderings of all the features. Calculating a Shapley value For Bobby, what would the Shapley value be for his age? First, I need to construct my sets S. These are all possible combinations of Bobby's features, excluding his age. Since he only has one other feature — his gender — this yields two sets: {x_Gender}, and an empty set {}. Next, I need to calculate the contribution for each of these sets, S. Note that as I have 2 features, n = 2. In the case where S = {}: the prediction of the model when it sees no features is the average of all the leaves, which we have calculated to be 0.025. We've also calculated that when it sees only the age, it is 1.05, so the difference is p({Age}) − p({}) = 1.05 − 0.025 = 1.025. With the weight ∣S∣!(n−∣S∣−1)!/n! = 0!·1!/2! = 1/2, this yields a contribution of 0.5 × 1.025 = 0.5125. In the case where S = {x_Gender}: we've calculated that the prediction of the model with only the gender is 0.025, and that when it sees both his age and his gender it is 2, so the difference is 2 − 0.025 = 1.975. The weight is again 1!·0!/2! = 1/2, so the contribution is 0.9875. Adding these two values together yields ϕ_Age Bobby ≈ 1.5. Note that this value makes sense; it sits in the middle of the two values we calculated when we computed feature importance just by adding features one by one. In summary, Shapley values calculate the importance of a feature by comparing what a model predicts with and without the feature. However, since the order in which a model sees features can affect its predictions, this is done in every possible order, so that the features are fairly compared. Shap values Unfortunately, going through all possible combinations of features quickly becomes computationally unfeasible.
Luckily, the SHAP library introduces optimizations which allow Shapley values to be used in practice. It does this by developing model-specific algorithms which take advantage of different models' structures. For instance, SHAP's integration with gradient boosted decision trees takes advantage of the hierarchy in a decision tree's features to calculate the SHAP values. This allows the SHAP library to calculate Shapley values significantly faster than if a model prediction had to be calculated for every possible combination of features. Conclusion Shapley values, and the SHAP library, are powerful tools for uncovering the patterns a machine learning algorithm has identified. In particular, by considering the effects of features on individual data points, instead of on the whole dataset (and then aggregating the results), the interplay of combinations of features can be uncovered. This allows far more powerful insights to be generated than with global feature importance methods. Sources S. Lundberg, S. Lee, A Unified Approach to Interpreting Model Predictions, 2017
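As a concrete illustration of the workflow described in this post, here is a minimal sketch of computing SHAP values for a gradient boosted model with the shap library linked above. The three-column toy data (standing in for age, gender and job) and the XGBoost settings are invented for the example; this is not the author's original code.

```python
import numpy as np
import shap
import xgboost as xgb

# Invented toy data: the three columns play the role of age, gender and job.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 2] > 1).astype(int)

model = xgb.XGBClassifier(n_estimators=20).fit(X, y)

# TreeExplainer uses the tree-specific optimization mentioned above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one row of phi coefficients per sample
print(shap_values[0])                    # local explanation for the first sample
```

Each row of shap_values is the per-feature ϕ decomposition for one data point; aggregating the rows (for example with shap.summary_plot) recovers a global picture of feature importance.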
Interpreting complex models with SHAP values
118
interpreting-complex-models-with-shap-values-1c187db6ec83
2018-06-21
2018-06-21 11:39:55
https://medium.com/s/story/interpreting-complex-models-with-shap-values-1c187db6ec83
false
2,638
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gabriel Tseng
Data Scientist @CanopyLabs
efae00cda994
gabrieltseng
443
62
20,181,104
null
null
null
null
null
null
0
null
0
d36b06c4b048
2018-02-27
2018-02-27 13:42:48
2018-02-27
2018-02-27 13:46:28
1
false
en
2018-02-27
2018-02-27 13:50:13
3
1c1ab44f69d1
0.920755
1
0
0
There are no doubts about the power of Pandas and how useful it is in Data Science and Data analysis.
4
“Close-up of lines of code on a computer screen” by Ilya Pavlov on Unsplash Sort values in a Pandas DataFrame along all axes and get their index There is no doubt about the power of Pandas and how useful it is in data science and data analysis. I'm using it extensively in my current internship project along with other libraries in this stack, such as Scikit-learn and NumPy. I encountered lots of unique cases in the paper I'm implementing and decided to share solutions and hacks for those cases so that everyone can use them and save time. For today's topic, I will also explore the possibility of submitting a pull request so this gets added as a feature. To sort all the values in a data frame (unlike the .sort_values() method, which requires a by= argument), we first unstack the data frame and then sort the values in the order we want (descending in this example). Moreover, we can get the indices of their appearance through the .index.values attribute. For a Jupyter notebook version of this code snippet, take a look at the following repository. HamedMP/CodeSnippets CodeSnippets — List of code snippets I found useful and usually arrived to them after spending around 5 to 50 Google…github.com
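A minimal sketch of the pattern described above, using a made-up two-by-two frame (the column and index names are arbitrary):

```python
import pandas as pd

df = pd.DataFrame({"a": [3, 1], "b": [4, 2]}, index=["x", "y"])

# unstack() flattens the frame into a Series indexed by (column, row) pairs,
# so the whole frame can be sorted at once without a by= argument.
flat = df.unstack().sort_values(ascending=False)
print(flat)
print(flat.index.values)  # [('b', 'x') ('a', 'x') ('b', 'y') ('a', 'y')]
```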
Sort values in a Pandas DataFrame along all axes and get their index
2
sort-values-in-a-pandas-dataframe-along-all-axis-and-get-their-index-1c1ab44f69d1
2018-06-19
2018-06-19 16:24:49
https://medium.com/s/story/sort-values-in-a-pandas-dataframe-along-all-axis-and-get-their-index-1c1ab44f69d1
false
191
Centroid of Outliers_
null
null
null
hamp
null
hamp
TECHNOLOGY,DATA SCIENCE,SOCIAL NETWORK ANALYSIS,HACKS
null
Data Science
data-science
Data Science
33,617
Hamed MP
Futurism, Deep Learning, Big Data and a cup of coffee. Buy me a coffee at buymeacoff.ee/hamedmp
81f6e13dc994
hamedmp
168
363
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-29
2017-11-29 07:08:48
2017-11-30
2017-11-30 06:09:43
3
false
en
2017-11-30
2017-11-30 06:09:43
5
1c1b666c4ebe
1.565094
2
0
0
xgboost is one of the most commonly used libraries for gradient boosting.
5
How to install python xgboost on windows in 3 quick steps xgboost is one of the most commonly used libraries for gradient boosting. It was initially introduced to the Kaggle community during the Otto competition. It rapidly became the benchmark go-to library for many other challenges, and it can usually be found as part of the top solutions for almost every Kaggle competition since. Installing xgboost on Linux-based systems is easy, but on Windows the installation process used to be much more complicated. After going through this process several times, I found a solution that works quite well and felt I should share it to save the hassle for other people dealing with the same issue, so here it is: 1. Install Anaconda (I used version 5.0.1); use this link for Python 2.7 or this link for Python 3.6. 2. Install MinGW 64-bit if you haven't already done that, using this link. Then promote the mingw-w64 line in the system path to be the first one. This can be done by going to your environment variables (either using Win key+Pause or right-clicking "This PC" and choosing Advanced system settings -> Environment Variables); your end result should look something like this. While this second step is not required to finish compilation and loading of the xgboost library, without it I got many memory violation errors when training models, so I highly recommend it! 3. Now go to the Anaconda prompt (make sure you log on as administrator) and run: conda install -c anaconda py-xgboost That's it! You can verify your install by running: python import xgboost as xgb
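If you want to go one step beyond the import check, the short snippet below trains a throwaway model to confirm the conda-installed package actually works; the toy data is made up purely for this sanity check.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)              # random toy features
y = (X[:, 0] > 0.5).astype(int)         # arbitrary binary target

model = xgb.XGBClassifier(n_estimators=10)
model.fit(X, y)
print(model.predict(X[:5]))             # should print five 0/1 predictions
```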
How to install python xgboost on windows in 3 quick steps
15
how-to-install-python-xgboost-on-windows-in-3-quick-steps-1c1b666c4ebe
2018-04-26
2018-04-26 07:40:27
https://medium.com/s/story/how-to-install-python-xgboost-on-windows-in-3-quick-steps-1c1b666c4ebe
false
269
null
null
null
null
null
null
null
null
null
Python
python
Python
20,142
Nathaniel Shimoni
null
641084278733
nathanielshimoni
69
17
20,181,104
null
null
null
null
null
null
0
null
0
d211c0ef4acd
2018-07-21
2018-07-21 16:12:41
2018-07-21
2018-07-21 16:15:31
2
false
en
2018-07-21
2018-07-21 16:41:55
11
1c1b86232ea3
2.617296
0
0
0
The Edge is a daily round up of the most important, or at least the most interesting, reads in technology policy.
5
The Edge | 07/21/18 The Edge is a daily round up of the most important, or at least the most interesting, reads in technology policy. Artificial Intelligence and Machine Learning Waymo’s autonomous vehicles are driving 25,000 miles every day Beep Beep A Next Generation Intelligence Development Plan | As a point of contrast to a previous update on the state of AI strategy in the US government… It is worth reading all of China’s comprehensive AI development plan. In case you don’t have the time, I have reproduced the strategic objectives below. 2020 By 2020 China will have achieved important progress in a new generation of AI theories and technologies. The AI industry’s competitiveness will have entered the first echelon internationally. The AI development environment will be further optimized, opening up new applications in important domains, gathering a number of high-level personnel and innovation teams, and initially establishing AI ethical norms, policies, and regulations in some areas. 2025 By 2025, a new generation of AI theory and technology system will be initially established, as AI with autonomous learning ability achieves breakthroughs in many areas to obtain leading research results. The AI industry will enter into the global high-end value chain. By 2025 China will have seen the initial establishment of AI laws and regulations, ethical norms and policy systems, and the formation of AI security assessment and control capabilities. 2030 China will have formed a more mature new-generation AI theory and technology system. AI industry competitiveness will reach the world-leading level. China will have established a number of world-leading AI technology innovation and personnel training centers (or bases), and will have constructed more comprehensive AI laws and regulations, and an ethical norms and policy system. (dis)Information In a fifth of these 48 countries — mostly across the Global South — we found evidence of disinformation campaigns operating over chat applications such as WhatsApp, Telegram and WeChat. Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation| A new report from the Computational Propaganda Project at the Oxford Internet Institute. The full report is definitely worth reading. But more insidious, more frequent on both our forum and the internet at large, is the technique known as “just asking questions” — in internet parlance, “JAQing off.” Designed to further Holocaust deniers’ aim of spreading their talking points, this involves (a) framing a denialist talking point in the form of a good-faith question and (b) calling for “open debate.” How the AskHistorians Subreddit Handles Holocaust Deniers | Space The very fundamental prohibition under the Outer Space Treaty to acquire new state territory, by planting a flag or by any other means, failed to address the commercial exploitation of natural resources on the moon and other celestial bodies. Who Owns the Moon? A Space Lawyer Answers | And lest you think this is idle conversation… Asteroid Mining Company Planetary Resources and Luxembourg: a Love Story | Ok, so that isn’t the title of the conversation. But it’s not a bad description of what is happening as companies seeking favorable jurisdiction for novel space endeavors partner with smaller, flexible states. Watch this space, pun intended. 🚀 The two superpowers are butting heads on trade, military, and cybersecurity issues. Congress has banned NASA officials and NASA money from going to China. 
That might be because of a recent history of Chinese espionage targeting US military, aerospace, and technological secrets. Some Scientists Work with China, but NASA Won't | Follow me on Twitter. Like the Edge? Subscribe to MetaPolicy and never miss an update.
The Edge | 07/21/18
0
the-edge-07-21-18-1c1b86232ea3
2018-07-21
2018-07-21 16:41:55
https://medium.com/s/story/the-edge-07-21-18-1c1b86232ea3
false
592
An Exploration of Public Policy and Emerging Technologies
null
null
null
MetaPolicy
ryan.mail.email@gmail.com
metapolicy
POLICY,PUBLIC POLICY,EMERGING TECHNOLOGY,TECH POLICY,TECHNOLOGY TRENDS
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ryan Williams
Antidisciplinarian. Studies Global Policy at the LBJ School of Public Affairs.
8fd521a02506
ryan_t_w
23
345
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-26
2018-07-26 20:06:20
2018-09-17
2018-09-17 19:31:02
1
false
en
2018-09-17
2018-09-17 19:31:02
5
1c1bbfc18ba1
1.177358
2
0
0
This fall, new interdisciplinary master’s programs integrating social sciences and data science will commence at two of the UK’s top…
5
Graduate Programs in Social Data Science to Commence at Oxford and the London School of Economics This fall, new interdisciplinary master’s programs integrating social sciences and data science will commence at two of the UK’s top universities. Oxford University’s Oxford Internet Institute, a multi-disciplinary center for the study of social and computer sciences, will offer a 1-year MSc in Social Data Science to approximately 25 students, followed by a PhD program in the subject beginning in fall 2019. At the same time, the London School of Economics’ Department of Methodology will offer a 1-year MSc in Applied Social Data Science. The new programs will be among the first in the world to focus explicitly on the overlap between social and behavioral sciences and the glowing-hot fields of data science and artificial intelligence. Precedents The two master’s programs follow the convening of social data science centers and working groups at some of Europe’s most impactful data science institutions. For example, the London-based Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, is home to a working group in social data science with researchers from the Universities of Cambridge, Oxford, Warwick, and Edinburgh. According to its site, the group is working on, “developing foundational theories of human behaviour at diverse social and temporal scales” as well as “identifying methodological challenges and solutions to enable social data science to deliver robust and credible results in key application domains,” with the support of high-profile partners such as Google, Twitter, and Facebook. READ MORE
Graduate Programs in Social Data Science to Commence at Oxford and the London School of Economics
5
graduate-programs-in-social-data-science-to-commence-at-oxford-and-the-london-school-of-economics-1c1bbfc18ba1
2018-09-17
2018-09-17 19:31:02
https://medium.com/s/story/graduate-programs-in-social-data-science-to-commence-at-oxford-and-the-london-school-of-economics-1c1bbfc18ba1
false
259
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
#ODSC - The Data Science Community
Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
2b9d62538208
ODSC
665
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-22
2018-03-22 06:13:34
2018-03-22
2018-03-22 08:28:20
3
false
en
2018-04-30
2018-04-30 12:47:19
2
1c1ccb899abe
1.410377
5
0
0
This is part of the course “Probability Theory and Statistics for Programmers”.
5
Probability theory: Poisson distribution This is part of the course “Probability Theory and Statistics for Programmers”. Probability Theory For Programmers The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time, distance, area or volume, provided these events are independent and the probability that an event occurs in a given length of time does not change through time. Then the random variable X, the number of events in a fixed unit of time, has a Poisson distribution. Lambda is the average number of events occurring in the specified period of time. The probability mass function gives the probability that the event occurs exactly x times: P(X = x) = λ^x · e^(−λ) / x!. Let's take a look at an example. A pizzeria receives an average of 20 orders per hour. Assuming that the number of orders in any interval of time follows a Poisson distribution, find the probability that in just two minutes the pizzeria will receive exactly two orders. As you can see, the chart is right-skewed, with the probabilities decreasing after reaching the maximum. A Poisson distribution will always have right skewness, but the degree depends on the value of lambda: if lambda is large, the distribution will be close to symmetric. We can show this using the previous example by changing the average number of orders. Finally, let's take a look at the characteristics of the Poisson distribution: the mean and the variance are both equal to lambda. Next part -> Clap if you enjoy 😎
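For completeness, here is the pizzeria calculation written out; the numbers follow directly from the 20-orders-per-hour rate given above:

```latex
\lambda = 20 \cdot \tfrac{2}{60} = \tfrac{2}{3}, \qquad
P(X = 2) = \frac{\lambda^{2} e^{-\lambda}}{2!}
         = \frac{(2/3)^{2}\, e^{-2/3}}{2} \approx 0.114
```

So there is roughly an 11% chance of receiving exactly two orders in a two-minute window.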
Probability theory: Poisson distribution
121
13-poisson-distribution-1c1ccb899abe
2018-04-30
2018-04-30 12:47:19
https://medium.com/s/story/13-poisson-distribution-1c1ccb899abe
false
228
null
null
null
null
null
null
null
null
null
Probability
probability
Probability
604
Rodion Chachura
geekrodion.com
4a34eb6ffe36
geekrodion
68
2
20,181,104
null
null
null
null
null
null
0
import math  # needed for math.exp below

# accumulators for the weighted line endpoints (missing in the original snippet)
wx1 = wy1 = wx2 = wy2 = 0.0
wei_total = 0
for time_step in range(sz):
    # the smaller alpha, the more evenly the weights spread;
    # the larger alpha, the more the most recent image/lane dominates
    wei = math.exp(-(self.DEQUE_LENGTH - time_step) * alpha)
    wei_total += wei
    for x1, y1, x2, y2 in cached_lines[time_step]:
        wx1 += x1 * wei
        wy1 += y1 * wei
        wx2 += x2 * wei
        wy2 += y2 * wei
# weighted return line
if wei_total > 0:
    ret_line = [[int(wx1 / wei_total), int(wy1 / wei_total),
                 int(wx2 / wei_total), int(wy2 / wei_total)]]
    return ret_line

# Hough transform parameters suggested in the review:
# threshold ~ 50, min_line_len ~ 100, max_line_gap ~ 160
3
null
2017-12-25
2017-12-25 06:09:44
2017-12-25
2017-12-25 06:19:27
12
false
en
2018-01-08
2018-01-08 22:56:15
4
1c1d4618cc55
5.021698
2
0
0
The goals / steps of this project are the following:
5
Finding Lane Lines on the Road The goals / steps of this project are the following: Make a pipeline that finds lane lines on the road Reflect on your work in a written report All the code can be found at my github link. The stack I used is Python, OpenCV, and computer vision algorithms: for example, edge detection via the Canny algorithm and line detection via the Hough transformation. 1. Description of the lane detection pipeline. My pipeline consists of the following steps. 1. Convert the images to HLS and add the white and yellow color regions to the image mask. The reason is that an RGB grayscaled image is not robust enough to pick out the lanes under different light conditions compared to HLS. In the color selection, I set the value ranges for white and yellow. 2. To get the edges, the OpenCV Canny algorithm is used. 3. Use the Hough transformation on the edges to get the lines. Steps 2 and 3 are applied directly from the Udacity course material preceding the Project 1 assignment. Original image of "solidYellowCurve" Edge detection via RGB color space Edge detection via HSL The HSL image can detect the left lanes well. Another image, from challenge.mp4: Original image Edge detection via RGB color space Edge detection via HSL 4. In order to draw a single line on the left and right lanes, I calculate the slope of each line segment. Based on the slope, we separate the segments into the left-lane or right-lane category. For more robust detection, I filter out slopes that are too small or too large, weight the lines based on their length, and remove the outliers (outstanding slopes). For the length-based weighting, the intuition is that the longer the line, the better the chance it is the part of the lane close to the camera, which is more useful as a baseline for the lane model. Outstanding slopes are those that fall outside one standard deviation. "SolidWhiteCurve" "solidWhiteRight" "SolidYellowCurve2" "SolidYellowCurve" "solidYellowLeft" "whiteCarLaneSwitch" 5. A video is a sequence of images. The pipeline memorizes the images at each time step and caches the lines in a fixed-size deque data structure. I use it to weight the lanes and provide more stable lane estimates, under the assumption that changes between consecutive images are continuous. The weighting is based on the time step (the index in the deque): the most recent one gets the largest weight, computed with a softmax-style function. The videos can be accessed via the following YouTube links. 2. Identify potential shortcomings with your current pipeline One potential shortcoming is what would happen when the road has wide turns. The slopes of the lanes may change abruptly compared to the cached lanes, so the weighted or averaged lanes may be unstable. Another shortcoming could be the steepness of the road. The region-of-interest assumption in this work may not hold, introducing more unpredictable variables which are not considered here. 3. Suggest possible improvements to your pipeline A possible improvement would be to use curvature detection to get a smoother and more continuous line. For steep roads, we would first need to detect the region of interest. An interesting hands-on project from Project 1 of the Udacity Self-Driving Car Engineer Nanodegree (Term I). After the code submission, a review was made by the Udacity mentor. Review from the helpful Udacity mentor for my code submission Meets Specifications All in all I would like to congratulate you for taking an amazing first step in this journey!!
The project submission includes all required files Lane Finding Pipeline The output video is an annotated version of the input video. Great work! Your annotations are clearly visible in the video. In a rough sense, the left and right lane lines are accurately annotated throughout almost all of the video. Annotations can be segmented or solid lines There is a clear demarcation between the two lanes! You have taken a very good step towards lane detection and your pipeline reflects that :) Visually, the left and right lane lines are accurately annotated by solid lines throughout most of the video. Reflection (this is what I wrote for this project) The reflection describes the current pipeline, identifies its potential shortcomings and suggests possible improvements. Great job with the reflection. We will address some of the issues as we go further along the course. In regards to the shortcomings, I would suggest tweaking the following values to get an improvement in lane detection for the first two videos: increasing the threshold value increases the minimum number of intersections required to detect a line and thus differentiates between the left and right lanes better; min_line_len, as the name suggests, will help you make sure that the line segments are drawn on the actual lines and thus help eliminate some of the spurious lines; tuning your value of max_line_gap will help you get more accurately connected annotated lines when there are broken lanes, as it controls how far apart points can be and still be connected with a single line. This research paper goes into how to detect curves and will also help in detecting faded lanes. It uses an extended version of the Hough lines algorithm to detect tangents to the curve, which can help you detect the curve.
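For reference, here is a minimal sketch of the HLS colour selection, Canny and Hough steps described in the pipeline above. The colour thresholds are illustrative guesses rather than the author's exact values; the Hough parameters use the rough values suggested in the mentor's review (threshold ~ 50, min_line_len ~ 100, max_line_gap ~ 160).

```python
import cv2
import numpy as np

def detect_lane_segments(rgb_image):
    # 1. HLS colour selection for white and yellow lane paint
    hls = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HLS)
    white = cv2.inRange(hls, (0, 200, 0), (255, 255, 255))
    yellow = cv2.inRange(hls, (10, 0, 100), (40, 255, 255))
    mask = cv2.bitwise_or(white, yellow)
    masked = cv2.bitwise_and(rgb_image, rgb_image, mask=mask)

    # 2. Canny edge detection on the colour-selected image
    gray = cv2.cvtColor(masked, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # 3. Probabilistic Hough transform turns edge pixels into line segments
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=100, maxLineGap=160)
```

The returned segments would then go through the slope filtering, length weighting and deque-based smoothing described in steps 4 and 5 of the pipeline.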
Finding Lane Lines on the Road
3
finding-lane-lines-on-the-road-1c1d4618cc55
2018-01-08
2018-01-08 22:56:18
https://medium.com/s/story/finding-lane-lines-on-the-road-1c1d4618cc55
false
973
null
null
null
null
null
null
null
null
null
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
Hao Zhuang
Hacked linear algebra and matrix algorithms, applied machine learning, design automation | UCSD CS PhD | linkedin.com/in/zhuangh
9f155190ea1e
zhuangh
72
509
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-05
2018-03-05 14:49:08
2018-03-05
2018-03-05 14:55:43
1
false
en
2018-03-07
2018-03-07 17:34:31
0
1c1dd7263323
1.275472
1
0
0
I believe the Internet is how the economy can be grown, repeating what it did in the 90s and 2000s, this time via individual empowerment…
1
Evolving Neo I believe the Internet is how the economy can be grown, repeating what it did in the 90s and 2000s, this time via individual empowerment. From when I was a child I wanted a world where I could communicate with others through personal electronic devices connected via (then telephone) networks. We've moved from Commodore 64s to supercomputers on a desktop, from modems to the network structures now linking organic and inorganic brains. I saw the power of combining the browser and JavaScript in the late '90s and have done nothing since then but build tools for browsers that are free where they were once costly, fast where once slow, and connected as they never were. And now I see individuals, offline or on, using their personal devices to broadcast information for sale or for fun, and becoming smarter — smart enough to craft their digital identities, and in some way their very existence. Evolution has evolved to let us become the first intelligent designers. We've evolved evolution biologically — our evolved big brains enable us to dominate all other species simply because we can think counterfactually, and communicate what we then imagine using language. We're now poised to do it technically, designing the evolution of our digital selves. We're now evolving out of the need for a supernatural intelligent designer to craft the contours of our voicebox (Facebook et al.). The next neo-cortex will be distributed, its topology memetic, each meme crafted out of code, much of it written by individuals like you. We will collect, curate, and circulate digital packets together, each of us downloading our favorites into our necktops. Humility will create greatness. Smart people are the best dumb people.
Evolving Neo
1
evolving-neo-1c1dd7263323
2018-03-07
2018-03-07 17:34:32
https://medium.com/s/story/evolving-neo-1c1dd7263323
false
285
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Sandro Pasquali
Software architecture and design.
21df6ead5
spasquali
101
128
20,181,104
null
null
null
null
null
null
0
null
0
ff923d683f1e
2018-03-23
2018-03-23 10:45:31
2018-03-26
2018-03-26 11:56:09
1
false
en
2018-03-26
2018-03-26 11:56:47
13
1c1e070687a4
3.060377
10
0
0
A few days ago, The Guardian, The New York Times and Channel 4 revealed that Facebook & Cambridge Analytica got access to tens of millions…
5
Facebook / Cambridge Analytica scandal: how to protect data that matter Photo by Matthew Henry on Unsplash A few days ago, The Guardian, The New York Times and Channel 4 revealed that Facebook & Cambridge Analytica got access to tens of millions of Facebook users' data, mostly without their permission, back in 2013. This revived and intensified the debate around data privacy and transparency. Alex Stamos, Chief Security Officer at Facebook, described the situation as « a breach of trust » in a Facebook post answering questions asked by the community. All around the world, personalities such as WhatsApp's founder called for #deletefacebook on social media. The thing is, with most of the online services we use today, our data aren't safe. From the moment you decide to share something online (a message, a photo, etc.) with someone or a group of people, it goes through servers. From this moment, your data can be compromised. But more and more startups believe users should have the right to data privacy. They use an approach called privacy by design. For all the day-to-day services you use on your mobile or computer, you can increase the safety of your data and recover control of it. Messaging The messaging app Telegram was a game changer when it launched end-to-end encryption. This means that all the messages you share with your friends are encrypted with a specific key when they leave your phone. Messages are not understandable when they reach the servers that redistribute them to the people you addressed them to. Let's say you're with two friends who are talking in a private language only they know: you can't understand anything unless they share the right dictionary with you. Mail With the same philosophy as Telegram, emails with ProtonMail are end-to-end encrypted. To go one step further in data transparency, they give access to their code, enabling people to understand how the service works. Internet searches Today, if you're wondering something, you ask the internet. But to get access to the right information you need to use a search engine, and with ads and other algorithms, your privacy can be compromised. And that's exactly Qwant's motto: being the leading search engine that respects your privacy while preserving net neutrality. Photos management At Zyl, we believe that your photos are the most important data you can own. They're part of your personality, your story, your memories. And they deserve to be protected, while still giving you access to the most useful and smart features of a mobile app. That's why the artificial intelligence applied to sorting your photos, for example, or reliving your best memories runs directly on your phone, without being processed on servers. Cloud storage If you want to enjoy all the benefits cloud storage can offer, Lima can be a great alternative. It's a personal cloud that you safely keep at home and can access wherever you are. No need to send your personal files and data to a cloud service that stores your information somewhere in the world; everything remains in the safest place you know — your home. Voice assistant Voice assistants such as Google Home or Alexa by Amazon can be threatening because you don't know exactly how they work and where your data goes. But at the same time, you want the possibility of having a voice assistant at home to easily stream music, get the news, or control anything else at your place.
Here’s the French alternative: Snips created the first privacy by design voice assistant where all the information is AI-analysed directly on the device, without having to process it somewhere else on the cloud. So this doesn’t mean you should leave all social media, say goodbye to the digital world and retire in a cave. This just means you should be careful with the information you share and be aware of how it is spread. And don’t forget, having nothing to hide doesn’t mean you shouldn’t care. If you want to take the next step in protecting your data, the French startup ecosystem is very proactive regarding data privacy. It’s for exemple the leitmotif of eelo, founded by Gaël Duval, that aims to create an independent smartphone operative system that runs Android phones. It’s not live yet, but definitely something to follow. If privacy by design is a topic that matters to you or your business, please share, clap or get in touch on Twitter or Linkedin to create something bigger :)
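To make the end-to-end encryption idea from the Messaging section above concrete, here is a minimal Python sketch using the PyNaCl library. It is illustrative only, not how Telegram, ProtonMail or any of the services above actually implement their protocols, and the key handling is deliberately simplified.

from nacl.public import PrivateKey, Box

# Each person generates a key pair on their own device; only public keys are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"See you at 8?")

# This ciphertext is all the relay server ever sees; it cannot read the message.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext.decode())

The point of the design is that private keys never leave the devices, so anything stored on intermediate servers stays opaque to the service operator.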
Facebook / Cambridge Analytica scandal: how to protect data that matter
221
facebook-cambridge-analytica-scandal-how-to-protect-data-that-matter-1c1e070687a4
2018-03-28
2018-03-28 12:36:06
https://medium.com/s/story/facebook-cambridge-analytica-scandal-how-to-protect-data-that-matter-1c1e070687a4
false
758
Thoughts, stories & ideas about Zyl. The first AI-powered photo assistant that manages your photos for you and with you, privately and safely. Free on iOS and Android. https://zyl.ai
null
zylapp
null
Zyl-Story
contact@zyl.ai
comet-app
MOBILE APP DEVELOPMENT,PHOTO SHARING,TECHNOLOGY,ARTIFICIAL INTELLIGENCE
zylapp
Privacy
privacy
Privacy
23,226
Ophély Nhem
First Employee & Growth Manager @ Zyl
289394c314b7
ophelynhem
38
44
20,181,104
null
null
null
null
null
null
0
null
0
b85e336854db
2018-06-08
2018-06-08 17:01:08
2018-06-08
2018-06-08 17:04:42
1
false
en
2018-06-08
2018-06-08 17:04:42
4
1c1fa2533169
0.535849
0
0
0
Objective: To fetch the comments of a YouTube video (Baby Driver Trailer #1 (2017) | Movieclips Trailers) for further analysis
5
Extract YouTube Comments | R Programming "Close-up of a laptop screen with lines of code" by Artem Sapegin on Unsplash Objective: To fetch the comments of a YouTube video (Baby Driver Trailer #1 (2017) | Movieclips Trailers) for further analysis Step 1: install and load the required package >install.packages("SocialMediaLab") >library(SocialMediaLab) Step 2: get access to your YouTube API key Log in with your Google account using this link YouTube Data API > Credentials > Create Credentials > API Step 3: Authorise your API read more… www.planetanalytics.in
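The post itself uses the SocialMediaLab R package. As a rough equivalent, here is a hedged Python sketch with the official google-api-python-client, assuming you have already created a YouTube Data API key as described in Step 2. The API key and video ID below are placeholders, not values from the post.

from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"    # the key created in Step 2
VIDEO_ID = "VIDEO_ID_HERE"  # ID of the video whose comments you want

youtube = build("youtube", "v3", developerKey=API_KEY)

comments = []
request = youtube.commentThreads().list(
    part="snippet", videoId=VIDEO_ID, maxResults=100, textFormat="plainText")
while request is not None:
    response = request.execute()
    for item in response["items"]:
        top = item["snippet"]["topLevelComment"]["snippet"]
        comments.append((top["authorDisplayName"], top["textDisplay"]))
    # list_next() follows the pagination tokens until all pages are fetched
    request = youtube.commentThreads().list_next(request, response)

print(f"Fetched {len(comments)} top-level comments")

The resulting list of (author, text) pairs can then be written to a data frame or CSV for the further analysis the post has in mind.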
Extract Youtube Comments | R Programming
0
extract-youtube-comments-r-programming-1c1fa2533169
2018-06-08
2018-06-08 17:04:43
https://medium.com/s/story/extract-youtube-comments-r-programming-1c1fa2533169
false
89
PlanetAnalytics.in is a blog for data analysts and enthusiasts. Its a knowledge sharing platform for anyone and everyone who wants to learn and explore the realm of Data Analytics.
null
planetanalytics
null
PlanetAnalytics.in
info@planetanalytics.in
planetanalytics-in
ANALYTICS,BIG DATA,DATA ANALYSIS,DATA SCIENCE,MACHINE LEARNING
planetanalytics
Data Science
data-science
Data Science
33,617
Manish Gupta
Co-Founder of planetanalytics.in | Trying to Analyse one thing at a time
b875710c1915
manishgupta_41273
26
26
20,181,104
null
null
null
null
null
null
0
null
0
be57722f4594
2018-03-14
2018-03-14 19:23:08
2018-03-14
2018-03-14 19:24:01
5
false
de
2018-03-15
2018-03-15 16:22:11
1
1c1fc19cb3b4
2.108805
7
0
0
I have been living with my virtual companion for 8 weeks. Since then, I have lost 5 kilos. This is what our life together looks like:
5
A virtual companion for your everyday life I have been living with my virtual companion for 8 weeks. Since then, I have lost 5 kilos. This is what our life together looks like: My virtual companion and me in everyday life. In my earlier life I was very athletic. "Growing up", however, turned out above all to mean growing lazy, and in recent years I gained roughly 1 to 2 kilos every year. As part of a small team, I looked for a new way to help myself. The result of our work is a virtual companion that would also like to accompany you: a conversation partner built on findings from nutrition science and psychology. The virtual companion in action What does my virtual companion do? My companion neither nags me to do sport nor to count my calories. Instead, it simply takes an interest in my day and explains to me, in small lessons, how I can help my body burn fat. In the evening I reflect on my day and develop a plan for the following day. In the morning I receive a message asking whether I would like to log my weight. To make it easier to weigh myself every day, I do small exercises that show me that my weight and my figure are separate from my self-confidence. And it works. I look forward to our conversations and have lost more than 5 kilos. Without hunger, without sport, without hard work. Now, let's be honest: really 5 kilos? Between October and December 2017 my weight fluctuated between 77 and 80 kilos. That is 5 kilos more than 4 years earlier. And here is my weight since January 2018. An excerpt from my weekly summary. My weight currently fluctuates between 72 and 74 kilos. As soon as I reach my goal of a steady 72 kilos, my companion will suggest further exercises so that I also keep the new weight in the long term. Well, do you sometimes feel the same way? You can do something about it, or keep crawling along. Try your own companion Are you trying to lose weight and would you be glad to have a helper at your side? Find out whether a virtual companion can help you too lose weight for good. Sign up on our homepage and get free access. It is time we felt good about ourselves. Are you ready?
A virtual companion for your everyday life
38
ein-virtueller-begleiter-für-deinen-alltag-1c1fc19cb3b4
2018-05-14
2018-05-14 20:34:19
https://medium.com/s/story/ein-virtueller-begleiter-für-deinen-alltag-1c1fc19cb3b4
false
338
Scheiterst du mit deinen Gesundheitszielen? Dein notadiet Coach hilft dir. Kein Druck. Dein Tempo. 15€ im Monat.
null
null
null
notadiet
alex.gansmann@gmail.com
notadiet
ABNEHMEN,ERNÄHRUNG,ARTIFICIAL INTELLIGENCE,PSYCHOLOGIE,CHATBOTS
null
Diabetes
diabetes
Diabetes
4,680
Alex Gansmann
Data Scientist
84f318cabe24
alexgansmann
29
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-30
2018-01-30 12:51:05
2018-01-30
2018-01-30 12:53:11
0
false
bg
2018-01-30
2018-01-30 12:53:11
1
1c20104d774a
0.064151
0
0
0
Originally published at www.facebook.com.
5
On third-class citizens, or how technologies and algorithms are building a society of digital slaves Originally published at www.facebook.com.
On third-class citizens, or how technologies and algorithms are building a society of digital slaves
0
о-гражданах-третьего-сорта-или-как-технологии-и-алгоритмы-строят-общество-цифровых-рабов-1c20104d774a
2018-01-30
2018-01-30 12:53:12
https://medium.com/s/story/о-гражданах-третьего-сорта-или-как-технологии-и-алгоритмы-строят-общество-цифровых-рабов-1c20104d774a
false
17
null
null
null
null
null
null
null
null
null
Искусственный Интеллект
искусственный-интеллект
Искусственный Интеллект
174
IT Svit Blog
IT Svit is a reliable provider of comprehensive business solutions. AI, ML, Blockchain, IoT, DevOps and BigData help your business thrive!
f90cf19bf08e
elenasem
109
179
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-09
2018-04-09 05:33:24
2018-04-09
2018-04-09 05:36:35
1
false
en
2018-04-09
2018-04-09 05:36:35
1
1c2227fcf4b
1.969811
0
0
0
Machine Learning no longer remains a buzzword on Buzzfeed. It possesses unmatched potential to lead us into more digital turmoil…
4
Know How Machine Learning Will Impact Ecommerce Machine Learning no longer remains a buzzword on Buzzfeed. It possesses unmatched potential to lead us into more digital turmoil, transforming the way ecommerce does business and the way humans interact with technology. A business that depends on ecommerce needs to stay up to date with the latest digital trends, such as how social media has changed the whole paradigm of digital advertising and how Google's Penguin and Panda updates have impacted organic rankings. But Machine Learning is a more powerful and influential force that may well affect everything. In this post, we will take a quick look at some of the ways ecommerce can and will change with Machine Learning. 1. Intelligent Customer-service Chatbots It is undeniable that customer service can make or break your business. We also know that good customer service requires a good conversation between buyer and seller, which is why chat support has to work seamlessly in an ecommerce business. When a buyer asks a question in chat or raises an issue about a product on social media, a quick and helpful response from a customer service representative can make a big difference to the shopper's experience. But many small and medium-sized enterprises (SMEs) may find it challenging to hire a dedicated team of customer service representatives to handle queries on chat and social media. This is where intelligent customer-service chatbots become very useful. These automated chatbots powered by Machine Learning will be able to handle basic customer service questions in chat sessions and in social media tweets and posts. They can be used in place of human customer service representatives and can easily handle many customer queries, depending on how well the chatbot has been trained. 2. Improved Product Search Machine learning algorithms will play a big role in improving the product search capabilities of ecommerce sites. Currently, most online store searches focus on the keywords entered into the search box, but with improved learning, ecommerce store search will also start considering click rates, conversion rates, customer reviews and ratings, and even product inventory or margin. 3. Predictive "Market Right" Pricing Before machine learning market-right pricing algorithms, online sellers would often engage in margin-slashing price wars with their rivals, especially during the festive seasons. However, with the rise of predictive market-right pricing, online sellers can use data on pricing trends, customer behavior, product demand and product prices to determine the "just right" price for a particular item and a particular customer. With machine learning programs, their algorithms and the applications built on them, online sellers will be able to deliver the right products to the right customers, at the right time and at the right prices.
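As a toy illustration of the "improved product search" idea above, here is a small Python sketch that re-ranks search results using behavioural signals. The field names and weights are assumptions made up for the example, not any particular platform's algorithm.

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    text_match: float   # 0..1 relevance of the query to the title/description
    click_rate: float   # historical click-through rate
    conversion: float   # historical conversion rate
    rating: float       # average review rating, 0..5
    margin: float       # 0..1 normalised margin

# Assumed weights; a real system would learn these from data.
WEIGHTS = {"text_match": 0.5, "click_rate": 0.2, "conversion": 0.15,
           "rating": 0.1, "margin": 0.05}

def score(p: Product) -> float:
    # Weighted blend of keyword relevance and behavioural signals.
    return (WEIGHTS["text_match"] * p.text_match
            + WEIGHTS["click_rate"] * p.click_rate
            + WEIGHTS["conversion"] * p.conversion
            + WEIGHTS["rating"] * (p.rating / 5.0)
            + WEIGHTS["margin"] * p.margin)

results = [
    Product("red running shoe", 0.9, 0.12, 0.03, 4.6, 0.3),
    Product("red dress shoe", 0.8, 0.20, 0.08, 4.2, 0.5),
]
for p in sorted(results, key=score, reverse=True):
    print(f"{p.name}: {score(p):.3f}")

The point is only that ranking stops being a pure keyword match and starts folding in the signals the post lists; in practice the weights would come from a learned model rather than a hand-tuned dictionary.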
Know How Machine Learning Will Impact Ecommerce
0
know-how-machine-learning-will-impact-ecommerce-1c2227fcf4b
2018-04-09
2018-04-09 05:36:37
https://medium.com/s/story/know-how-machine-learning-will-impact-ecommerce-1c2227fcf4b
false
469
null
null
null
null
null
null
null
null
null
Ecommerce
ecommerce
Ecommerce
46,740
Ravi Kumar
Experienced professional having an in-depth knowledge of internet of things, big data analytics, machine learning
d120761b1db
fullstackanalytics
10
4
20,181,104
null
null
null
null
null
null
0
null
0
f4e4fee4b1de
2018-08-12
2018-08-12 15:40:42
2018-08-28
2018-08-28 14:08:30
4
false
en
2018-08-29
2018-08-29 11:17:04
3
1c22d1b8bcf6
3.398113
8
0
0
Digitisation and the ever-accelerating human desire to dissect and understand our environments have fuelled an unprecedented rate of…
5
A step towards an open financial data market Digitisation and the ever-accelerating human desire to dissect and understand our environments have fuelled an unprecedented rate of generation, collection and analysis of vast amounts of data. The International Data Corporation (IDC) estimates that by 2025 the amount of data collected annually will surpass a staggering 163 zettabytes (163 trillion gigabytes). In financial markets specifically, trillions of data points about buy and sell orders are being collected and curated every day. As this abundance keeps growing, the infrastructure for accessing verified, high-quality financial data lags behind dramatically. Accessing a broad source of high-quality financial data today requires significant investment, often in the tens of thousands of dollars per year per account. This effectively excludes smaller players from entering and participating in financial markets to the extent that larger corporations and financial players can. Retail investors, startups and SMEs can seldom afford such rates, and the use of lagging or incomplete data puts them at a disadvantage in the markets. Another inefficiency of our current financial data market is the plethora of data silos. While Bloomberg has a quasi-monopoly on the provision of financial market data, a diverse range of peripheral data is offered by the many market players that often employ proprietary collection techniques. What is more, new sources of data like the crypto-asset markets have not yet been factored into the equation. This siloed datasphere was born out of the need for specialisation, making it more cost-effective for organisations to collect data for specific market sectors and demands, yet today's surge of new fintech business models and the availability of new data types and sources drive the need for a more holistic and efficient way to get access. Bloomberg FXC. A single Bloomberg terminal subscription can cost more than 20,000 USD per year. A new market infrastructure for financial data is overdue The rapid emergence of blockchain technology and crypto-assets has created an entirely new market with a dedicated range of data and providers. Although the technology emerged 10 years ago, market maturity and mass adoption only now seem within reach. With this in mind, it is not surprising that the infrastructure for providing data is not yet fully developed and exhibits startling inefficiencies. Questionable data sourcing, calculation irregularities and suspicions of manipulation have given rise to harsh criticism of current data providers in the crypto community; the most recent data glitch at Coinmarketcap is a case in point. All of the above factors call for an overhauled data provision infrastructure: one that is accessible to investors of all types and sizes and that is characterised by efficiency, transparency and trust. With the availability of blockchain technology, this is finally possible. We are able to create a transparent infrastructure that enables a community-owned process of sourcing, vetting and standardising data. This is exactly what DIA sets out to accomplish: the creation of a non-profit platform for open-source and verified financial data, sourced and made available through a transparent, community-driven process. 
DIA hosts an ecosystem of data providers, data analysts and DApps DIA is a not-for-profit association that provides a platform where the need for various types of data can be articulated and a community of analysts, developers and data scientists is incentivised to provide it at the highest possible quality. An open API (so-called "oracles") provides access to this vetted data for all kinds of decentralised applications, such as trading screens, content portals, automated calculation agents and many more. This ecosystem of data providers, data analysts and data users, together with the attached marketplace, is one of many building blocks, but a fundamental one, in the creation of an open financial data market. The DIA token incentivises the community to drive high-quality outcomes. Today marks the launch of the beta version of DIA, which provides the basic infrastructure for, and a first step towards, a truly open and democratic market for financial data, from both the crypto and traditional markets. In the coming months, DIA and its community of analysts, developers and data providers will begin to source, verify and make available trusted financial data, continue developing the platform and drive the adoption of an open-source financial market data infrastructure that improves access and inclusion for all market participants. DIA (Decentralised Information Asset) is accessible at www.diadata.org.
A step towards an open financial data market
237
a-step-towards-an-open-financial-data-market-1c22d1b8bcf6
2018-08-29
2018-08-29 11:17:04
https://medium.com/s/story/a-step-towards-an-open-financial-data-market-1c22d1b8bcf6
false
715
Updates and insights about the developments on DIA — an open-source platform for transparent, reliable financial and digital asset data built on blockchain.
null
DIAfinance
null
DIA Insights
info@diadata.org
dia-insights
DATA,FINANCE,CRYPTOCURRENCY,BLOCKCHAIN,OPEN SOURCE
dia_data
Fintech
fintech
Fintech
38,568
DIA
DIA is a Swiss non-profit association that provides open-source access to crowd-verified financial and digital asset data.
94c5afe7e8b4
diadata_org
13
4
20,181,104
null
null
null
null
null
null
0
null
0
5cfda4ec81
2017-11-01
2017-11-01 17:38:47
2017-11-01
2017-11-01 17:44:48
1
false
en
2017-12-11
2017-12-11 13:01:41
2
1c2337337b42
3.479245
2
0
0
David Byrne has published an article addressing non friction, that is, the gradual elimination of contact and friction between humans in…
5
Google Images I’m with you, David Byrne: do we want to live without friction? David Byrne has published an article addressing non friction, that is, the gradual elimination of contact and friction between humans in labor relations today and in the future. Concerned about the negative impact of technology in human relations, the musician generated discussion. Do we want to live without friction? (this opinion article was first published in the Portuguese news media Visão) The question that immediately pops-up my mind related to David Byrne’s article “Eliminating the Human” is: considering the elimination of friction in work and business relations, will we gradually assume automated behaviors and become mechanic in thought, or instead will we extrapolate dehumanization and thus potentiate in each one of us what distinguishes humans from machines? For the sake of humanity’s mental and spiritual health, we need answers and the antidote to the side effects of the mechanization of life. The answer, at this stage, will arise from the question: “What then distinguishes us from robots that can be programmed to mimic almost everything?” It is important for each one of us to reach their own conclusions and act accordingly. I even say that we must begin right now re-humanizing the species because we can be close to the extinction of the Homo Sapiens Sapiens as we know it. Becoming bionics and a blend between people and robot is a real possibility in the near future. By delegating more or less consciously human tasks and organic choices to the machines, we are thus bypassing the human factors of contact and decision. And therefore we retire ourselves from the most advanced and perfect technology we were born with and programmed for as human beings, the most developed that exists and privileges social interaction. Large industrials and investors see human friction as a bottleneck that causes them to lose a lot of money and time, whether in sales, customer service, product returns, or in the procedures themselves. By eliminating human friction putting machines or robots performing tasks instead of people, speeds up the processes, saves money, and eliminates annoyances. Megatrends such as automation, digitization and the internet of things, among others, are introducing and will introduce more profound changes in work, education and human relations. Positive and negative. Let’s look at current businesses that exemplify how things can turn out to be massively in the near future: Uber, Spotify or Amazon. By digitizing or removing intermediaries the processes become easier, faster and cheaper. The negative side of it has to do with the potential automation of human beings in face of the changes imposed by the system of work, commerce and the like. We can become similar to machines while they resemble more like us. And this does not sound good at all. Removing humans from the interaction process, whether eliminating individuals from shops, customer services, customer support centers, hospitals or driving cars etc., could mean the beginning of the decline of human skills and our innate ability to intuit, infer, understand, accept and negotiate. Will we thus become colder and more inflexible to error, failure and differences? Will we shape our behaviors and criteria according to a normalization imposed by artificial intelligence systems? “Remove humans from the equation, and we are less complete as people and as a society,” says Byrne in the article. 
By eliminating the friction of ordinary life, such as impasses or complaints, we erase from the equation the unique factors that mark the boundary between human and humanoid: accepting the other, knowing how to deal with differences, discussing and abolishing prejudices, arguing in order to integrate opposing ideas, or, even more complex, looking into someone's eyes and knowing how to read what goes on inside. Taken to the extreme, it becomes possible to slowly erase humankind's emotions and to control the population's birth rate by leaving most alienated individuals to have emotional relationships with operating systems or sex with robots. It would only take someone with power and considerable money to set this goal in a world where ethics is fragmented and citizens are deprived of rights and privacy and dependent on virtual life. The claim of some experts, detached from reality, that we will become gods by overcoming death through highly sophisticated technological systems such as DNA editing reveals that there are megalomaniacs playing with the future using sophisticated tools beyond the reach of the common citizen, conferring dominion and power on some while placing the majority in a submissive role. It is therefore foreseeable that there is, or will be, a war to dominate technological power, against which we have to be vigilant in order to protect ourselves as a species and preserve our democracies. Against the radical attitudes that led to David Byrne's realistic technopanic, we have to start re-humanizing our minds and actions, applying unique and beautiful human skills (compassion, mutual help, kindness) while not allowing aberrations to overcome good sense. Soon each country will have to review its ethics in order to adjust to new and unexpected realities. As humans, we can expect everything from ourselves, from the worst to the best.
I’m with you, David Byrne: do we want to live without friction?
15
im-with-you-david-byrne-1c2337337b42
2017-12-11
2017-12-11 13:52:54
https://medium.com/s/story/im-with-you-david-byrne-1c2337337b42
false
869
The latest and greatest updates about the Future of Work, from the CodeControl crew.
null
codecontrol.io
null
The Future of Work
hello@codecontrol.io
future-of-work
FUTURE OF WORK,REMOTE WORKING,TECHNOLOGY,PRODUCTIVITY,FREELANCING
CodeControl_
Future Of Work
future-of-work
Future Of Work
8,540
Carla Isidoro
Trends Columnist | Communication manager in Lisbon
ff005e6a3092
carlaisidoro
8
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-26
2018-09-26 11:29:33
2018-09-26
2018-09-26 11:44:25
2
false
en
2018-09-26
2018-09-26 11:44:25
67
1c238a545f8e
4.602201
1
0
0
Infographic van de week
1
Datanieuws binnen en buiten het Rijk 26–09–2018 Infographic van de week Van IenW Van Nesta Datavacatures bij het Rijk /DICTU: (senior) Data Scientist/consultant OCW: Data-analist Fin/ BD: SQL Data Analist en Senior Data Analist, Data Engineer, Data scientist (hbo) EZ/ DICTU: (senior) Data Scientist/consultant en BI specialist / consultant JenV/ NFI: Forensisch data scientist IenW/ RWS: Specialistisch Adviseur Advies en Toetsing Geodata ACM: Data-analist Agenda Do 27 sept, 14.00–17.00u, 2e Data Meet-up Rijk 27–09 Stationscollege: Big Brother is guiding us (door Nart Wielaard) 1 okt 12.00–13.30u, Turfmarkt, Den Haag: Lunchlezing: Toepassingsmogelijkheden Blockchain in het identiteitsdomein 1–11 okt, Den Haag: Serious game ‘Crisis’ of Serious game ‘Keteninformatie’ 5 okt (Universiteit Leiden): Big data en data science in het pensioendomein 8 okt (Hartstichting) / Big Data & Gezondheid. Denk mee over onderzoeksvoorstellen die ingediend zijn voor het programma Big Data & Gezondheid. 11–10: Kennisplatform (Big) data over de NDA (zie links) 12 okt: Data-challenge: Versterk de Petitie 18 okt (Den Haag): Aftrap van het actieplan Open Overheid 2018–2020: Open moet het zijn! 25–10, 12.00–16.00u, Utrecht: Workshop ‘DEDA (De Ethische Data Assistent) in de praktijk’ 29 okt 11:45–14:00u, Turfmarkt, Den Haag: Lunchlezing “De digitale kooi” door Arjan Widlak 6 nov, 16.00u, Utrecht: OWNH bijeenkomst — Data-science in de watersector 08–11: JenV Symposium ‘Zorgvuldig hergebruik van JenV-data op privacy bestendige wijze’ 15 nov 14:30–19.00, WTC Den Haag: bestuur en democratie in de data-maatschappij, met gastspreker: Viktor Mayer-Schönberger. 20–11: innovatiecongres JenV Datanieuws binnen het Rijk Deze week volg je me op Instagram, met oa morgen een verslag van onze tweede data meet-Up Rijk Ook dit jaar verschijnt weer een uitgave van Trends en Cijfers van onze hand. Hierin staan personele en financiële kengetallen en infographics over de 14 arbeidsvoorwaardelijke sectoren van de overheid. De uitgave is in te zien via de kennisbank openbaar bestuur Openstelling ov-data laat op zich wachten Tweakers: D66 pleit voor oprichting waakhond om werking algoritmes te controleren Van het Rathenau Instituut: Film: heeft iedereen toegang tot de digitale samenleving? EZK: Innovation Expo 2018: innovatieve fietsen, drone-eiland, de Nieuwe Winkelstraat, innovatielab TNO. Een interview met de projectleider Helmy van Erp Op donderdag 11 oktober (9.30–13.30 uur in Den Haag) is de 5e netwerkbijeenkomst van het Kennisplatform (Big) Data met als thema”de Nationale Data Agenda (NDA)”: De NDA wordt samen met stakeholders en experts opgesteld op verzoek van het ministerie van BZK. In de NDA komt te staan hoe data (nog) beter ten goede kan komen aan beleidsvorming en het oplossen van maatschappelijke vraagstukken, maar er wordt ook nadrukkelijk aandacht besteed aan de bescherming van de rechten van burgers. Aanmelden: mail naar kennisplatform@ictu.nl en vermeld uw naam, functie en overheidsorganisatie. Datanieuws buiten het Rijk Binnen overheid: Van nesta.org: 10 questions to answer before using AI in public sector algorithmic decision making Govexec: Social Science? Data Science? Evidence-Based Government Needs Both Livestream: What role should artificial intelligence play in government? Medium: On Being CDO of San Francisco Economist: How Europe can improve the development of AI Apolitical.co: London’s chief digital officer on his first year transforming city government UTwente: Met data herkennen en voorspellen. 
Een systeem dat aan de hand van een digitale foto tot op 80% nauwkeurigheid kan vaststellen of iemand CEO is van een onderneming of niet. SMU leverages multi-disciplinary expertise, launches Centre for AI and Data Governance: Made possible by $4.5 mil research grant from NRF and IMDA, new Centre will support the work of the Advisory Council on the Ethical Use of AI and Data Data en AI algemeen Van Executive people: Vijf tips voor het inzetten van AI Van ECP: Introductie Big Data en onderzoek voor MKB bedrijven NYT: What China Can Teach the U.S. About Artificial Intelligence. Visionary research is no longer the most important element of progress. Over robots en de klantenservice: In gesprek met een robot FD: Algoritme WBSSP voorspelt de dood van een start-up QZ.com: An algorithm is learning to detect whether patients will wake from a coma McKinsey: Notes from the AI frontier: Modeling the impact of AI on the world economy Impact op de arbeidsmarkt: Economist: How Europe can improve the development of AI. Its real clout comes from its power to set standards NYTimes: The Week in Tech: Are Robots Coming for Your Job? Eventually, Yes. Ethiek & Privacy: Over fairness van algoritmen: Techradar: Certification for AI technology could soon be a reality. A spanner in the works of the robot uprising NYTimes: The Week in Tech: Are Robots Coming for Your Job? Eventually, Yes. College (Video) van Hetan Shah: How can we put data ethics at the heart of the discussion? What can we do to use data for the common good? How can we tackle data monopolies? Irish Times: Philosophers key as artificial intelligence and biotech advance. Just because something can be done with technology doesn’t mean it has to be done Watch out, algorithms: Julia Angwin and Jeff Larson unveil The Markup, their plan for investigating tech’s societal impacts: “Journalists in every field need to have more skills to investigate those types of decision-making that are embedded in technology.” Trouw: Een verantwoord gebruik van kunstmatige intelligentie vraagt niet alleen om transparantie Computable: IBM en Deloitte willen angst voor ai tegengaan. Implementatie kunstmatige intelligentie stagneert door zorgen Undark.org: New Algorithms Perpetuate Old Biases in Child Welfare Cases. I believe unfair predictive software may have influenced an investigator’s recommendation to take my children away. I’m surely not alone. NYT: don’t call it privacy. Amazon, Google and Twitter executives are heading to Congress. Should legislators give consumers control over the data companies have on them? Guardian: Don’t trust algorithms to predict child-abuse risk Vice: An AI Analyzed My Twitter Feed and Discovered I’m a Shithead Verdict.co.uk: The Foundation for Responsible Robotics (FRR) FRR announces quality mark for responsible robotics Berkman Klein Luncheon Series: Bruce Schneier “Click Here to Kill Everybody” Bedrijfsvoering & HR analytics: WSJ: Artificial Intelligence: The Robots Are Now Hiring See how new data-science tools are determining who gets hired, in this episode of Moving Upstream HR Trend: How AI can help to decrease bias in sourcing and selection The Forum Network: Not Just a Facebook Problem: Ethical data collection must be employed at work HR observer: Don’t forget the ‘H’ in HR: Ethics & People Analytics — Part One If You Haven’t Invested in Analytics, Start Now. 
Here’s How Recruiters Struggle with Predictive Data Analytics: HR’s lack of data science acumen continues to be a challenge Voor de Nerds Flowingdata: How to Make a Tiled Bar Chart with D3.js The data scientist is in. ASU Library opens center for data science, research collaboration; check out the lab during Data Science Week open-house events
Datanieuws binnen en buiten het Rijk 26–09–2018
1
datanieuws-binnen-en-buiten-het-rijk-26-09-2018-1c238a545f8e
2018-09-26
2018-09-26 11:44:25
https://medium.com/s/story/datanieuws-binnen-en-buiten-het-rijk-26-09-2018-1c238a545f8e
false
1,118
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Betty Feenstra
Data driven, Head of Policy Information @ DG Public Administration, Ministry Internal Affairs and Kingdom Relations, Amsterdam, NL
6768e21844e9
bettyfeenstra
93
80
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-28
2018-07-28 16:04:19
2018-07-28
2018-07-28 16:32:39
1
false
en
2018-07-28
2018-07-28 16:32:39
7
1c23d30214c7
5.241509
25
0
0
Steve Deng: “Some areas of the tech industry still need to develop smart chip manufacturing and other opportunities to compete with Europe…
5
People’s Daily’s Interview with Steve Deng Steve Deng: “Some areas of the tech industry still need to develop smart chip manufacturing and other opportunities to compete with Europe and the US” MATRIX’s Chief of AI, Steve Deng, recently shared his opinions on intelligent chipsets in an interview with People’s Daily, one of the biggest online media networks in the world. Again we’ve translated the article into English. Source: http://ydyl.people.com.cn/n1/2018/0723/c412092-30164159.html?from=groupmessage In the past year, the ZTE incident and the trade war between China and the US has aroused various social circles’ reflection on China’s current development and also backwardness in high-tech fields, in particular, the fields of chips (integrated circuits) and artificial intelligence. For this, Steve Deng indicated during a recent media interview that in such fields as integrated circuit design and core components manufacturing, it may take several generations of unrelenting effort and accumulation for China to keep up with European countries and the US. But in the field of chip manufacturing, where sensors are combined with intelligent processing, China has obvious advantages, and the future looks bright. Industry growth is still necessary in order to catch up with Europe and the US in the field of integrated circuits. “The ZTE incident has sounded an alarm for China’s domestic chip R&D, and also exposed the differences between China, Europe and the US in high-tech fields such as chip manufacturing,” said Steve Deng. The manufacture of integrated circuits is divided into such processes as designing and manufacturing. It is often that there are billions of transistors on an integrated circuit. The designing of integrated circuits is now totally accomplished through automatic tools, but these mainstream design tools all belong to the US, none of them is owned by China. In some senses, the differences in terms of manufacturing is even bigger. For example, so far, China still cannot make photo-etching machines, which is the core equipment needed for manufacturing integrated circuits. Although the government has given lots of support and input to many projects, it can still be said to be in a relatively primative state. Steve Deng went to the US to study at the end of the 1990s, earning a doctorate in Electronics and Computer Engineering at Carnegie Mellon University. Steve Deng’s initial research was oriented around integrated circuits and CAD, he later moved on to researching GPGPU, and gradually focused in on the field of machine learning. During his ten years of work in the US, he accumulated a lot of technical and management experience in chip research and industry. Upon returning to China in 2013, Steve Deng acted as an associate research fellow and PhD supervisor at the School of Software at Qinghua University, focusing on research in the fields relating to computer system structures, artificial intelligence and industrial big data. Regarding the questions asked by many people about China’s potential capacity for manufacturing integrated circuits, Steve Deng notes that China actually has quite a large capacity for producing integrated circuits, but it mainly consists of OME factories or integrated circuit manufacturers. For example, SMIC, domestically the highest performing company, ranks fifth in the world in terms of capacity, and is two or three generations behind Intel in terms of advanced technological processes. 
According to reports, at present, the most advanced technology in the world for manufacturing chips is in the hands of American and Taiwanese enterprises, and are divided into two camps. The first camp currently only has Intel in it, Intel’s model is to design its own integrated circuits as well as its own production line, giving the company the ability to control their circuits and manufacturing process, at the same time. Intel has been a world leader since the 1970s. Another camp has TSMC as its representative, an OEM enterprise, manufacturing various chips with common purposes. More opportunities in the smart chip manufacturing industry Although it will take a lot of time to catch up and accumulate market share in respect of integrated circuits designing and core technologies, Steve Deng points out that China has a huge demand for chips in such fields as Intelligent IoT (internet of things). Therefore, a chip manufacturing industry that combines sensors and intelligent processing will have more opportunities for development. “CPU and GPU are still difficult to catch up in. This is because the focus of CPU is on an industry chain, and though China will have no difficulty in independently designing a CPU, it will be difficult to get others use it.” Steve Deng adds that, for example, currently every computer uses an Intel processor, and this processor has an operating system built to support it, but a new CPU will have no operating system to support it and no application software, thus it will be very hard to increase its market share and popularity. Steve Deng believes that current Chinese economic development puts a huge demand for intelligent terminals. For example, the transmission capacities for high-speed rail, aircraft and intelligent terminals is not very high because there is no wireless network, and the volume of data collected is far more than that of the data transmitted. Therefore, a portion of data can be processed first in intelligent terminals before being transmitted, thus significantly increasing the efficiency of data transmission. As a result, the chips combining sensors and intelligent processing have a bright future, because China has its own demands as well as a dominant advantage in manufacturing. “In the future, AI chips will command substantial market share. Take intelligent terminals, we can use AI technology to process them. These chips are used to support Intelligent IoT, and China has huge demand for these kinds of chips. If development is concentrated on this, it will be an advantage.” Steve Deng described. The major role of massivly talented people and good technical culture “Whether it is about catching up with and developing the ability to design integrated circuits or the technological process of manufacturing, the key lies in talented people.” Steve Deng believes that a good technical culture and philosophy are required to retain and develop professional talents. Steve Deng sees the US as an example of this, where, some professional technical companies always have a group of senior engineers, whose careers are dedicated to making CPUs, they have done so from the first generation of a CPU, and continue doing so up until they reach the age of fifty or sixty. They have gone through every generation of technology as well as every setback, and their experience is so valuable that it is very hard to replace. Many such technical experts simply love to study technology, and it can be difficult to entice them with high salaries into working for others. 
Moreover, these professionals are at different positions in the corporate hierarchy, some are responsible for management, some are engaged in technology, and managers are not necessarily higher than engineers, but are still very valuable to their team of engineers. “It is hard to have such a system in China, where, one will have nothing to do with technology as soon as reaching a certain age, resulting in the lack of experienced senior engineers, and this is a difficulty. It is necessary to cultivate a good technical culture like that in some American tech companies,” said Steve Deng. However, Steve Deng also indicates that, seen from a near- and long-term perspective, although China cannot be compared to the US in terms of historical accumulation, the talented people in reserve for integrated circuits is relatively high in numbers, and a great number of young talent is trained at universities every year, and this is an advantage. As for technical breakthroughs and organization, Steve Deng suggests that lessons can be learned from some research institutes in the US, and it is necessary to encourage projects that are small in scale but high in quality, each project is not particularly large but has its own special goals. About MATRIX: Website | Telegram | Twitter | Reddit | Facebook |White Paper
People’s Daily’s Interview with Steve Deng
470
peoples-daily-s-interview-with-steve-deng-1c23d30214c7
2018-07-30
2018-07-30 12:33:03
https://medium.com/s/story/peoples-daily-s-interview-with-steve-deng-1c23d30214c7
false
1,336
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
MATRIX AI NETWORK
An open source public intelligent blockchain platform
ad51c60ef692
matrixainetwork
1,003
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-20
2018-06-20 14:38:45
2018-06-20
2018-06-20 14:39:48
0
false
en
2018-06-20
2018-06-20 14:39:48
0
1c25f714a69b
0.071698
0
0
0
null
5
What if you had an interactive helping hand for your citizen developers? Learn more about our AI-Assisted Development
What if you had an interactive helping hand for your citizen developers?
0
what-if-you-had-an-interactive-helping-hand-for-your-citizen-developers-1c25f714a69b
2018-06-20
2018-06-20 14:39:50
https://medium.com/s/story/what-if-you-had-an-interactive-helping-hand-for-your-citizen-developers-1c25f714a69b
false
19
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mendix
Mendix is the app platform company for the enterprise. We enable companies to build, integrate and deploy web and mobile apps fast.
ba482148ea26
Mendix
614
1,072
20,181,104
null
null
null
null
null
null
0
null
0
98e37200303a
2017-12-19
2017-12-19 15:56:06
2017-12-19
2017-12-19 20:35:52
8
false
en
2017-12-20
2017-12-20 04:44:23
2
1c26a93744a5
2.095597
2
0
0
Authors: M.Çağdaş ÇAYLI , Buğrahan AKBULUT
3
[WEEK 5] Sound of The City Authors: M.Çağdaş ÇAYLI, Buğrahan AKBULUT This week we finally got some concrete results from our program: we tried Neural Network (NN) and Support Vector Machine (SVM) techniques on 8,750 training examples. In training, we split our data into two parts after shuffling the data matrix: the first part for training (8,000 *.wav sounds) and the second for validation (750 *.wav sounds). Next we passed our training matrix to both the SVM and the NN. Because SVM algorithms are not scale invariant, we scaled each attribute of the data matrix to the [-1, 1] interval (we applied this to both the training and validation matrices to obtain meaningful results). We did not use this approach for the NN because of its characteristics. For validation, our NN accuracy results with different activation functions are shown in the Neural Network Results chart. Among these functions, the logistic function gave the best results, so we report further results for the logistic function below (we played with the number of neurons, the number of hidden layers, the learning rate and the batch size; result charts: number of neurons, number of hidden layers, batch size, learning rates). SVM results with different kernel functions and C values follow. Note: in the table below all results were obtained using "ovr" as the decision function (we also tried "ovo", but the results did not change much). SVM Results Since we got the best results from the NN, we will use it in later experiments and upgrades for our project. Some Useful Resources 1.4. Support Vector Machines - scikit-learn 0.19.1 documentation The support vector machines in scikit-learn support both dense ( numpy.ndarray and convertible to that by numpy.asarray…scikit-learn.org http://scikit-learn.org/stable/modules/neural_networks_supervised.html
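For readers who want to reproduce a setup like this, here is a minimal scikit-learn sketch of the split, the [-1, 1] scaling for the SVM and the logistic-activation MLP. The feature matrix is a random placeholder, and the layer sizes, C value, learning rate and batch size are assumptions, not the exact values tuned in the project.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder features and labels standing in for the 8,750 .wav clips;
# the real features come from the project's audio feature extraction.
rng = np.random.default_rng(0)
X = rng.normal(size=(8750, 40))
y = rng.integers(0, 10, size=8750)

# 8,000 clips for training, 750 held out for validation, after shuffling.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=750, shuffle=True, random_state=0)

# SVMs are not scale invariant: scale each attribute to [-1, 1],
# fitting the scaler on training data and applying it to both sets.
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)
svm = SVC(kernel="rbf", C=1.0, decision_function_shape="ovr")
svm.fit(scaler.transform(X_train), y_train)
svm_acc = accuracy_score(y_val, svm.predict(scaler.transform(X_val)))

# The post's best MLP used the logistic activation; the other
# hyperparameters here are placeholders for the values they swept.
mlp = MLPClassifier(hidden_layer_sizes=(100,), activation="logistic",
                    learning_rate_init=0.001, batch_size=200,
                    max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
mlp_acc = accuracy_score(y_val, mlp.predict(X_val))

print(f"SVM validation accuracy: {svm_acc:.3f}")
print(f"MLP validation accuracy: {mlp_acc:.3f}")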
[WEEK -5 ] Sound of The City
12
week-5-sound-of-the-city-1c26a93744a5
2018-01-24
2018-01-24 12:10:04
https://medium.com/s/story/week-5-sound-of-the-city-1c26a93744a5
false
255
Course Projects for Introduction to Machine Learning, an undergraduate class at Hacettepe University — This semester the theme is Machine Learning and The City..
null
null
null
bbm406f17
null
bbm406f17
MACHINE LEARNING
null
Machine Learning
machine-learning
Machine Learning
51,320
çağdaş çaylı
null
a43873438b7d
cagdas_cayli
1
2
20,181,104
null
null
null
null
null
null
0
null
0
b230ea2a6eb8
2017-07-03
2017-07-03 11:45:40
2017-07-03
2017-07-03 15:49:11
1
false
en
2017-07-03
2017-07-03 15:49:11
9
1c27764b7e62
3.569811
298
17
0
Chatbots can schedule meetings, tell you the weather, and provide customer support. And that’s just the beginning.
4
11 Best Uses of Chatbots Right Now Chatbots can schedule meetings, tell you the weather, and provide customer support. And that’s just the beginning. Want to order pizza, schedule a meeting, or even find your true love? There’s a chatbot for that. Just as apps once were the hot new thing that would solve whatever problem you had back in 2009, now we’re moving into the age of chatbots. Chatbots make life even easier for consumers. With chatbots, there’s no more long waits on hold to talk to a person on the phone or going through multiple steps to research and complete a purchase on websites. Millions of people already get it. They’re using chatbots to contact retailers, get recommendations, complete purchases, and much more. Adoption of chatbots is increasing. People are discovering the benefits of chatbots. All of this is good news for entrepreneurs and businesses because pretty much any website or app can be turned into a bot. Now is the perfect time to hop on the bandwagon. Even I’ve jumped on the bandwagon with my new startup. What’s so great about chatbots? Check out these 11 interesting examples of ways you can use chatbots right now. 1. Order Pizza It’s ridiculously easy to order pizza with the help of chatbots. You can order by texting, tweeting, voice, or even from your car. Domino’s was one of the early adopters of chatbots. Today, Domino’s lets you easily build a new pizza (or reorder your favorite pizza) and track your order all from Facebook Messenger. 2. Product Suggestions Many consumers know they want to buy some shoes, but might not have a particular item in mind. You can use chatbots to offer product suggestions based on what they want (color, style, brand, etc.) It’s not just shoes. You can replace “shoes” with any other item. It could be clothes, groceries, flowers, a book, or a movie. Basically, any product you can think of. For example, tell H&M’s Kik chatbot about a piece of clothing you have and they’ll build an outfit for you. 3. Customer Support Last year, brands including AirBnB, Evernote, and Spotify started using chatbots on Twitter to provide 24/7 customer service. The goal of these customer support chatbots is to quickly provide answers and address customer complaints, or simply track the status of an order. 4. Weather There are numerous weather bots to choose from. Most are pretty basic, though a few are designed to be a bit more fun. You can use these to ask about the current conditions in your area and find out whether you should bring the umbrella before you leave for work. Some bots allow you to set regular reminders for a certain time of day. 5. Personal Finance Assistance Chatbots make it easy to make trades, get notifications about stock market trends, track your personal finances, or even get help finding a mortgage. Banks have created chatbots to let you check in on your account, such as your current balance and most recent transactions. And there are tax bots that help you track your business and deductible expenses. 6. Schedule a Meeting With so many schedules to juggle, setting up meetings can be a pain. Unless you let a chatbot do the work for you. Meekan is one such example. Simpy request a new meeting and this Slack chatbot will look at everyone’s calendars to find times when everyone is available. 7. Search for & Track Flights You can use chatbots to get some vacation inspiration. Others will let you search for and compare flights based on price and location. 
Kayak’s chatbot even lets you book your flights and hotels entirely from inside Facebook Messenger. Once you’re all booked, there are other chatbots that will let you track current flights, wait times, delays, and more. 8. News Chatbots help you stay up to date on the news or topics that matters to you. You can get the latest headlines from mainstream media sources like CNN, Fox News, or the Guardian. Or you can get the latest tech headlines from TechCrunch or Engadget. 9. Find Love A match made by chatbots? It could happen. Instead of swiping left or right on an app, you could use Foxsy. This Messenger bot promises to help you find a “beautiful and meaningful connection with the right person.” 10. Send Money You can easily send payments to your team or friends with chatbots. All you have to do to send money on the Slack PayPal account is type /paypal send $X to @username. That’s it. Crazy simple, right? 11. Find a Restaurant Where do you want to eat tonight? Not sure? Ask a chatbot. Much like the product recommendation chatbots, restaurant chatbots can provide recommendations based on cuisine, location, and price range. Some chatbots will even make reservations for you or take your order online. Summary These are just 11 examples of how businesses are already using chatbots. There are nearly limitless possibilities for what can be done with chatbots. So don’t miss out on this huge opportunity to help, engage, or sell to your customers. If you enjoyed reading this article, please recommend and share it to help others find it! About The Author Larry Kim is the CEO of Mobile Monkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.
11 Best Uses of Chatbots Right Now
730
11-best-uses-of-chatbots-right-now-1c27764b7e62
2018-06-06
2018-06-06 14:56:31
https://medium.com/s/story/11-best-uses-of-chatbots-right-now-1c27764b7e62
false
893
We publish stories, videos, and podcasts to make smart people smarter. Subscribe to our newsletter to get them! www.TheMission.co
null
TheMissionHQ
null
The Mission
Info@TheMission.co
the-mission
TECH,ENTREPRENEURSHIP,STARTUP,LIFE,LIFE LESSONS
TheMissionHQ
Bots
bots
Bots
14,158
Larry Kim
CEO of MobileMonkey. Founder of WordStream. Top columnist @Inc ❤️ AdWords, Facebook Advertising, Marketing, Entrepreneurship, Start-ups & Venture Capital 🦄
81b376bf1c56
larrykim
194,910
5,123
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-14
2018-08-14 05:06:29
2018-08-14
2018-08-14 05:08:22
0
false
en
2018-08-14
2018-08-14 05:08:22
1
1c28296e3b53
1.532075
0
0
0
[PDF] Download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow Ebook | READ…
1
DOWNLOAD in PDF Machine Learning A Journey from Beginner to Advanced Including Deep Learning Scikit-learn and Tensorflow FULL-PAGE [PDF] Download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow Ebook | READ ONLINE Download at http://ebookcollection.space/?book=1723484725 Download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow read ebook Online PDF EPUB KINDLE Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow pdf download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow read online Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow epub Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow vk Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow pdf Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow amazon Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow free download pdf Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow pdf free Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow pdf Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow epub download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow online Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow epub download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow epub vk Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow mobi Download Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow PDF — KINDLE — EPUB — MOBI Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow download ebook PDF EPUB, book in english language [DOWNLOAD] Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow in format PDF Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow download free of book in format PDF #book #readonline #ebook #pdf #kidle #epub
DOWNLOAD in PDF Machine Learning A Journey from Beginner to Advanced Including Deep Learning…
0
download-in-pdf-machine-learning-a-journey-from-beginner-to-advanced-including-deep-learning-1c28296e3b53
2018-08-14
2018-08-14 05:08:23
https://medium.com/s/story/download-in-pdf-machine-learning-a-journey-from-beginner-to-advanced-including-deep-learning-1c28296e3b53
false
406
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
fandel
null
2e24e0c99030
tyoussef.a
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-12
2018-05-12 05:58:10
2018-05-13
2018-05-13 01:59:34
0
false
en
2018-05-13
2018-05-13 01:59:34
0
1c28bd48d322
4.392453
0
0
0
Hi. Like everyone else i too got into the AI hype last year around this time of the year.With 10 years of work experience in bank and with…
5
What I learnt from AI in the last one year. Hi. Like everyone else i too got into the AI hype last year around this time of the year.With 10 years of work experience in bank and with no specific specialisation in technology . I jumped the band wagon. My colleagues at bank think , I may be nuts. My wife believes that I have lost my head-office , working at the head office of the bank. Anyways she supports me. What else choice does she have. So what did I learn the last one year. First fairy tales may become true. Yes believe me I really feel so. Jokes apart this has been the most phenomenal year of my life. Working for 12 hours a day 6 days a week at bank and after that coming back home to study coding esp python and it’s machine learning siblings has been daunting task. Like all of my other planned activities this one was also perfectly unplanned. At one time I felt I would be an AI expert in like one year. Well Siraj Raval made me believe that , I could apply to deepmind in 3 months. He gave a perfect plan for this on his youtube channel. Well anyone who thinks I am writing this from London head quarters of Deepmind then they need a psychiatrist just like me.And if I find Mr. Raval I will ask him to share his adsense revenues with me for the long hours I spent watching his videos along with the advertisements. Also, Dear Mr Hassabis please stop saying “we will solve intelligence and through intelligence solve everything else ”. By saying so you may get many middle aged men thinking they might get young again to sleep with taylor swift. Common on, for all the fancy terms you guys throw ,what actually is happening is a big fat curve fitting. Plain and simple curve fitting. Marketing has a new sub field deep learning. Every one believe that this is the elixir of every thing. I use this as a weapon of mass destruction at office, whenever anyone questions about anything I make sure it finally gets turned towards AI and there i am, feeling great top of the world. But what really happened? I learnt a few things AI. But more important was , I genuinely learnt what it is to persist. How many a times in life we do things without perceptible results but the joy of doing the work keeps the faith alive. I have found that love of doing a thing , even if I don’t achieve anything tangible , is something that cannot be bought. So if there is something that one loves and is able to do it or even spend a few minutes with it after a day of hectic work at office is worth the pain. I found that if I really really love something , I find time for it. Second , if I do things that I genuinely love then other aspects of my life automatically changes. For example , I didn’t do exercise regularly in last one year and got a lot of fat around my tummy . But the love for AI has permeated to the depth of soul and I have started exercising , not because I want to be fit again , but to be fit enough so that I can keep learning AI. The most important thing that I experienced is something I read in man’s search for meaning by doctor Victor frankl, Where he said life is worth the meaning we have. This guy spent time in hitler’s concentration camp and lost everyone of his near and dear . But was able to survive the concentration camp. When he came out of the hell. He not only thrived but gave a new psychotherapy to the world called logotheraphy. This kind of psyco-therapy is based on finding meaning to ones life and continue following that path to be happy. 
I am a voracious reader and have read about the cutting-edge psychological breakthroughs of the last few decades. Psychologists now point out that happiness and meaning let a person thrive, and that resilience is the differentiator between success and failure. Those with a better social life, that is, at least someone they can talk to and share their inner feelings with without fear of ridicule, live longer than those who are lonely. Believe me, I tried everything, from resilience (can it even be developed?) to trying to be genuinely interested in others, even going that extra mile to help them. All of this gave me only a momentary good feeling; every snug feeling gave way to a sense of void later. My practice of resilience, such as spending a fixed part of the day on what makes me happy, exercising for example, went down the drain the moment things got rough. So finally I stopped chasing happiness, resilience, and friends, and started searching for meaning instead. It has taken me almost ten years to find it. Today I don't have to force myself to be happy or resilient; those things have simply come to me. I have found my meaning in AI. For those wondering how to find the work that gives meaning to your life, my prescription is very simple: look at your deepest fears and pains. Understand them. Make friends with them. Be vulnerable. Feel the feeling of having no control over them. Surrender to them. Once you have done this, find activities that might alleviate them. This will take time and patience, but once you find it, you have found it, and the world becomes a better place for you. There are many self-help gurus with ready-made prescriptions for happiness. Please avoid them. There are no shortcuts in life. Listening to motivational videos won't help either, because they are like psychiatric drugs: effective over a short period but draining over the long run. Work to find what gives your life meaning. But how do you know whether something really gives you meaning? Match it against your fears and pains. If the meaningful activity answers your deepest fears and pains, then bang, you have found happiness, resilience, and everything in between. Nowadays I don't have to force myself to exercise; I intuitively know that I should, because if my body fails I will not be able to keep learning about AI. This might look like a bit of a stretch, but it is what I have found to be true, and I would not trade it for anything else. For those wondering what AI is: it stands for Artificial Intelligence. Good luck finding your meaning in life. And if you are still struggling after finding a meaning, then it is not your true calling. Keep searching.
What I learnt from AI in the last one year.
0
what-i-learnt-from-ai-in-the-last-one-year-1c28bd48d322
2018-05-13
2018-05-13 01:59:36
https://medium.com/s/story/what-i-learnt-from-ai-in-the-last-one-year-1c28bd48d322
false
1,164
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Anirban Ghosh
null
8ffcc5b8a568
anirbanghoshsbi
0
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-12
2018-08-12 15:56:19
2018-08-13
2018-08-13 04:45:57
9
false
en
2018-08-14
2018-08-14 13:23:53
1
1c2b43d71d5e
4.630189
6
0
0
This is my second blog on Medium. I have tried to analyze the 2015 Flight Delays and Cancellations.
5
FLIGHT DELAY ANALYSIS — USA 2015 This is my second blog on Medium. I have tried to analyze the 2015 Flight Delays and Cancellations dataset. About the data: the delay and cancellation data was collected and published by the DOT's Bureau of Transportation Statistics. It lists all the airports, the airlines serving them, the flights cancelled, and the reasons behind the cancellations. My intention behind this analysis is to understand which airports lead to the most delays and which airlines perform worst. I have also tried to understand how the month and the day of the week affect delays. 1. Location: In the first visualization I show how the origin and destination airports affect flight delays and cancellations. We can see that the west coast has the maximum number of flight delays with respect to the origin airport, followed by Chicago, though the east coast, spread across two or three states, accounts for a major portion when combined. We can also observe that cancellations by destination airport are spread widely across the country. 2. Airline vs. delay time: This visualization shows 14 airlines by the number of flights they offer and their average delay time. We can clearly see that the number of flights does not have a major effect on departure delays: an airline like Frontier Airlines Inc., which does not operate many flights, still has a very high average departure delay, while Southwest, which operates a high number of flights, has a very low one. Therefore, we can say that Southwest is a better airline than Frontier on this measure. 3. Airline vs. month vs. state: From these visualizations we can see that some airlines serve only particular states while others have a dense network across the country. In the airline vs. state graph, the density of the color also depends on the size of the state: the bigger the state, the higher the number of flights offered, so CA, TX, and GA have darker patches for each airline compared to other states. From the airline vs. month graph we can clearly see that the number of flights offered by any airline is higher from June through December than during the rest of the year. 4. Delay reasons: According to the data, there are eight recorded reasons for disruption: diverted, security delay, weather delay, late aircraft delay, departure delay, air system delay, airline delay, and cancelled. Cancelled: the maximum number of flights cancelled is in February, which on further research is mainly because of weather conditions. Diverted: the period from May through August has the most diversions, largely because the number of flights served by airports increases during this period; the same can be said about security delays, late aircraft delays, and airline delays. Weather delays: December and February have the majority of weather delays due to snow storms, and September through November have the fewest, as weather conditions are most favorable during that period. The sudden increase in June is mainly due to rains and floods in some parts of the country. 5. Airline vs. month: This visualization shows the top 5 states by number of flights and the top 3 airlines in those states.
We can see that American Airlines, Delta Air Lines, and Southwest Airlines are the top carriers and that one airline tends to dominate a particular state. Each airline is also divided into 12 sections based on months, which makes it clear that they operate an almost equal proportion of flights in each month. 6. Month vs. day: From this visualization we can see that Friday is the most common day on which people prefer not to fly, while Sunday and Monday are the most common flying days in the months with the maximum number of flights. 7. Technical factors: From this visualization we can see that airlines with high departure delays have higher speeds and more taxiing time compared to others, with Hawaiian Airlines as an exception since it does not have much air time. Flights with little or no delay have very low taxiing time. 8. Airline ratings: This graph ranks 14 airlines by the proportion of their flights that are delayed out of all flights they operate (the S factor). Considering the number of flights offered by Southwest, American, and Delta, Delta is the best of the lot, followed by Southwest and American. FINAL INSIGHTS: June through August are the most common months for flying, and these months see delays due to human factors such as security or airline delays. December and February are the most common months for weather-related delays, with American Airlines having the maximum delays followed by Delta. Delays can be due to weather conditions, human reasons, or technical reasons; of the technical reasons, taxiing time affects departure delays a lot, though increased air speed in those cases partly compensates by easing arrival delays. All the visualizations were created using Tableau. Any feedback and suggestions are always welcome. Thank you!
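For readers who prefer to reproduce the aggregations behind visualizations 2 and 8 in code rather than in Tableau, here is a minimal pandas sketch. It assumes a flights.csv export of the DOT data with columns named AIRLINE, MONTH, and DEPARTURE_DELAY (the names used in the common Kaggle version of this dataset); if your extract uses different column names, adjust accordingly.

import pandas as pd

# Load the 2015 DOT on-time performance extract (column names are an assumption).
flights = pd.read_csv("flights.csv", usecols=["AIRLINE", "MONTH", "DEPARTURE_DELAY"])

# Average departure delay per airline (visualization 2).
avg_delay = (flights.groupby("AIRLINE")["DEPARTURE_DELAY"]
                    .mean()
                    .sort_values(ascending=False))
print(avg_delay.head(10))

# Proportion of each airline's flights delayed at departure (the "S factor", visualization 8).
flights["DELAYED"] = flights["DEPARTURE_DELAY"] > 0
s_factor = flights.groupby("AIRLINE")["DELAYED"].mean().sort_values()
print(s_factor)

# Flights per month, to check the June-through-December seasonality (visualization 3).
print(flights.groupby("MONTH").size())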
FLIGHT DELAY ANALYSIS — USA 2015
64
flight-delay-analysis-usa-2015-1c2b43d71d5e
2018-08-14
2018-08-14 13:23:53
https://medium.com/s/story/flight-delay-analysis-usa-2015-1c2b43d71d5e
false
909
null
null
null
null
null
null
null
null
null
Data Visualization
data-visualization
Data Visualization
11,755
Manya Gulati
Data Scientist
4e7a8effcd5
manya_gulati27
7
4
20,181,104
null
null
null
null
null
null
0
null
0
270a7361fc0c
2018-07-07
2018-07-07 10:19:53
2018-07-07
2018-07-07 10:29:17
1
false
en
2018-07-07
2018-07-07 10:29:17
1
1c2c1a6fa6da
2.856604
1
0
0
Future of Fantasy Sports — Part 2
5
Artificial Intelligence Revolutionary for Fantasy Sports? Future of Fantasy Sports — Part 2 This article is the second part of the series "Future of Fantasy Sports", where we discuss the implications of artificial intelligence for fantasy sports and the research currently taking place in the field. So make a hot cup of coffee and have fun reading! Since its inception, technology has only tried to make everyone's life easier. Some cynics still manage to find a negative side to it, while others continue to innovate and create. Today it is widely believed that artificial intelligence could take a diabolical turn if not managed correctly, but if we accept for a moment that the same artificial intelligence can revolutionise the way humankind lives, we can harness its power the way it should be harnessed. If you're a fan of fantasy sports (whichever sport), you know what an arduous task assembling a fantasy lineup can be, yet you do it religiously because you love sports so much. To get an edge over your opponents, you try to watch as many games as possible, read as many blogs as you can, go through a great deal of statistics, listen to a variety of commentary, and refer to online guides, leaving no stone unturned. But what if all your opponents are doing the same? Does anyone really have an edge over the others? Artificial intelligence may be of help to you, my friend. It can be a panacea for all your fantasy sports problems! Artificial intelligence is intelligent because it collects a lot of data from the internet and interprets it in the way machine learning engineers have coded into the machine. Many over-the-top (OTT) players have built their own supercomputers that other people around the world can use through APIs. The technology will look through player interviews, social media comments, analysis reports, and statistics for your league's players and give you a consolidated report on each one, so that all the research is done by the intelligent machine and all you have to do is select the players. And my friend, artificial intelligence can read a lot faster than you can. But if you still want to be Napoleon about it, you can be. (Reference: Napoleon once said, "If you want a thing done well, do it yourself.") Artificial intelligence can not only help you with all the research needed to build the perfect lineup but also predict football scores. Does that sound unbelievable? Well, it is happening already. Computer scientists at the University of Southampton are testing an artificially intelligent tool for predicting Premier League football results. The machine learning algorithm has managed to beat BBC football commentator Mark Lawrenson's predictions for two seasons in a row, and the team now wants fantasy football fans to try to beat it. The name of this magic machine is Squadguru. The engineering behind Squadguru: the system has two layers. The first layer uses a Bayesian machine learning technique that is fed the last five years of football data. This data is used to train the machine, and based purely on (mathematical) probability, the machine can predict future scores. A drawback is that the machine cannot predict unexpected outcomes. For instance, if your club has not won a single game in the last five years, the machine will likely predict that it will not win the next game either. The machine doesn't believe in miracles; it only believes in math.
The second layer of the system is a combinatorial optimisation algorithm that is used to work out the best transfers within the budget. The future of fantasy sports is bright and exciting as long as technology keeps improving, and many unprecedented trends will be recorded along the way. The artificial intelligence machine may even end up competing with the world's best fantasy sports players. Let's see who wins! We hope this article gave you a new perspective on the fantasy sports industry. This is a series of articles intended to give you an idea of technology innovation and transformation. To read more about technology and its use cases in the fantasy sports industry, stay tuned!
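The article does not publish Squadguru's actual model, but the first layer it describes (probabilities learned from several seasons of results) can be illustrated with a deliberately simplified sketch: estimate each team's scoring rate from past matches and turn those rates into match-outcome probabilities with a Poisson model. The team names and rates below are made up for illustration and are not from the Southampton system.

import math

def poisson_pmf(k, lam):
    # Probability of scoring exactly k goals given an average rate lam.
    return math.exp(-lam) * lam ** k / math.factorial(k)

def outcome_probabilities(home_rate, away_rate, max_goals=10):
    # P(home win), P(draw), P(away win), assuming independent Poisson scorelines.
    home_win = draw = away_win = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, home_rate) * poisson_pmf(a, away_rate)
            if h > a:
                home_win += p
            elif h == a:
                draw += p
            else:
                away_win += p
    return home_win, draw, away_win

# Hypothetical average goals per match, estimated from past seasons.
history = {"Reds": 1.8, "Blues": 1.1}
print(outcome_probabilities(history["Reds"], history["Blues"]))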
Artificial Intelligence Revolutionary for Fantasy Sports?
1
artificial-intelligence-revolutionary-for-fantasy-sports-1c2c1a6fa6da
2018-07-11
2018-07-11 01:01:16
https://medium.com/s/story/artificial-intelligence-revolutionary-for-fantasy-sports-1c2c1a6fa6da
false
704
MyDFS Fantasy Sports Platform
null
mydfs.page
null
MyDFS
rating@mydfs.net
mydfs
SPORTS,BLOCKCHAIN,ICO,DAILY FANTASY SPORTS
mydfs_net
Machine Learning
machine-learning
Machine Learning
51,320
MyDFS
Telegram: t.me/mydfs
8cdc8315fbf1
mydfs
1,705
30
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-30
2018-07-30 21:44:39
2018-07-30
2018-07-30 21:45:33
1
false
en
2018-07-30
2018-07-30 21:45:33
6
1c2c1bd3136d
4.698113
4
0
0
There’s plenty of coverage on what machine learning may do for healthcare and when. Painfully little has been written for non-technical…
5
A Manager’s Guide to Making Machine Learning Work in the Real World There's plenty of coverage on what machine learning may do for healthcare and when. Painfully little has been written for non-technical healthcare leaders whose job it is to execute successfully in the real world, with real returns. It's time to address that gap, for two reasons. First, if you are responsible for improving care, operations, and/or the bottom line in a value-based environment, you will soon be forced to make decisions related to machine learning. Second, the way this stuff actually works is wildly inconsistent with the way it's being sold and with the way we're used to using data and information technology in healthcare. I've been fortunate to have spent the past dozen years designing machine learning-powered solutions for healthcare across hundreds of academic medical centers, international public health projects, and health plans, as a researcher, consultant, director, and CEO. Here's a list of what I wish I had known years ago. Machine learning is a capability, not a solution. Machine learning is math that we have learned how to automate (i.e., software) and that allows us to analyze, optimize, customize, and prophesy in new and powerful ways. We can use machine learning to discover what needs to change and how best to change it. A solution is a very different thing. It requires getting people to do something differently, tracking those differences, relating them to outcomes (good or bad), and sharing that information back with the team. For managers, machine learning minus change is at best an innovation project and at worst a waste of resources. Machine learning plus change can be a powerful solution. Improvement is not installed. I recently met with a data scientist at a large healthcare organization who created a fantastic model ("great performance, highly predictive"), handed it off to "the business," and lamented the fact that it didn't make a difference. He quickly followed with, "It's dangerous to assume that the success of a data scientist depends on whether a model is used." It's dangerous to pretend that it doesn't. If you think machine learning is a thing for IT to install or hand off, you're doing it wrong. Amazon doesn't install a generic third-party book-selling algorithm. Target doesn't use the same software as all of its competitors to learn how best to manage its logistics. The opportunity is to move beyond one size fits all. It's about learning from data and improving. Learning and improving are processes, not products. Pick a $5M problem. Change is hard. Pick a problem that will garner enough attention and resources to achieve change. Our rule of thumb is that we want projects that lead to $5M in new revenue or cost savings, but 5 is just a heuristic; whatever number is large enough to get and keep the CEO's attention is the right number. There's another implication here. Spend the time doing the math to know how much potential value there actually is. It's a lot cheaper to discover early that there isn't enough value than to get months into a project and learn that it isn't all that important. Trust me on this one :) Proceeding is an executive (C-suite) decision. At Cyft we have learned to require not only executive sign-off but check-in meetings every other week to keep things on track. If you picked the $5M project, helping you succeed is worth 30 minutes of their time every other week so they can clear barriers. And it's a team sport to execute.
Successful execution requires a committed multidisciplinary team. Our must have list includes representatives from IT, business analytics (someone that knows the data), the business (or clinical) owner, and a project manager. This working group meets weekly and is run by a project plan with milestones and timelines. This isn’t rocket science but we have found that this level of discipline is necessary for keeping a project on track. If you build it, they’re unlikely to come. However, if you built it with them based on their needs, calculating potential ROI, informing them of the trade offs, and involving them in decision making, they’re likely to support your combined efforts to address some of their most important challenges. No one cares about your c-stat. If you’re using machine learning in an enterprise environment you will be held accountable to a return on investment. I have yet to meet an executive willing to measure returns in terms of p-value, c-stat, F-measure, or any of the statistics that researchers are judged by. The good news is that a solid understanding of the business + basic arithmetic + model performance allows one to bridge from interesting statistics to real ROI calculations. The bad news is it’s yet another important task that must be planned for that has little to do with machine learning itself. Don’t underestimate the importance of education. This stuff works differently than people are used to. Clinicians that have spent decades becoming masters of triage will be asked to focus on people whose need isn’t overtly obvious. Managers that are used to telling their IT teams exactly what reports they need will now be shown the data they should care about. And all because a computer said so. If you don’t invest in helping people understand how this works and foster trust in your approach do not expect them to simply adopt your results. Clear, concise storytelling is critical. I have been at this for 15 years and I’m still constantly searching for metaphors, analogies, common ground that will allow me to bring people up to speed on how this works, the results we got, baseline versus new ROI, etc. Integrate or bust. Every healthcare organization has a nearly unique combination of information systems and workflows. Absolutely none of them wants a new interface / system to log into or a new parallel workflow to keep track of. Find a way to integrate. Keep learning & improving. Improvement is not a binary thing that did or did not occur. Yet so many data-related projects seem to assume as much in their execution. If ROI is the goal then agree to baseline, activity, and outcomes metrics as soon as the problem is defined. Measure them often. Change practice accordingly. Otherwise, it’s just machine learning — not human learning. The good news is, this stuff is as doable as it is important. With a clear focus on the problem to be solved (hint: it’s not ‘use machine learning’), a dedicated team, and disciplined project management, your team will begin to capitalize on the tremendous gains experienced by nearly every other industry. Dr. Leonard D’Avolio is an Assistant Professor at Harvard Medical School and Brigham and Women’s Hospital and the founder and CEO of Cyft, a company that helps healthcare organizations become learning healthcare systems. As part of his role, he writes, speaks, and consults on the topic of making data work for healthcare and he shares lessons learned on Twitter at @ldavolio and on Cyft’s website (cyft.com). 
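The "basic arithmetic" the author says bridges model statistics to ROI can be written down explicitly. The sketch below is a hypothetical back-of-the-envelope calculation, not Cyft's method; every number (flagged patients, precision, value of an avoided admission, outreach cost, overhead) is a made-up assumption you would replace with your own figures.

def simple_roi(flagged_patients, precision, value_per_true_positive,
               cost_per_intervention, program_overhead):
    # Back-of-the-envelope ROI for an ML-driven outreach program.
    # precision: fraction of flagged patients who are true positives.
    # value_per_true_positive: dollars saved when outreach reaches a true positive.
    true_positives = flagged_patients * precision
    gross_value = true_positives * value_per_true_positive
    total_cost = flagged_patients * cost_per_intervention + program_overhead
    return (gross_value - total_cost) / total_cost

# Hypothetical numbers: 2,000 flagged members, 30% precision,
# $8,000 saved per avoided admission, $250 outreach cost per member,
# $500,000 of fixed program cost.
print(f"ROI: {simple_roi(2000, 0.30, 8000, 250, 500000):.1%}")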
#education #storytelling #graphics #statistics #machinelearning
A Manager’s Guide to Making Machine Learning Work in the Real World
11
a-managers-guide-to-making-machine-learning-work-in-the-real-world-1c2c1bd3136d
2018-07-30
2018-07-30 21:45:33
https://medium.com/s/story/a-managers-guide-to-making-machine-learning-work-in-the-real-world-1c2c1bd3136d
false
1,192
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Leonard D'Avolio PhD
Co-founder @CyftInc, Asst. Prof @HarvardMed & @BrighamWomens, data doc, AI & healthcare, writer, researcher, entrepreneur
ba379b9de769
ldavolio
270
185
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-03
2017-11-03 07:00:32
2017-11-03
2017-11-03 07:11:26
0
false
en
2017-11-03
2017-11-03 07:11:26
8
1c2c24c0069c
0.792453
0
0
0
Written by Siva S, CEO at Powerupcloud & IRA.ai.
1
Artificial Intelligence — What is Powerupcloud’s focus? Written by Siva S, CEO at Powerupcloud & IRA.ai. Artificial Intelligence — a highly misunderstood and abused term in today's world. While some theorists claim that AI is too complex to be defined, others argue that it is actually quite simple to understand as a concept but difficult to build. At Powerupcloud, we focus on building Decision-Making AI (DMAI) engines that let large enterprises adopt AI at different levels and functions of their business. These DMAI engines integrate with Perception AI (PAI) engines such as Amazon Lex, Microsoft Face API, Google Translation API, etc., to provide a complete AI solution to businesses. We have launched 2 new service lines in the Artificial Intelligence space: 1. Decision-Making AI: https://lnkd.in/f88gxxq 2. AI for Chatbots: https://lnkd.in/fesjQTu I strongly believe that we are still a few years away from achieving Artificial General Intelligence (AGI), where a machine will think and perform tasks like a human. But businesses don't need AGI yet. The current crop of AI technologies is quite sufficient to improve processes and understand customers better by solving difficult and mundane problems. Speak to our experts today to learn more about how we can help you with AI solutions. Drop your queries to ai@powerupcloud.com. #artificialintelligence #decisionmakingai #ira.ai #powerupcloud
Artificial Intelligence — What is Powerupcloud’s focus?
0
artificial-intelligence-what-is-powerupclouds-focus-1c2c24c0069c
2017-11-03
2017-11-03 07:11:27
https://medium.com/s/story/artificial-intelligence-what-is-powerupclouds-focus-1c2c24c0069c
false
210
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Yogitha O
null
c8315a91c141
yogitha.o
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-20
2018-04-20 02:30:37
2018-04-20
2018-04-20 02:56:58
25
false
zh-Hant
2018-04-20
2018-04-20 03:18:17
11
1c2f149c822f
3.396226
1
0
0
Beginners often get stuck endlessly at the start, so I have recorded how I downloaded the code to my computer and ran it with Anaconda, in the hope of helping people who want to get into machine learning.
3
LeNet Hands-On Implementation Tutorial Beginners often get stuck endlessly at the start, so I have recorded how I downloaded the code to my computer and opened and ran it with Anaconda, hoping it helps people who want to get into machine learning. Code downloaded from the internet sometimes fails to compile and you have to figure out how to debug it; with this code the problem was that the dataset file name and location were not quite what the script expected, and my fix was to put the dataset in a folder I created myself. This time I make extensive use of GIFs instead of video, aiming for something lightweight and easy to share. Let's begin... Why implement LeNet: to deepen your understanding of AI; picking a neural network (NN) and actually running it teaches you more about deep learning; you learn how to use open source; and it pairs nicely with the MNIST image dataset. (An introduction to the AI / ML / DL hierarchy and a brief history of neural networks are shown in the figures.) What is LeNet: LeNet is an already-trained NN model whose number of layers, nodes, and structure are fixed. LeNet is the classic CNN (convolutional neural network) for recognizing handwritten digits; at the time, most American banks used it to read the handwritten digits on checks, with high reliability. There are 7 hidden layers, configured as in the figure below. Official page: http://yann.lecun.com/exdb/lenet/ (LeNet structure diagram). Why LeNet: there is a complete paper, the 1998 "Gradient-Based Learning Applied to Document Recognition", http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf, and complete code, GitHub — feiyuhug_lenet-5 "implement lenet-5 in lecun-98 by python", https://github.com/feiyuhug/lenet-5 (thanks to the LeNet implementation group members for providing it). The left half of the network performs feature extraction. Why MNIST: it is the classic handwritten-digit dataset, often called the "Hello world" of ML, and it is built into Keras. The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. MNIST DB: http://yann.lecun.com/exdb/mnist/ — just download the four files from the page, marked in red in the figure (the MNIST official site contains the download links). Prepare to go: time to download the code and try running it. Python with Anaconda: first install the Python environment. Download page: https://www.anaconda.com/download/ and choose the latest 3.6 version. After installing Anaconda you will see these tools in the program list. Once installed, type python (or py) at the cmd prompt to confirm that Python installed successfully, open Spyder and Jupyter Notebook, set up the environment, and run a hello world in a notebook. Learning Python: a Python syntax quick reference: https://www.tutorialspoint.com/python/index.htm; an interactive web tutorial: https://www.codecademy.com/learn/learn-python; course material and exercises: http://hemingwang.blogspot.tw/2017/04/lenet.html. The exercises cover creating a GitHub account, random numbers, MNIST and LeNet, edge detection, activation functions, pooling, and the LeNet-5 implementation. mAiLab_0001: GitHub. Go to https://github.com/ and register an account, then download the full code: https://github.com/HiCraigChen/LeNet. mAiLab_0002: Random Number. Python code: https://github.com/exeex/LeNet-made-by-hand/blob/master/homework/hw2.py. 1. Generate five random numbers and output them. 2. Generate N random numbers between -1 and 1, compute and output their mean and standard deviation (the individual values need not be printed), for N = 10**1, 10**2, 10**3, 10**4, 10**5. Advanced: 3. While doing basic exercise 2, also output the system time before and after generating each batch of N random numbers and compute the time taken. mAiLab_0003: MNIST and LeNet. Basic exercises: 1. Download the following four files [1]: train-images-idx3-ubyte.gz: training set images (9912422 bytes); train-labels-idx1-ubyte.gz: training set labels (28881 bytes); t10k-images-idx3-ubyte.gz: test set images (1648877 bytes); t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes). Decompress them to obtain four files, as shown in figure 1; the file format is described at the bottom of [1], see figure 2. 2. Output the first image in train-images.idx3-ubyte at size 28x28, in the format of the following 4x4 example: 00 11 22 33 44 55 66 77 88 99 AA BB CC DD EE FF. 3. Output the average of the first ten images in train-images.idx3-ubyte, rounded down, at size 28x28. 4. Output the average of the first ten labels in train-labels.idx1-ubyte, to two decimal places, rounded down. 5. Output the first image of train-images.idx3-ubyte at size 32x32, with the original centered and the extra border padded with zeros. I hit a bug and needed to install numpy + matplotlib, then ran it through once to check. mAiLab_0004: Edge Detection. Ordinary image processing applies a suitable filter to get the desired result; deep learning instead trains the filters / convolutional kernels through the NN. HW0004, basic: 1. Enlarge the first five images from exercise 3 [1] to 30x30, run Gx and Gy [2] on each, output the 10 resulting images graphically [3], [4], and invert them [4]. Gx: -1 0 +1 / -2 0 +2 / -1 0 +1. Gy: +1 +2 +1 / 0 0 0 / -1 -2 -1. 2. Enlarge the first image from exercise 3 [1] to 32x32, 34x34, and 36x36, run 5x5, 7x7, and 9x9 filters [5], and output the resulting 28x28 matrices in the format of the 4x4 example above. mAiLab_0005: Activation Function. Plot the following functions and their first derivatives: 1. Sigmoid 2. tanh 3. ReLU 4. Leaky ReLU 5. ELU, and produce all of the plots. mAiLab_0006: Pooling. Take the first five images from exercise 3 [1], run max pooling and average pooling [2] on each, output the 10 resulting images graphically [3], [4], and invert them [4]. mAiLab_0008: LeNet-5. Complete the LeNet-5 code, training, and testing. 2. Output the six trained C1 convolutional kernels, each 5x5, in hexadecimal format. In the end the training ran for about half an hour and reached 85% accuracy, which is not bad. That is the whole hands-on LeNet process; the code takes time to digest, so work through it slowly when you have time.
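The repositories linked above implement LeNet-5 from scratch; as a quicker sanity check of the same pipeline, here is a minimal Keras sketch that pads MNIST's 28x28 images to 32x32 and trains a LeNet-5-style network. It follows the classic architecture only loosely (ReLU and max pooling instead of the original activations and subsampling), so treat it as an illustration rather than a faithful reproduction of the 1998 paper.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Keras ships MNIST, so no manual download of the four idx files is needed.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

def prepare(x):
    x = x.astype("float32") / 255.0
    x = np.pad(x, ((0, 0), (2, 2), (2, 2)), mode="constant")  # 28x28 -> 32x32
    return x[..., np.newaxis]

x_train, x_test = prepare(x_train), prepare(x_test)

# LeNet-5-style stack: two conv/pool blocks, then 120-84-10 dense layers.
model = keras.Sequential([
    layers.Conv2D(6, 5, activation="relu", input_shape=(32, 32, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))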
LeNet Hands-On Implementation Tutorial
1
lenet手把手實作教學-1c2f149c822f
2018-04-20
2018-04-20 03:18:18
https://medium.com/s/story/lenet手把手實作教學-1c2f149c822f
false
370
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
mathilda hsu
null
795e7618a6f2
rgnuj122
10
11
20,181,104
null
null
null
null
null
null
0
<?xml version="1.0"?> <!DOCTYPE PARTS SYSTEM "parts.dtd"> <?xml-stylesheet type="text/css" href="xmlpartsstyle.css"?> <PARTS> <TITLE>Computer Parts</TITLE> <PART> <ITEM>Motherboard</ITEM> <MANUFACTURER>ASUS</MANUFACTURER> <MODEL>P3B-F</MODEL> <COST> 123.00</COST> </PART> <PART> <ITEM>Video Card</ITEM> <MANUFACTURER>ATI</MANUFACTURER> <MODEL>All-in-Wonder Pro</MODEL> <COST> 160.00</COST> </PART> <PART> <ITEM>Sound Card</ITEM> <MANUFACTURER>Creative Labs</MANUFACTURER> <MODEL>Sound Blaster Live</MODEL> <COST> 80.00</COST> </PART> <PART> <ITEMᡋ inch Monitor</ITEM> <MANUFACTURER>LG Electronics</MANUFACTURER> <MODEL> 995E</MODEL> <COST> 290.00</COST> </PART> </PARTS> scala> import scala.xml.XML import scala.xml.XML scala> val xml = XML.loadFile("data/Posts.xml") java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:3332) ... scala.runtime.ScalaRunTime$.replStringOf(ScalaRunTime.scala:345) at .$print$lzycompute(<console>:10) at .$print(<console>:6) at $print(<console>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) spark-shell --driver-memory 6G scala> import scala.xml.XML import scala.xml.XML scala> val xml = XML.loadFile("data/Posts.xml") xml: scala.xml.Elem = <posts> <row FavoriteCount="1" CommentCount="1" AnswerCount="1" Tags="&lt;job-search&gt;&lt;visa&gt;&lt;japan&gt;" Title="What kind of Visa is required to work in Academia in Japan?" LastActivityDate="2013-10-30T09:14:11.633" LastEditDate="2013-10-30T09:14:11.633" LastEditorUserId="2700" OwnerUserId="5" Body="&lt;p&gt;As from title. What kind of visa class do I have to apply for, in order to work as an academic in Japan ? &lt;/p&gt; " ViewCount="415" Score="16" CreationDate="2012-02-14T20:23:40.127" AcceptedAnswerId="180" PostTypeId="1" Id="1"/> <row ClosedDate="2015-03-29T20:06:49.947" CommentCount="2" AnswerCount="2" Tags="&lt;phd&gt;&lt;job-search&gt;&lt;online-resource&gt;&lt;chemistry&gt;" Title="As a computational chemist, which online resources are avail... <?xml version="1.0" encoding="utf-8"?> <posts> <row Id="1" PostTypeId="1" AcceptedAnswerId="180" CreationDate="2012-02-14T20:23:40.127" Score="16" ViewCount="415" Body="&lt;p&gt;As from title. What kind of visa class do I have to apply for, in order to work as an academic in Japan ? &lt;/p&gt;&#xA;" OwnerUserId="5" LastEditorUserId="2700" LastEditDate="2013-10-30T09:14:11.633" LastActivityDate="2013-10-30T09:14:11.633" Title="What kind of Visa is required to work in Academia in Japan?" Tags="&lt;job-search&gt;&lt;visa&gt;&lt;japan&gt;" AnswerCount="1" CommentCount="1" FavoriteCount="1" /> <row Id="2" PostTypeId="1" AcceptedAnswerId="246" CreationDate="2012-02-14T20:26:22.683" Score="11" ViewCount="725" Body="&lt;p&gt;Which online resources are available for job search at the Ph.D. level in the computational chemistry field?&lt;/p&gt;&#xA;" OwnerUserId="5" LastEditorUserId="15723" LastEditDate="2014-09-18T13:02:01.180" LastActivityDate="2014-09-18T13:02:01.180" Title="As a computational chemist, which online resources are available for Ph.D. level jobs?" Tags="&lt;phd&gt;&lt;job-search&gt;&lt;online-resource&gt;&lt;chemistry&gt;" AnswerCount="2" CommentCount="2" ClosedDate="2015-03-29T20:06:49.947" /> scala> val texts = (xml \ "row").map{_.attribute("Body")} texts: scala.collection.immutable.Seq[Option[Seq[scala.xml.Node]]] = List(Some(&lt;p&gt;As from title. What kind of visa class do I have to apply for, in order to work as an academic in Japan ? 
&lt;/p&gt; ), Some(&lt;p&gt;Which online resources are available for job search at the Ph.D. level in the computational chemistry field?&lt;/p&gt; ), Some(&lt;p&gt;As from title. Not all journals provide the impact factor on their homepage. For those who don't where can I find their impact factor ?&lt;/p&gt; ), Some(&lt;p&gt;I have seen many engineering departments want professional engineer registration. Why do they care? &lt;/p&gt; ), Some(&lt;p&gt;What is the h-index, and how does it work ?&lt;/p&gt; ), Some(&lt;p&gt;If your institution has a subscription to Journal Citation Reports (JCR), you... scala> val lower_texts = texts map {_.toString} map { _.trim } filter { _.length != 0 } map { _.toLowerCase } lower_texts: scala.collection.immutable.Seq[String] = List(some(&lt;p&gt;as from title. what kind of visa class do i have to apply for, in order to work as an academic in japan ? &lt;/p&gt; ), some(&lt;p&gt;which online resources are available for job search at the ph.d. level in the computational chemistry field?&lt;/p&gt; ), some(&lt;p&gt;as from title. not all journals provide the impact factor on their homepage. for those who don't where can i find their impact factor ?&lt;/p&gt; ), some(&lt;p&gt;i have seen many engineering departments want professional engineer registration. why do they care? &lt;/p&gt; ), some(&lt;p&gt;what is the h-index, and how does it work ?&lt;/p&gt; ), some(&lt;p&gt;if your institution has a subscription to journal citation reports (jcr), you can check it t...
11
479df42c7b8e
2018-08-01
2018-08-01 09:11:17
2018-08-24
2018-08-24 07:58:24
5
false
en
2018-08-27
2018-08-27 15:11:42
5
1c2f9c2fed6e
5.42956
2
0
0
Conquer XML land with Scala
5
XML and Scala Conquer XML land with Scala As data scientists and machine learning engineers, we don’t always appreciate that most of the data we get are usually in CSV or at times in JSON file format. In reality this is great, as we need to deal with large volumes of data and any format that makes it easy to read and understand data should be highly appreciated. And people who work with CSV know how great this is as a data format. Having said that, it might not always be the case. If you are a Scala developer (a JVM language), you are likely to work in a Java environment. And since XML has been the preferred format for data interchange, you are most likely to receive data in an XML format. Which means, you will need to parse the data from XML files and build data pipelines out of it. XML, which stands for Extensible Markup Language, was thought of as a way in which both computers and humans should be able to understand the text. Of course, the designers got their inspiration from the hugely successful HTML. You might argue that no one actually reads HTML and we only see the final output thrown by the browsers. Well, may be, it was assumed that XML would be read only by developers and hence it should work. But then we moved to Service Oriented Architecture (SOA) where XML has become the standard data-format for communication between services. In this post we will see how we can parse XML in spark-Scala. Table 1 A simple XML file (1) Interestingly it’s quite easy to parse and create XML pipelines in Scala. To load an XML file, you will need to pass the filename to the loadFile utility from XML. Please note that parsing the whole file requires a lot of processing power and therefore, chances are that you may run into ‘OutOfMemoryError’ as shown in table 2. Table 2 Scala code first run In case that happens, you will need to boost the memory for the spark driver. I am allocating a random high number here (as shown in table 3). Table 3 Increased driver memory Now I can parse the XML with ease (refer to table 4). Table 4 Read XML I have not talked about the file data that I am using till now. The xml dataset is a stackoverflow dataset downloaded from archive.org. It contains data in this format. Table 5 Our target dataset Each record is allocated a row tag, that combines multiple attributes. We can now parse these individual tags and get the value of the attributes. To parse the records, you will need to search an XML tree for the required data, using XPath expressions. The way it works is that you need to pass \ and \\ methods for the equivalent XPath / and // expressions. For example, you can get the ‘row’ tags and then on each record get the ‘body’ attribute. This gives us a sequence of scala.Option. Table 6 Getting the appropriate text Now that you have an iterator you can run complex transformations on top of it. Below (in table 7) we are converting texts to string, trimming them for extra whitespace, then filtering out the text with some string, and converting them to lowercase. Table 7 Spark transformation Scala offers a convenient and easy way for basic XML processing. This post is aimed at helping beginners use XML and Scala with ease. If you found this useful, do leave a comment, we would love to hear from you and share the post with your friends and colleagues. I have recently completed a book on fastText. FastText is a cool library by Facebook for efficient text classification and creating the word embeddings. 
fastText Quick Start Guide: Get started with Facebook's library for text representation and classification. Perform efficient text representation and classification with Facebook's fastText library (link on amzn.to).
XML and Scala
2
xml-and-scala-1c2f9c2fed6e
2018-08-27
2018-08-27 15:11:42
https://medium.com/s/story/xml-and-scala-1c2f9c2fed6e
false
1,218
Technology insights, selection & implementation
null
nineleaps
null
Technology@Nineleaps
null
technology-nineleaps
STARTUP,TECH,PRODUCT DEVELOPMENT,PROGRAMMING LANGUAGES
nineleaps
Spark
spark
Spark
1,375
joydeep bhattacharjee
null
28ee5420c3df
joydeepubuntu
305
12
20,181,104
null
null
null
null
null
null
0
- <TIME>|<GEO:CITY>|now (entities + lemmas of free words) - <TIME>|<GEO:CITY>|RB (entities + POS tags of free words) - and so on
1
null
2018-07-24
2018-07-24 19:28:24
2018-07-24
2018-07-24 21:26:15
3
false
ru
2018-07-25
2018-07-25 08:08:56
10
1c30c0a1ab2
6.995283
1
0
0
In this article we look at ways of extending a semantic model that are based on the process of curating user queries.
5
Training a Semantic Model with a Curator In this article we look at ways of extending a semantic model that are based on the process of curating user queries. As we mentioned in previous posts, when a curation mechanism is part of a dialog system's architecture, building the model can be split into two stages: Initial model configuration. Describing all elements of the model together with a list of their basic synonyms; carried out at the initial stage of development. Deferred training by a curator. Carried out while the service is running, by tracking, analyzing, and using information about questions handled by the curator, that is, questions the service could not answer automatically because of its imperfections, insufficient synonym coverage, and so on. The first stage is a very labor- and resource-intensive task. To avoid delaying the launch of the system and spending significant effort trying to create a perfect model, part of the work of refining it can be moved to the second stage. Below is an overview of the approaches and algorithms used at DataLingvo to solve the deferred-training problem; the main methods and principles, however, are the same for most solutions of this kind. The curation process. The curator helps the system recognize a question it could not process automatically, usually by editing it. Editing means correcting existing words and removing superfluous ones, but sometimes also the opposite: adding extra elements that the curator infers from context or simply knows from expert knowledge of entities not reflected in the designed model. The main task of the subsequent automation is to make the most of the curation results and teach the model to understand many similar questions. Example. Suppose we are building an assistant that answers questions of the form "What time is it in such-and-such city". A similar example is included among the DataLingvo examples at https://github.com/aradzinski/datalingvo-examples ("Time example"). 1. Our example uses only two entities: the question indicator "time" <TIME> and the parameter "city" <GEO:CITY>. 2. In addition, for simplicity, we forbid the use of free words. When using DataLingvo, GEO is a built-in element, so you do not have to recognize the concept of "city" in any special way while developing the model. The <TIME> entity can be defined through a minimal set of synonyms such as: What time is it; What is local time. Our maximally simplified model thus consists of a single semantic template: <TIME><GEO:CITY>. Processing example: with this configuration the question "What time is it in Tokyo" is recognized immediately and answered. The question "What time is it in Tokyo now" can no longer be answered immediately: the unrecognized free word "now" gets in the way, so the question moves to the curation stage. The curator corrects the text of the original question so that it can be recognized automatically. In our case the simplest and most logical way to edit the command is to delete the word "now", which carries no useful information; after that the query satisfies the template and can be processed in the usual way. What we want from training. Now we need to think about the maximum benefit we can extract from the work the curator has already done.
Obviously, after curation the system should become smarter and learn to answer not only the question "What time is it in Tokyo now" but also all "similar" questions. So which additional questions should it learn to answer? 1. All questions allowing for stop-words. That is, "What time is it in Tokyo now, please" should also be answered; stop-words, "please" in this example, must not prevent the sentence from being recognized automatically. 2. Questions with a different order of entities: "Now, what time is it in Tokyo". Setting aside the fact that the question is grammatically awkward, it too should be recognized successfully. This capability can be configured, i.e. allowed or forbidden for the model, since for some systems the order of elements is critically important. The classic example is a ticket-booking service, for which "A ticket from Tokyo to London" is not at all the same as "A ticket from London to Tokyo". 3. Questions with the same typical entity scheme. It would be good if the question "What time is it in New York now" were also answered automatically, even though the curation was done for a question about Tokyo. The system should recognize entities of the same type (GEO:CITY for both Tokyo and New York). Note also that the city name New York consists of two words, and that should not confuse the system either. 4. Some questions with an entity scheme similar to the typical one. Some entities can be merged or dropped without harming the sentence structure, which is exactly what the search for the best matches among previously curated variants relies on (see the description of search-key formation below). For example, it would be very desirable for the question "What time is it in Saint Petersburg, Russia now" to receive an immediate answer too, despite its somewhat different structure from the previously curated question; in our case it is <TIME> <GEO:CITY> <GEO:COUNTRY>. Note that this approach is hard to formalize and must be applied individually to each suitable set of entities. 5. Questions with "similar" free words. It is worth considering all the synonyms of the free words in the sentence and trying to find queries saved during curation whose free words are "similar", most often ordinary linguistic synonyms. Example. Below is the result of the morphological (part-of-speech) parse of the sentence, built during curation. The part-of-speech (POS) tags follow the Penn Treebank II tag set. Our task is to support automatic answers not only to the exact question corrected by the curator but also to all questions with synonyms of the word "now". Looking into WordNet, we find all synonyms of the word "now" for the part of speech "adverb" (POS tag RB). The link shows that one of the synonyms of "now" (of type RB) is "today" (also of type RB). It would therefore be desirable to be able to answer the question "What time is it in Tokyo (New York, etc.) today". For some systems this approach may look a bit risky. Moreover, when using WordNet as the synonym base, it makes sense to deliberately configure the depth of search among WordNet synsets as well as the number of synonyms taken per synset. (More about WordNet and synsets at the link.)
Limitations for structurally similar queries. When a new query that failed automatic processing arrives, we try to find "similar" questions previously saved during curation. How we look for them and how they can help us answer the current question is discussed below. If a found sentence is "very" similar and differs, for example, only in the presence or absence of some stop-words, we can be absolutely certain that the new query will be processed just as successfully as the one already recognized and handled earlier. As the "degree of similarity" decreases, our confidence in the success of this procedure decreases somewhat. But even questions with a fairly similar structure can be handled differently. Suppose the assistant we are programming can answer only one question, "What is the weather in Tokyo now", and only for that one city. Then the quite "similar" question "What is the weather in London now" cannot be answered at all, despite the obvious structural closeness. In other words, the similarity criteria of the text-comparison system and of the user-command processing module may not coincide. Extending the model. Let us go through the process step by step. 1. The curator edits the text of a query that failed automatic processing, producing a question with the expected structure. 2. For the original text a set of search keys is built, consisting of entities such as GEO, NUMBERS, DATES and so on, the base forms (lemmas) of free words, POS tags, and possibly other useful information. Stop-words must be excluded from the keys. An example of such a set for "What time is it in Tokyo now" is shown above. Each key has a "weight" that depends on the type of transformation applied to the original text: the fewer transformations, the greater the weight. 3. In the extended model the following information is stored for each key: its weight, the initial version of the question, and the version edited by the curator. The model is thus enriched with new information. Using the data of the enriched model. What happens when a new command arrives that cannot be recognized automatically? It is not forwarded to the curator right away. First, a set of search keys is built from its text, and we try to find previously saved, curator-edited commands with the same keys built from their original texts. If no such commands are found, the query is forwarded for curation. If they are found, we try to use the discovered information to programmatically correct the newly arrived question according to the logic the curator already applied to the text of the found saved command. Having located the previously edited sentences stored in the model and chosen the one with the highest key weight, we have the following data: the text of the question we are trying to answer (1); the saved, curated question (2); the saved, curated, edited question that the system can answer automatically (2'). Our task is to identify the function that was applied to (2) to obtain (2') and apply that function to (1) to obtain a text (1') which, with some probability, can also be processed automatically. Finding and applying such a function is a non-trivial problem; the algorithm can be arbitrarily complex and can be continuously developed and improved as the system evolves. A description of possible implementation details is beyond the scope of this note.
If, for whatever reason, such a function is hard to find, or applying it produces a question that still cannot be processed automatically, control is handed over to the curator. All such queries that "fail" processing should be flagged, handled later in the second curation loop, and, at least in their current form, excluded from those available for search. More about the second curation loop below. The second curation loop. The first (main) curation loop is the process of editing user queries that failed automatic processing. We have mentioned it repeatedly in this article, and it is described in detail in previous notes. The first loop can operate in near real time and makes it possible to answer all types of user queries quickly. The second loop is a time-deferred check of the correctness of the first-stage curator's work. At the second stage it is possible to delete the results of the first-stage curation and, as a consequence, to exclude some previously saved sentences from the model. Also at the second stage the curator draws conclusions about the state of the model, the completeness of its elements' synonym descriptions, the correctness of the intent-building rules, and so on, and corrects and extends the model description. The second loop is very important for gradually improving the model's quality, identifying existing errors and gaps in its current state, and fixing them. Conclusions. Extending the model by analyzing the results of the curator's work is a powerful mechanism that reduces the time needed to describe the initial structure of the model and steadily improves its quality with each iteration. Using the second curation loop helps control the work of first-stage curators, analyze the state of the model, and update and extend it in a timely way, raising the share of automatically processed queries with every step.
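The article does not give DataLingvo's exact key-building code, so the sketch below is only an illustration of the idea under stated assumptions: tokens are already tagged as entities, stop-words, or free words, and we emit key variants of decreasing weight (entities + lemmas of free words, then entities + POS tags), mirroring the key examples quoted earlier. All class and function names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Token:
    text: str
    lemma: str
    pos: str          # e.g. a Penn Treebank tag such as "RB"
    entity: str = ""  # e.g. "TIME", "GEO:CITY"; empty for free words
    stop: bool = False

def search_keys(tokens):
    # Build (key, weight) pairs for a curated question.
    # Stop-words are dropped; entities keep their type; free words are
    # represented first by lemma (higher weight), then by POS tag (lower weight).
    useful = [t for t in tokens if not t.stop]
    by_lemma = "|".join(f"<{t.entity}>" if t.entity else t.lemma for t in useful)
    by_pos = "|".join(f"<{t.entity}>" if t.entity else t.pos for t in useful)
    return [(by_lemma, 1.0), (by_pos, 0.5)]

# "What time is it in Tokyo now" with <TIME> and <GEO:CITY> already detected.
question = [
    Token("What time is it", "what time is it", "WP", entity="TIME"),
    Token("in", "in", "IN", stop=True),
    Token("Tokyo", "tokyo", "NNP", entity="GEO:CITY"),
    Token("now", "now", "RB"),
]
print(search_keys(question))
# [('<TIME>|<GEO:CITY>|now', 1.0), ('<TIME>|<GEO:CITY>|RB', 0.5)]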
Training a Semantic Model with a Curator
1
обучение-семантической-модели-куратором-1c30c0a1ab2
2018-07-25
2018-07-25 08:08:56
https://medium.com/s/story/обучение-семантической-модели-куратором-1c30c0a1ab2
false
1,708
null
null
null
null
null
null
null
null
null
Chatbots
chatbots
Chatbots
15,820
Dmitriy Monakhov
null
2d0ded630db0
dmonakhov_47478
1
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-04
2018-07-04 11:10:55
2018-07-04
2018-07-04 12:05:33
4
false
tr
2018-07-05
2018-07-05 13:49:06
2
1c31a13774ee
2.662264
0
0
0
Naïve Bayes Classification
5
Artificial Intelligence Training Series — Part 9 Naïve Bayes Classification One of the most useful applications of Bayes' theorem is the rule known as the Naïve Bayes classifier. The Bayes classifier is a statistics and machine learning technique for classifying objects such as text documents into two or more classes. The classifier is trained by analyzing a set of training data for which the correct classes are given. In Naïve Bayes classification, the system is presented with a certain amount of labeled training data (for example, 100 items), and every training example must have a class or category. Using probability calculations over the training data, new test examples presented to the system are evaluated against the previously obtained probability values, and the system tries to determine which category each test example belongs to. Naturally, the more training data there is, the more reliably the true category of a test example can be determined. Naïve Bayes in the real world: spam email. As a working example of how a Naïve Bayes classifier is used in real life, we will use a spam email filter. Here the class variable indicates whether a message is spam ("junk email") or a legitimate message. The words in an incoming message correspond to the feature variables, so the number of feature variables in the model is determined by the length of the message. As you can see in the image above, certain words play a major role in classifying messages as "important" or "spam", and the algorithm works by focusing on these words. Based on the words it contains, it assigns the first message to the important class and classifies the second message as spam; the words that influence the result are shown in red. Estimating the parameters. To get started, we must set the prior probability of spam. For ease of calculation and understanding, let's assume it is 1:1, meaning that on average half of incoming messages are spam. (In reality, the proportion of spam is probably higher.) Determining which words appear in spam and which in legitimate mail is another important stage of the work, since the estimated parameters are sensitive to exactly these words. To make the method concrete, we save some spam and some legitimate emails into two files and analyze the text. Suppose we have counted the following words (together with all the other words in the messages) in the two message classes: we can now estimate the probability that a word occurs in a spam message. For example, the word "million" occurs 156 times among the 95,791 words in the spam messages, which is about 1 in 614. Likewise, 98 of the 306,438 words in legitimate mail are "million", which is about 1 in 3,127. Although both ratios look quite small, a quick calculation shows that "million" is roughly five times more frequent in spam than in legitimate messages, and this ratio carries us through an important stage of the classification.
Using the logic above, we can determine the probability ratio for every word without running into zero counts, which gives us the probability ratios shown below. You are now ready to apply this method to classify new messages. Of course, machines will do this work for you, but knowing how it is done, at least at a theoretical level, will make your job easier and help you understand what is happening. To study the Naïve Bayes classifier in more depth you may need additional resources; in particular, a short tutorial prepared by Assoc. Prof. Şadi Evren Şeker on how this formula is used in data science is available here: Computer Concepts — Naïve Bayes Tutorial. You can reach the tenth part of the Artificial Intelligence Training Series here.
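The word counts quoted above (156 out of 95,791 words in spam, 98 out of 306,438 in legitimate mail) translate directly into a likelihood-ratio classifier. The sketch below is a minimal hand-rolled illustration rather than the exact method from the post; the tiny training corpus is made up, and Laplace smoothing is added so that unseen words do not produce the zero counts mentioned above.

import math
from collections import Counter

def train(messages, labels):
    # Count words per class; return per-class counts, totals, and the vocabulary.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in zip(messages, labels):
        counts[label].update(text.lower().split())
    totals = {c: sum(counts[c].values()) for c in counts}
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, totals, vocab

def classify(text, counts, totals, vocab, prior_spam=0.5):
    # Naïve Bayes with Laplace smoothing; returns the more likely class.
    log_odds = math.log(prior_spam) - math.log(1 - prior_spam)
    v = len(vocab)
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + 1) / (totals["spam"] + v)
        p_ham = (counts["ham"][word] + 1) / (totals["ham"] + v)
        log_odds += math.log(p_spam) - math.log(p_ham)
    return "spam" if log_odds > 0 else "ham"

# Hypothetical training data.
msgs = ["win a million dollars now", "million dollar prize claim now",
        "meeting agenda for monday", "please review the project report"]
labels = ["spam", "spam", "ham", "ham"]
counts, totals, vocab = train(msgs, labels)
print(classify("claim your million dollars", counts, totals, vocab))  # likely "spam"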
Artificial Intelligence Training Series — Part 9
0
yapay-zeka-eğitim-serisi-bölüm-9-1c31a13774ee
2018-07-05
2018-07-05 13:49:06
https://medium.com/s/story/yapay-zeka-eğitim-serisi-bölüm-9-1c31a13774ee
false
520
null
null
null
null
null
null
null
null
null
Yapay Zeka
yapay-zeka
Yapay Zeka
502
Fatih Bildirici
Start-up enthusiast, Software Analyst, Marketing Analyst, Geek, MIS Specialist, Comic lover, Data Sapiens, Muggle.
47532a1d23b4
fatihbildiriciii
37
27
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-26
2018-03-26 03:12:05
2018-03-26
2018-03-26 03:17:29
2
false
en
2018-04-25
2018-04-25 00:35:34
1
1c31f34582c3
3.168239
0
0
0
Adrian Colyer
1
Evaluation of Credibility https://www.accel.com/team/adrian-colyer Adrian Colyer I would like to evaluate a blog post written by Adrian Colyer, the author of The Morning Paper blog. The Morning Paper is Colyer's blog, which regularly discusses what is happening in the field of computer science. The blog post that I have chosen is titled Artificial Intelligence and Life in 2030 and was written by Colyer on November 21, 2016. It briefly summarizes a 2016 report from a Stanford University project called the One Hundred Year Study on Artificial Intelligence. Colyer talks about how he perceives the information given in this study and explains some projections for the future of AI. He discusses the major hot topics in the world of AI today and explains what we should expect to see in the next twenty years from specific AI categories. Blog: After looking through Colyer's blog and researching what it is about, I found The Morning Paper to be a credible source because of his credentials, references, and experience with AI. The Morning Paper is a highly respected blog in the world of computer science, which I can see from all of the feedback and reviews on his posts. Adrian Colyer has the proper credentials and is well qualified to discuss the topic of AI. He also has a huge following in both the computer science and AI communities, which tells me that he can be trusted. In addition, the study he has chosen to explain and quote throughout his writing comes from Stanford University, an extremely prestigious university and certainly a reliable source. All of these reasons support my case that Colyer and his blog post are credible. Experience: Adrian Colyer has many years of experience in the fields of computer science and technology. He has been part of open source infrastructure for over twenty years, was the CTO of large companies such as SpringSource, VMware, and Pivotal, and is now a venture partner at Accel working with software-based companies across Europe. He is also the author of The Morning Paper, which reviews and discusses research in computer science. Sources: Colyer cites and quotes the One Hundred Year Study on AI from Stanford University. He pulls quotes from the study to give the reader a better idea of what it found and to help explain his points. He uses these quotes well throughout his writing and puts them in a different font and color so you can tell which parts come from him and which come from the study. That said, Colyer did not do the best job of citing and integrating the quotes: he briefly cites the study at the top of the article, mentions it a few times, and then starts inserting quotes without really explaining where they came from. We can still assume that the quotes are from the Stanford study even though he doesn't make that entirely clear, and I remain confident that this blog is credible. Following: Colyer's blog, The Morning Paper, has a huge following as well. I can see this from his blog, which has thousands of followers and many readers who comment feedback on his posts. His Twitter page, which has nearly twenty thousand followers, is linked all over his blog and is essentially his form of communication with his readers. It seems the computer science community respects Colyer and his points of view, which is also a good sign when it comes to his credibility.
Overall Credibility I enjoyed how Colyer structured his writing; he did a great job answering many of the questions that I had about AI and explained how it would affect different parts of our world. He discusses the various areas that AI will either help or harm in our future: transportation, health care, home service, education, low-resource communities, public safety and security, entertainment, and employment in the workplace. He provides quotes for each category from the One Hundred Year Study on AI. For all of these reasons, I believe that Colyer knows what he is talking about when it comes to AI and that he is a credible source. Works Cited: Colyer, Adrian. "Artificial Intelligence and Life in 2030." The Morning Paper, 31 July 2017, blog.acolyer.org/2016/11/21/artificial-intelligence-and-life-in-2030/.
Evaluation of Credibility
0
evaluation-of-credibility-1c31f34582c3
2018-04-25
2018-04-25 00:35:34
https://medium.com/s/story/evaluation-of-credibility-1c31f34582c3
false
738
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Justin Thomas
Hello, my name is Justin Thomas. I go to San Francisco State University and I am majoring in business marketing. This is my blog about Artificial Intelligence.
fd1d0c0779c1
justinthomas_79251
3
1
20,181,104
null
null
null
null
null
null
0
null
0
f5af2b715248
2017-12-11
2017-12-11 09:59:22
2017-12-11
2017-12-11 10:34:29
3
false
en
2018-01-18
2018-01-18 13:23:02
10
1c334002487b
4.365094
15
0
0
Automation is perceived as a cost-saving shortcut, but until we can design better conversations with machines, firms can’t rely on robots…
5
How to design better conversations with machines Automation is perceived as a cost-saving shortcut, but until we can design better conversations with machines, firms can’t rely on robots to build trust in their services. The robots are coming for us. The University of Oxford predicts that 47% of jobs worldwide will be automated by the year 2033, while research by the Organization for Economic Cooperation and Development (OECD) claims that almost 10% of jobs across its member countries are fully automatable. With robots moving from factories to offices, we need to make sure they’re used to improve our services, not just make them cheaper. In essence, we need to design better conversations with machines. Smart assistants that use artificial intelligence (AI) have been sci-fi’s holy grail for decades, but it’s easy to feel underwhelmed with the technology now that it’s here. To be frank, Alexa doesn’t come across as the smartest cookie in the jar, nor does she make the best company over a glass of wine in the evening. So where do we go from here? Having studied human-machine interaction at university, I’ve watched developments in AI from innovative companies like Amazon and Google with interest over the past few years. How are the big players tackling the challenges we faced with tech more than a decade ago? How close are smart assistants to being able to understand the subtle ambiguities of natural language in 2017? Have today’s robots been taught to understand the sometimes vague needs and desires of consumers? “I’m sorry, I didn’t understand the question that I heard.” Amazon and Google’s product offerings may be the pinnacle of digital technology, but anyone who has interacted with Alexa or Home will know smart assistants are still fraught with frustration. During periods of uncertainty, short-term thinking sets in, budgets are tightened and operational costs rise. Across the services sector, firms are looking for new ways to find efficiencies, and that’s why more and more of our clients have been talking to us about robots. Machines may well be the answer — at least in part — but they need to improve fast if they’re going to live up to the hype that surrounds them. Imagine a hospital where you can’t see a GP because the robot secretary can’t understand your accent, or a bank where you can’t access your savings because there’s a typo in their records. Minor misunderstandings like this can easily be resolved by humans, but when we rely on machines to do the work, we have to play by their rules. Or at least design around their limitations. So how can we make robots not only usable, but natural and intuitive too? While most people will be familiar with Siri or Alexa, you don’t need to be Apple or Amazon to start using this technology. Mycroft AI is an open source alternative that enables anyone with a Raspberry Pi to cobble together their own personal assistant. Andrew Vavrek, the project’s community manager, believes it’s never been cheaper or easier for firms to build this technology into their services. He says: “We’re in a new era of development, with a growing focus on manufacturing devices to adapt to users rather than the other way around. Enabling this requires data collection, machine learning, and improving intent analysis of utterances. Essentially, our machines will continue to grow and learn along with us as we use them. And as the machines improve, so do the services we can build with them.” At the moment, digital assistants never seem to know whether they’re being talked to or not.
There’s nothing more uncanny than when Alexa eerily chimes into conversations with inane statements like: “That’s good to know.” These accidental wake-ups are only minor inconveniences, but each time they happen, people become more inclined to reach for the power plug. Vavrek believes that trust is a key component for our relationships with machines, and if we want machines to build that trust we need to stop thinking visually: “As we learn how to design better conversations, poking and prodding at screens will become obsolete and soon we’ll look back at keyboards and mice with nostalgia.” For the past two decades, UX designers have been streamlining graphical user interfaces to make them simpler, flatter and more responsive. But these tricks only work for simple interactions. When it comes to designing more complex interactions with machines, we need to think carefully about how to use AI’s learning and improvisation abilities to better understand how people naturally speak. The future of AI will heavily depend upon designers and engineers identifying where machine learning processes can make the most impact. As the technology continues to evolve over the next couple of years, it will adapt to provide an ever increasing number of services. For now though, AI needs to learn to crawl before it can run. Consumers want to feel like they’re being understood, not just listened to. Technology may seem like it’s advancing quickly, but so far few service providers have been able to deliver truly meaningful human-machine interactions. Get in touch if you’d like to co-design one in your industry. About the author: Sarah Ronald is founder of Nile, a Gold Cannes award winner and a pioneer in the field of commercial service design. She holds an MA in Psychology and an MSc in Human Computer Interaction, and was an executive steering member of the British Interactive Media Association for 5 years. About Nile | Service design with ambition We develop solutions and strategies that are right for your brand, your business and your customers. Our backgrounds stem from behavioural research and digital design. We’re passionate about designing simple, effective products and services. If you have a challenge for us, get in touch — hello@nilehq.com. Read more from Nile on Twitter, Facebook and LinkedIn, or sign up for our newsletter to receive updates on digital innovation. This story is published in The Startup, Medium’s largest entrepreneurship publication followed by 273,384+ people. Subscribe to receive our top stories here.
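As an illustration of what "designing around their limitations" can mean in practice, here is a small, purely hypothetical Python sketch of the policy logic that could sit behind a wake-word detector: answer only when confidence is high, ask for a soft confirmation in a grey zone, and stay silent otherwise. The names and thresholds are invented and do not come from Alexa, Google Home or Mycroft.

# Illustrative only: a toy policy for deciding when a voice assistant
# should react to a possible wake-word. All names and thresholds here
# are hypothetical, not taken from any real assistant's API.

RESPOND_THRESHOLD = 0.85   # confident enough to answer immediately
CONFIRM_THRESHOLD = 0.60   # unsure: signal softly instead of speaking up

def wake_word_action(confidence: float) -> str:
    """Map a wake-word detector's confidence score to an action."""
    if confidence >= RESPOND_THRESHOLD:
        return "respond"
    if confidence >= CONFIRM_THRESHOLD:
        return "ask_confirmation"   # e.g. a quiet chime rather than speech
    return "ignore"                 # stay silent rather than chime in

if __name__ == "__main__":
    for score in (0.95, 0.72, 0.30):
        print(score, "->", wake_word_action(score))

The point of the sketch is the middle band: a system that occasionally asks "did you call me?" feels far less uncanny than one that barges into the conversation on a weak guess.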
How to design better conversations with machines
60
how-to-design-better-conversations-with-machines-1c334002487b
2018-05-03
2018-05-03 22:50:49
https://medium.com/s/story/how-to-design-better-conversations-with-machines-1c334002487b
false
1,011
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
null
null
null
The Startup
null
swlh
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
thestartup_
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Nile HQ
Nile HQ is a Design Practice. We use Forward Thinking Design to create products and services that meet the needs of today and tomorrow’s people.
f9ed996b1811
NileHQ
577
338
20,181,104
null
null
null
null
null
null
0
$ pact install xinit xorg
$ export DISPLAY=<your-machine-ip>:0.0
$ startxwin -- -listen tcp &
xhost + <your-machine-ip>
docker run --name my_container -e DISPLAY="$DISPLAY" -v <my working dir>:/mnt/workspace spacemacs/emacs25:develop
docker start my_container
5
null
2018-08-06
2018-08-06 19:05:34
2018-08-06
2018-08-06 19:20:23
1
false
es
2018-08-06
2018-08-06 19:27:08
1
1c33c2190c70
1.181132
0
0
0
[English version here: Dockerized Spacemacs]
5
Spacemacs in Docker [English version here: Dockerized Spacemacs] The other day we talked about how to create an R programming environment based on Emacs + ESS with some interesting additions, such as a POSIX terminal with ZSH for Windows, and a friend reminded me that there are probably even more restrictive situations in which you only have a terminal emulator and little else. Well, for those cases, if you have the option of using Docker, you only need to follow these steps to use the full power of Emacs (Spacemacs + ESS) in a Windows environment using the distribution's official container. There are two very simple steps: 1- Install xinit and xorg in your POSIX environment (Cygwin/mintty/babun…) using the terminal. 2- Set the environment variables so that the container offers visual access that is not terminal-based. The commands for babun, which is the environment I know, are simple, but I assume you all know how to install packages in your preferred POSIX layer. Once that is installed, run the following commands (inserting your machine's IP): The environment is now ready to bring up the container based on the official Spacemacs image. Keep in mind that this will be the first run, so it will have to download all the layers of the image: Subsequent runs simply need to start the previously created container: Enjoy!
Spacemacs en Docker
0
spacemacs-en-docker-1c33c2190c70
2018-08-06
2018-08-06 19:27:08
https://medium.com/s/story/spacemacs-en-docker-1c33c2190c70
false
260
null
null
null
null
null
null
null
null
null
Emacs
emacs
Emacs
246
SWIMMING IN THE DATA LAKE
Some hints & discoveries on my path to knowledge
8e5ba4b95d52
verajosemanuel
47
53
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-07
2018-08-07 15:26:28
2018-08-06
2018-08-06 00:00:00
4
false
en
2018-08-07
2018-08-07 15:28:45
6
1c36086bb68
3.979245
0
0
0
Rachel Oldroyd, one of the UK Data Service Data Impact Fellows, takes a step-by-step approach to using R and RStudio to analyse Food…
4
Analysing Food Hygiene Rating Scores in R: a guide Rachel Oldroyd, one of the UK Data Service Data Impact Fellows, takes a step-by-step approach to using R and RStudio to analyse Food Hygiene Rating Scores. Data download and Preparation In this tutorial we will look at generating some basic statistics in R using a subset of the Food Hygiene Rating Scores dataset provided by the Food Standards Agency (FSA). Visit http://ratings.food.gov.uk/open-data/en-GB now and download the data for an area you are interested in. I’ve downloaded City of London Corporation. R is able to parse XML files but it’s easier to load the file into Excel (or a similar package) and save it as a CSV file (visit this page if you’re unsure how to do this: https://support.office.com/en-us/article/import-xml-data-6eca3906-d6c9-4f0d-b911-c736da817fa4). R and RStudio R is a statistical programming language and data environment. Unlike other statistics software packages (such as SPSS and Stata) which have point-and-click interfaces, R runs from the command line. The main advantage of using the command line is that scripts can be saved and quickly rerun, promoting reproducible outputs. If you’re completely new to R, you may want to follow a basic tutorial beforehand to learn R’s basic syntax. The most commonly used graphical user interface for R is called RStudio (https://www.rstudio.com/products/rstudio/) and I highly recommend you use this, as it has nifty functionality such as syntax highlighting and auto-completion which helps ease the transition from point-and-click to command line programming. Basic Syntax Once installed, launch RStudio. You should see something similar to this setup, with the ‘Console’ on the left-hand side, the ‘Environment window’ on the top right and another window with several tabs (Files, Plots, Packages, Help, Viewer) on the bottom right: Don’t worry if your screen looks slightly different; you can visit View > Panes from the top menu to change the layout of the windows. The console area is where code is executed. Outputs and error messages are also printed here, but content within this area cannot be saved. As one of the main advantages of using R is its ability to create easily reproducible outputs, let’s create a new script which we can save and rerun later. Hit CTRL+SHIFT+N to create a new script. Save this within your working directory using the save icon. Loading Data Let’s get on with loading our data. Type data = read.csv(file.choose()) into the script file and hit CTRL + Enter whilst your cursor is on the same line to run the command; you can also highlight a block of code and use CTRL + Enter to run the whole thing. You should see a file browser window; navigate to the CSV file you saved earlier containing the FHRS data. Note the syntax of this command: it creates a variable called data on the left-hand side of the equals sign and assigns to it the file loaded in using the read.csv command. Once loaded, you should see the new variable, data, appear in the environment window on the right-hand side. To view the data you can double click on the variable name in the environment window and it will appear as a new tab in the left-hand window. Note the variables that this data contains. The object includes useful information such as the business name, rating value, last inspection date and address. Summary statistics Let’s do some basic analysis.
To remove any records with missing values, first run the complete.cases command: data = data[complete.cases(data),] Here we pass our data variable into complete.cases, which removes any incomplete cases and overwrites our original object. To run some basic statistics we need to convert the RatingValue variable to an integer: data$RatingValue = strtoi(data$RatingValue, base = 0L) Note how we use the $ to access the variables of our data object. To see the minimum and maximum rating values of food outlets in London we can use the min and max functions: min(data$RatingValue) max(data$RatingValue) These commands simply give us the minimum and maximum values without any additional information. To see the full records for these particular establishments we can take a subset of our data to include only those which have been awarded a zero-star rating, for example: star0 = data[which(data$RatingValue==0), ] Creating a graph Lastly, let's create a bar chart to look at the distribution of star ratings for food outlets in London. We will use the ggplot2 library; to install and then load this library, call: install.packages("ggplot2") library(ggplot2) To create a simple bar chart use the following code: ggplot(data = data, aes(x = RatingValue)) + geom_bar(stat = "count") Here you can see we have passed RatingValue as the x-axis variable in the 'aesthetics' function and passed in 'count' as the statistic. The output should look something like this: To add x and y labels and a title to your graph, add the labs command at the end of the previous line of code: ggplot(data = data, aes(x = RatingValue)) + geom_bar(stat = "count") + labs(x = "Rating Value", y = 'Number of Food Outlets', title = 'Food Outlet Rating Values in London') Originally published at lab.ukdataservice.ac.uk on August 6, 2018.
Analysing Food Hygiene Rating Scores in R: a guide
0
analysing-food-hygiene-rating-scores-in-r-a-guide-1c36086bb68
2018-08-07
2018-08-07 15:28:45
https://medium.com/s/story/analysing-food-hygiene-rating-scores-in-r-a-guide-1c36086bb68
false
869
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
UKDS Impact
The UK’s largest collection of UK and international social, economic and population data. Funded by ESRC. Writing about how the data we hold makes impact.
24f2bbf14aa7
ukdsimpact
5
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-23
2018-08-23 17:21:39
2018-08-23
2018-08-23 18:02:58
0
false
en
2018-08-23
2018-08-23 18:02:58
2
1c397ce61c9f
1.483019
0
0
0
One of the essential instruments of Ignitus is the internship, both internal and external. Not only do we select the best candidates, those…
3
Rethinking Ignitus Scholar Research Projects: “ML-score” One of the essential instruments of Ignitus is the internship, both internal and external. Not only do we select the best candidates, those who best fit the profiles that universities and industries request, but internally we also create positions for our own staff (more permanent), as well as for specific projects (usually about one month in duration). Internshala and AngelList are our main publication channels for all opportunities, and the work of understanding each profile and matching the CVs we receive to these profiles requires a lot of time and effort. Does this mean it can be automated? Imagine that we are looking for the candidate profiles closest to a target profile. It is easy to think of using the k-NN algorithm and others of the same style. In this project we propose, starting from information extracted from résumés, to support automatic summarization, management and routing: from cascaded information extraction to the use of the Horspool and Karp-Rabin string-matching algorithms (text mining) for automated summary extraction. Among the tools planned is the direct extraction of information from résumés in PDF format, by segmenting a page into blocks according to heuristic rules and then classifying each block using a Conditional Random Field model. The people who take part in this project must not only have strong analysis and synthesis skills; the programming workload is also considerable: it is about creating a real tool for a real need of a real company: Ignitus. Interested? There is still time to enroll in this project. If you do not belong to Ignitus yet, click on http://bit.do/join-ignitus and, once inside our Slack, send me a DM. Let's talk and reach common solutions. In any case, you can also write me an email at the following address: afelio@ignitus.org Ignitus is a non-profit initiative for the welfare of the student community that helps students and professionals get handpicked, top-quality global research and industrial internships. The students participate in projects and training programs supervised by our experts. Ignitus is made with love by students and researchers from Stanford, MIT, Princeton, Georgia Tech, SUNY, Harvard, Oxford, UCB, UCLA, USC, etc. Team Ignitus boasts a dedicated workforce from Boston, Miami, Pittsburgh, Madrid, Houston, Munich, Princeton, Los Angeles, Vancouver and different parts of India. Afelio Padilla COO, Ignitus
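As a rough sketch of the k-NN idea mentioned above, the snippet below ranks candidates by distance from a target profile using scikit-learn. The feature vectors and names are invented for illustration; a real pipeline would first have to extract such features from the CVs (skills, years of experience, publications, and so on).

# Hypothetical example: rank candidate profiles by closeness to a target profile.
import numpy as np
from sklearn.neighbors import NearestNeighbors

candidates = np.array([
    [5, 3, 1],   # years of experience, ML projects, publications (made-up)
    [2, 0, 0],
    [7, 5, 2],
    [1, 1, 0],
])
names = ["cand_a", "cand_b", "cand_c", "cand_d"]

target_profile = np.array([[6, 4, 1]])   # what the internship asks for (made-up)

nn = NearestNeighbors(n_neighbors=2).fit(candidates)
distances, indices = nn.kneighbors(target_profile)

for dist, idx in zip(distances[0], indices[0]):
    print(f"{names[idx]}: distance {dist:.2f}")

The k-NN step is only the final ranking; the hard part the project describes is producing those feature vectors from unstructured résumés in the first place.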
Rethinking Ignitus Scholar Research Projects: “ML-score”
0
rethinking-ignitus-scholar-research-projects-ml-score-1c397ce61c9f
2018-08-23
2018-08-23 18:02:58
https://medium.com/s/story/rethinking-ignitus-scholar-research-projects-ml-score-1c397ce61c9f
false
393
null
null
null
null
null
null
null
null
null
Startup
startup
Startup
331,914
Afelio Padilla
null
2705f83dc3b
afelio
16
78
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-18
2018-05-18 14:16:07
2018-05-18
2018-05-18 14:22:32
1
false
pt
2018-05-18
2018-05-18 14:46:31
0
1c39bcb855f0
2.732075
1
0
0
Álvaro Pereira, journalist and CEO of AP Comunicação
5
Between the like and the handshake Álvaro Pereira, journalist and CEO of AP Comunicação (Article published in Correio Braziliense on 18/05/2018) The next presidential elections should mark the country's political history with some unusual features, starting with the fact that the pre-candidate with the best performance in the polls, Luiz Inácio Lula da Silva, is in prison and barred from taking part in the campaign. He was hit by the wave of accusations from Operation Lava-Jato and convicted in two instances of corruption and money laundering. Other pre-candidates, such as Geraldo Alckmin, face court proceedings that do not yet make them ineligible, but the mere publicity around the accusations may already hurt them electorally. In addition, the outrageous corruption scheme uncovered by Lava-Jato led the National Congress to approve legislation that considerably limits campaign spending. Corporate donations to candidates were banned, and donations from individuals were capped at a minimal percentage, not to exceed 10% of the citizen's gross income in the year before the election. In this scenario of scarce resources, the political parties present themselves as the big financiers, able to use the Fundo Partidário (Party Fund) to cover campaign costs. However, the biggest differentiator of the coming process will be the leading role of the internet and everything it brings in terms of technology, such as artificial intelligence and data science. A quick flashback: in the last presidential elections, four years ago, Brazilians were already on social networks and some parties knew how to take good advantage of them (the PT, for example). But the virtual spaces were used in a relatively transparent and predictable way. Political activists, volunteers or not, fed the ideological and programmatic debate with a greater or lesser degree of radicalization. In 2018, by contrast, the campaign already begins under the strong influence of social networks (Facebook, Instagram, Twitter, WhatsApp), accessed most of the time via smartphones. The political clash between the candidates will shift, to a large extent, from the TV set, or even the computer, to the phones of millions of voters spread across the country. The new electoral legislation will favor this change by allowing parties and candidates to use their profiles to spread programs and ideas both organically and through paid promotion. The catch is that, alongside the exposure regulated by law, there will also be the dirty game of fake news and of profiles built by hackers and operated by bots, clandestinely and illegally. Look at the accusations of data manipulation in the last American election. The investigations reveal that the false content used against Hillary Clinton on social networks was created and amplified by Russian intelligence services to benefit Donald Trump. Subsequently, the British consulting firm Cambridge Analytica, which worked on Trump's campaign, was accused of misusing Facebook users' data and ended up closing its doors. At this point, an explanation is in order. It is one thing to use data science in a transparent and legitimate way to identify the public's habits and behaviors in order to guide communication strategies. Digital platforms such as Google, Twitter and Facebook allow access to this kind of information, which can be obtained through careful "mining" and cross-referencing of millions of data points.
Not by chance, the data scientist is today one of the most valued professionals in the communications field. Something else, quite different, is to anonymously spread fake content that attacks people's honor, confuses voters and does democracy a disservice. That is an evil to be fought rigorously in the next elections! Finally, one should not imagine that the strong presence of social networks in the campaign will do away with other important channels, such as radio and TV, or that virtual communication will entirely replace real contact between candidate and voter. A like will never be as authentic, as a statement of voting intention, as a warm hug or a handshake.
Entre o like e o aperto de mão
1
entre-o-like-e-o-aperto-de-mão-1c39bcb855f0
2018-05-18
2018-05-18 15:39:36
https://medium.com/s/story/entre-o-like-e-o-aperto-de-mão-1c39bcb855f0
false
671
null
null
null
null
null
null
null
null
null
Comunicacao Digital
comunicacao-digital
Comunicacao Digital
170
AP Exata Inteligência em Comunicação Digital
We are a data-driven communications agency.
bd2d12239f90
apcomunicacao950
12
82
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-25
2018-09-25 07:11:12
2018-09-25
2018-09-25 07:12:20
0
false
en
2018-09-25
2018-09-25 07:12:20
1
1c3a0234cf95
1.969811
0
0
0
Despite the importance of stating the measurement uncertainty and analysis, the concepts are still not widely applied by the broader…
5
Measurement uncertainty courses in Singapore Malaysia Brunei Despite the importance of stating measurement uncertainty and analysing it, the concepts are still not widely applied by the broader scientific community. The Guide to the Expression of Uncertainty in Measurement, which these courses follow, approves the use of the partial derivative approach as well as alternative approaches. There are some limitations to the partial derivative approach. First, it involves computing the first-order derivative for each component of the output quantity, which requires some mathematical skill and can be tedious if the mathematical model is complex. Second, it cannot accurately predict the probability distribution of the output quantity if the input quantities are not normally distributed, and knowledge of that probability distribution is essential to determine the coverage interval. The measurement uncertainty courses in Singapore, Malaysia and Brunei aim to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and that is simple enough to be used by students with basic computer skills and minimal mathematical knowledge. Attendees learn more about measurement uncertainty for both testing and calibration laboratories, including the steps required, accepted practices, and the types of uncertainties that an accredited laboratory needs to include. By covering the importance of measurement uncertainty analysis and its impact on the marketplace through tests and measurements, the courses teach attendees techniques to calculate best measurement capabilities. Estimating measurement uncertainty is a core activity in measurement laboratories and on production lines in modern-day industry, and the ISO 9000, QS 9000 and ISO/TS 16949 standards have placed more focus on this issue with new and upgraded requirements. The main objective of the courses is to give an improved understanding of measurement uncertainty and of the methods required to evaluate it, and to examine the circumstances in which these methods are applied. The courses also support the provision of scientifically defensible results, emphasize model-based uncertainty evaluation, and help tackle more challenging uncertainty evaluation problems. They are based on the considerable experience the presenters have gained of the needs of practitioners in measurement uncertainty evaluation. The reason this course matters is that it covers the underpinning statistical concepts used to develop a measurement model and to formulate the uncertainty evaluation. Through various examples and case studies the course details the main approaches to measurement uncertainty evaluation. Specific modules, such as metrological work and the calibration of sensors and instrumentation systems, or generic testing work, are also included. During the training, delegates are encouraged to bring forward their own problems and queries so that these can be answered and solved within the program. The course is designed to meet testing and measurement requirements and to give customers the technical edge required to be truly world class.
The measurement uncertainty courses can help solve business-critical problems as responsively and affordably as possible.
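For reference, the partial derivative approach discussed above is the GUM's law of propagation of uncertainty. In its simplest form, for a measurement model y = f(x_1, ..., x_N) with uncorrelated input quantities, the combined standard uncertainty u_c(y) is obtained from the input standard uncertainties u(x_i) as:

u_c^2(y) \;=\; \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)

Each squared partial derivative acts as a sensitivity coefficient that weights how strongly the uncertainty of that input propagates into the result, which is why complex models make this approach tedious to apply by hand.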
Measurement uncertainty courses in Singapore Malaysia Brunei
0
measurement-uncertainty-courses-in-singapore-malaysia-brunei-1c3a0234cf95
2018-09-25
2018-09-25 07:12:20
https://medium.com/s/story/measurement-uncertainty-courses-in-singapore-malaysia-brunei-1c3a0234cf95
false
522
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
consult glp
null
9210ef466614
consultglp576
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-21
2017-10-21 14:23:43
2017-10-21
2017-10-21 14:26:59
4
false
en
2017-10-21
2017-10-21 14:26:59
0
1c3a52989aef
2.466038
4
0
0
Deduction: Given the rule and the cause, deduce the effect.
1
Few Machine Learning Concepts Deduction: Given the rule and the cause, deduce the effect. Induction: Given a cause and an effect, induce a rule. Abduction: Given a rule and an effect, abduce a cause. TAXONOMY: What? — Parameters, structure, hidden concepts What from? — Supervised, Unsupervised, Reinforcement What for? — Prediction, diagnostics, summarization How? — Passive, active, online, offline Outputs? — Classification, Regression Details? — Generative, Discriminative Occam’s Razor — Everything else being equal, choose the less complex hypothesis. The ultimate goal of machine learning is to have data models that can learn and improve over time. Evaluation Metrics: Learn from data to make predictions via classification and regression. Classification is about deciding which categories new instances belong to: when we see new objects we can use their features to guess which class they belong to. In regression, we want to make a prediction on continuous data. In classification, we want to see how often a model correctly or incorrectly identifies a new example, whereas in regression we are more interested in how far off the model’s prediction is from the true value. Classification ⇒ Accuracy, precision, recall and F-score. Regression ⇒ Mean absolute error and mean squared error. Shortcomings of accuracy: It is not ideal for skewed classes; we may want to err on the side of guessing innocent, or we may want to err on the side of guessing guilty. Causes of Error: Bias, due to a model being unable to represent the complexity of the underlying data. Variance, due to a model being overly sensitive to the limited data it has been trained on. Bias occurs when a model has enough data but is not complex enough to capture the underlying relationships. As a result, the model consistently and systematically misrepresents the data, leading to low accuracy in prediction. This is known as underfitting. To overcome error from bias, we need a more complex model. Variance is a measure of how much the predictions vary for any given test sample. High sensitivity to the training set is also known as overfitting, and it occurs when the model is too complex. We can typically reduce the variability of a model’s predictions and increase precision by training on more data. If more data is unavailable, we can also control variance by limiting our model’s complexity. Data Types: Numeric data Categorical data Time-series data Curse of Dimensionality As the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially. Learning Curves Bias ⇒ When the training and testing errors converge and are quite high, this usually means the model is biased. Variance ⇒ When there is a large gap between the training and testing errors, this generally means the model suffers from high variance. Alright, that’s it for now! Thank you for spending your time. Cheers!
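To make these evaluation metrics concrete, here is a minimal scikit-learn sketch; the toy labels and predictions are made up for illustration.

# Classification metrics (accuracy, precision, recall, F1) and regression
# metrics (MAE, MSE) on tiny made-up examples.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_absolute_error, mean_squared_error)

# Classification: how often are we right, and which kinds of errors do we make?
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F-score  :", f1_score(y_true, y_pred))

# Regression: how far off are the predictions from the true values?
y_true_r = [2.5, 0.0, 2.1, 7.8]
y_pred_r = [3.0, -0.5, 2.0, 8.0]
print("MAE:", mean_absolute_error(y_true_r, y_pred_r))
print("MSE:", mean_squared_error(y_true_r, y_pred_r))

On a heavily skewed dataset the accuracy line can look excellent while precision or recall collapses, which is exactly the shortcoming of accuracy described above.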
Few Machine Learning Concepts
4
few-machine-learning-concepts-1c3a52989aef
2017-11-30
2017-11-30 14:25:22
https://medium.com/s/story/few-machine-learning-concepts-1c3a52989aef
false
468
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Akhilesh
Machine Learning and Blockchain. Bengaluru, India.
d5135b7a4d7d
Akhilesh_k_r
22
67
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-27
2018-06-27 09:24:57
2018-06-29
2018-06-29 05:21:36
2
false
en
2018-06-29
2018-06-29 05:37:26
8
1c3ab69992a1
0.83239
10
0
0
Tech Talk Luncheon On May 15, CapWealth Advisors partnered with FirstBank and Belmont University
5
Blockchain, Crypto and AI. Harpreet Singh. Expercoin Republics & Experfy founder. Today we bring to this space the talk held on May 15, when CapWealth Advisors partnered with FirstBank and Belmont University to host a Tech-Talk Luncheon featuring Harvard-based entrepreneur and scholar Dr. Harpreet Singh, who spoke about artificial intelligence, blockchain technology and cryptocurrency. Below is a recording of the presentation. Harpreet Singh. Expercoin Republics & Experfy founder. This is how Harpreet Singh introduces himself on his Twitter: Harvard-trained PhD. Founder @Experfy (Harvard Innovation Launch Lab) and @Expercoin. Co-Founder @Sikh_Coalition. Founder/Host, Masters of #Blockchain podcast. Expercoin’s goal is to achieve platform self-governance. Boston, MA experfy.com. And here is the talk, embedded: CapWealth Advisors Chairman, Tim Pagliara (@timpagliara), and @Expercoin Founder, Harpreet Singh (@hsingh), educating Nashville elites. And the tour continues…
Blockchain, Crypto and AI. Harpreet Singh. Expercoin Republics & Experfy founder.
131
blockchain-crypto-and-ai-harpreet-singh-expercoin-republics-experfy-founder-1c3ab69992a1
2018-06-29
2018-06-29 05:37:26
https://medium.com/s/story/blockchain-crypto-and-ai-harpreet-singh-expercoin-republics-experfy-founder-1c3ab69992a1
false
119
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Juan José Calderón Amador * ✘ ★
elige la cadena de la vida abc1chde2ghij3…✘ⓔ-ⓝⓐⓤⓣⓐ, ⓔ-ⓜⓔⓝⓣⓔ Sevilla ★ blockchain ★ elearning ★Ⓐrⓣ★ education ★ P2P ★ economy ★ PhD student @fceyeUS @unisevilla
fda52c756acb
eraser
1,796
3,135
20,181,104
null
null
null
null
null
null
0
null
0
be3acca3475
2018-03-28
2018-03-28 13:14:36
2018-04-07
2018-04-07 14:38:19
2
false
en
2018-04-07
2018-04-07 14:38:19
7
1c3b294feef0
2.805975
0
0
0
Machine translation is a goldmine and also part of our future. Whether some like it or not. And a lot of companies are racing each other to…
5
Facebook is analyzing the “uncertainty” in NMT models Machine translation is a goldmine and also part of our future, whether some like it or not. And a lot of companies are racing each other to the top to be the first to get hold of the perfect system, one that can offer a golden translation. There is a lot of research out there in the field and advancements are taking place by the minute but, to be fair, there is also a lack of understanding and comprehension of NMT. Enter Facebook AI Research (FAIR), which published a paper on the inner workings of NMT models. The team behind the project wanted to understand “uncertainty” in NMT and propose ways to reduce its negative impact on the text output. The core problems they tackled include performance degradation with large beams, the under-estimation of rare words, and the lack of diversity in final translations. FAIR researchers noticed that NMT battles two kinds of uncertainty when performing translations: intrinsic and extrinsic uncertainty. The study describes intrinsic uncertainty as deriving from the fact that there are multiple “valid” translations for a single source sentence. Part of this problem is the lack of context in translations, the lack being both cultural and grammatical: “Without additional context, it is often impossible to predict the missing gender, tense, or number, and therefore, there are multiple plausible translations of the same source sentence.” Among the problems researched, the team defined a further category of “extrinsic uncertainties”, which often show up as noise in the data. Such uncertainties can be encountered when human-translated, high-quality text clashes with “lower quality web crawled data”. Yet another problem arises from partial translations in the corpora and was identified as such: “Target sentences may only be partial translations of the source, or the target may contain information not present in the source. A lesser-known example are target sentences which are entirely in the source language, or which are primarily copies of the corresponding source.” The datasets they used came from the 2014 Conference on Machine Translation and covered English-German and English-French translation. Problem-solving proposal After analyzing the core problems and identifying the causes, the researchers formulated a solution for mitigating extrinsic uncertainties. The first proposal suggests that low-scoring sentences or duplicates should be removed from the NMT corpora (their base data). As an example, they used the English-German news-commentary portion of the 2017 Conference on Machine Translation to showcase their results. Secondly, another solution offered was to eliminate source copying wherever it occurs frequently. For this, they designed an algorithm to comb through datasets and identify overlapping or duplicate sentences; the program flags sentences with more than 50% overlap. After showcasing the solutions, they recommended that both options be used together in order to reduce the “performance degradation” of NMT models. We are left to speculate on how and when Facebook will follow up on its own research and what impact it will have on the quality of their AI translations.
After all, let’s not forget what is at stake for Facebook, which since 2017 has had all of its translations handled by an NMT system… ********* If you like this post we would really appreciate a 👏 or 👏 👏 or 👏 👏👏 Check out our Instagram for more. About Beluga Beluga helps fast-moving companies to translate their digital contents. With more than a decade of experience, professional linguists in all major markets and the latest translation technology in use, Beluga is a stable partner of many of the most thriving enterprises in the technology sector. The business goal: To help fast-growing companies offer their international audiences an excellent and engaging user experience.
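As a rough illustration of the copy-filtering heuristic described above (drop sentence pairs whose source and target share more than half of their words, which usually signals an untranslated copy), here is a small Python sketch; the tokenisation and threshold are simplifications, not FAIR's actual implementation.

# Toy filter for "copied" sentence pairs in a parallel corpus.
def word_overlap(source: str, target: str) -> float:
    """Fraction of shared words relative to the shorter sentence."""
    src, tgt = set(source.lower().split()), set(target.lower().split())
    if not src or not tgt:
        return 0.0
    return len(src & tgt) / min(len(src), len(tgt))

def filter_copies(pairs, threshold=0.5):
    """Keep only pairs whose source/target word overlap is at most `threshold`."""
    return [(s, t) for s, t in pairs if word_overlap(s, t) <= threshold]

pairs = [
    ("the cat sat on the mat", "die katze sass auf der matte"),  # genuine translation
    ("the cat sat on the mat", "the cat sat on the mat"),        # a copy, gets dropped
]
print(filter_copies(pairs))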
Facebook is analyzing the “uncertainty” in NMT models
0
facebook-is-analyzing-the-uncertainty-in-nmt-models-1c3b294feef0
2018-04-07
2018-04-07 14:38:21
https://medium.com/s/story/facebook-is-analyzing-the-uncertainty-in-nmt-models-1c3b294feef0
false
642
Weekly publication on topics such as translation, localization, NMT, Machine Learning and of course towels! :)
null
belugateam
null
Beluga-team
medium@belugalinguistics.com
beluga-team
LOCALIZATION,TRANSLATION,TRANSLATION SERVICES,MACHINE LEARNING,NEURAL NETWORKS
beluga_team
Translation
translation
Translation
5,701
Una Titz
null
ed9cc4ea89fa
unaivona
66
195
20,181,104
null
null
null
null
null
null
0
null
0
6bbaaa3aafe7
2018-01-19
2018-01-19 12:25:51
2017-10-11
2017-10-11 16:12:42
1
false
en
2018-01-22
2018-01-22 12:01:02
9
1c3e95f611f0
3.2
0
0
0
By Katie Smith
4
How Artificial Intelligence Is Transforming Retail By Katie Smith From categorization to standardization, artificial intelligence promises to upend retail’s present and future, starting with big data. There’s something the apparel retail industry doesn’t want to talk about. Consumers are spending less on clothing. It doesn’t take an analyst to spot that this is problematic for retailers. To make matters worse, rent prices are rising, product cycles are getting shorter and fast-fashion retailers — with their low prices and enviably speedy reactions — are eating up market share. The outcome is a record number of store closures this year and street upon street filled with those red discount banners. Really, retail can’t be blamed: So much change has been thrust upon this industry in the last 10 years. The way people shop has been entirely reinvented, the way products are purchased has been overhauled, demands around delivery have ramped up to “within the hour” and even the once revered format of seasonal runway shows, fashion’s Super Bowl, is being challenged. AI meets retail sales This is where artificial intelligence promises to upend retail’s present and future, starting with big data. While every retailer has their own internal data, there had previously never been a way to understand what was going on in the industry outside of their own sales. There was no reliable way to know exactly when a new denim trend kicked off, to view which styles sold best for a competitor or understand how to price newly arrived stock. [ Related: IoT in Retail Marketing: Three Ways to Engage Customers ] But with the advent of the Internet and ecommerce, there’s now the data, processing capabilities and technical expertise available to collect and analyze this information. Data scientists, now among the most in-demand professionals in retail, play a vital role in organizing and making use of the world’s apparel information. Data scientists help automate recommendations for subscription brand StitchFix, provide personalized clothing suggestions for The North Face’s customers and give retailers invaluable real-time updates when new products arrive, sell out, get discounted or make any movement at all, worldwide. Retailers can now ensure they have the right products for their target market, that new inventory is timed perfectly, and that items are priced ideally to sell. However, while large datasets inspire awe and huge budgets are dedicated to building them with a view to fixing all problems, the data is only valuable when it’s meaningful. That’s where machine learning comes in. Machine learning adds meaning Using machine learning, retailers can capitalize on market data to understand and anticipate consumer behaviors and trends. For example, British online fashion retailer ASOS uses machine learning to recommend similar products to shoppers. It also analyzes which items and sizes are returned most often, to improve the customer experience and reduce costs. The apparel sector is a visual and descriptive industry. Aspects such as texture, pattern, color and fit have a lot to do with why things are popular or not. A database not only needs to store information on products, but to make that data insightful, it needs to understand what those items represent. [ Related: Real-Time Payment Systems Start to Take Shape ] The first hurdle is categorization. How a retailer describes a product that consumers consider to be similar can vary wildly.
Data scientists need to teach our machines not only to continually build upon their understanding of the language of apparel, but to actually see what is within an image. The need for data scientists, deep learning and neural networks Software needs to view a product photograph, know which parts of the picture are the model, identify the background and differentiate it from the garment being retailed. This could mean separating a long-sleeved polo shirt from a short-sleeved polo shirt, isolating a belt worn over jeans, or knowing which items in the database are technical sportswear versus athleisure. [ Related: Data Scientists vs. Data Engineers and Data Analysts ] Standardizing the data in this way is transforming the industry: for the first time, retailers can run a direct comparison of their product assortment alongside every one of their competitors’ merchandise. They can address the latest consumer trends and capitalize on current best sellers. And they can launch a new product or enter new markets with unprecedented visibility. The next frontier of AI for data scientists is using deep learning and neural networks to the point of revealing what will happen next. AI is clearly being applied in new ways across the entire retail product lifecycle — from design to purchase. Astute retailers will continue to tap into the latest AI advancements to elicit these data points and insights to their advantage. Originally published at www.rtinsights.com on October 11, 2017.
How Artificial Intelligence Is Transforming Retail
0
how-artificial-intelligence-is-transforming-retail-1c3e95f611f0
2018-01-24
2018-01-24 15:02:58
https://medium.com/s/story/how-artificial-intelligence-is-transforming-retail-1c3e95f611f0
false
795
Features and news on real-time analytics, big data, the IoT, and artificial intelligence.
null
rtinsights
null
RTInsights
null
rtinsights
BIGDATA,INTERNET OF THINGS,DATA ANALYTICS,ARTIFICIAL INTELLIGENCE,PREDICTIVE ANALYTICS
RTInsights
Retail
retail
Retail
16,358
RTInsights Team
null
ef1c482f7eba
rtinsights
8
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-23
2018-04-23 17:14:29
2018-04-23
2018-04-23 19:50:25
22
false
pt
2018-04-23
2018-04-23 19:50:25
1
1c4098495b1f
4.012264
1
0
0
What are the factors and conditions under which some employees come into friction with the company? Can they be minimized? Is there any way to…
5
Exploratory analysis and a predictive model of employees who caused friction in the company. What is their profile? What are the factors and conditions under which some employees come into friction with the company? Can they be minimized? Is there any way to measure these factors? The answer is "YES", and studies based on data collected internally or from external sources are increasingly being applied in machine learning use cases and in identifying insights for decision-making. "When we use data to understand which behaviors in the workplace make people more efficient, happy, creative, leaders, followers, pioneers and experts, we are doing people analytics," explains Ben Waber, researcher at the MIT Media Lab and author of the book People Analytics: How Social Sensing Technology Will Transform Business and What It Tells Us about the Future of Work. People Analytics is the recognition that employees are a company's most valuable resource and that it is therefore necessary to measure in order to understand what makes them engaged, productive and happy in the workplace. The study below was carried out with Python (Jupyter Notebook). The XLSX file, IBM HR Analytics Employee Attrition & Performance, was taken from the Kaggle website. https://www.kaggle.com/pavansubhasht/ibm-hr-analytics-attrition-dataset Importing libraries and the XLSX file. Information about the imported file (1,470 rows and 33 columns). Viewing the first rows of the "rh" DataFrame. Translating some information into PT-BR to make it easier to understand. Renaming the columns of the "rh" DataFrame. Checking the first rows of the DataFrame after translating columns and attributes. Building charts to identify trends and make it possible to categorize the employee profile. Considering business travel against friction generated, the group that travels frequently proportionally had a higher friction rate. The Research and Development department showed a high friction rate compared with the Sales and Human Resources areas. As for education level, there was not much variance among those evaluated (from college through doctorate). Men account for the larger number of recorded conflicts in the company. Regarding marital status, singles are by far the biggest source of conflicts, accounting for 33% of all singles and 8% of the whole sample. There was no difference in the "Relationship satisfaction" index with respect to the number of conflicts in the company. Note: this is a complex factor to evaluate, as it is very subjective and the respondent's answer will not always reflect reality. A PIVOT was built with some dimensions for understanding and for comparisons by gender and marital status. Note: although the chart above made it clear that single men showed the largest number of conflicts, when evaluated by (female, married, by distance from home) the conflict rate is also high. A possible factor would be a young mother (around 37 years old) with a small child, for whom the distance would add to this increase in stress. Observations and comparisons between the "quarrelsome" and "calm" groups: Age: the age factor carried weight in generating conflicts. Presumably, a young adult (around 33 years old) is more prone to conflict than the "calm" group, whose average age is 38. Distance from home: 19% higher for the friction group.
Environment satisfaction and involvement: these indices varied only slightly between the groups (around 8%), but this measure is subjective and hard to assess. Job level: strong impact on friction. The lower the job level (the sample shows 30%), the higher the chance of friction. Monthly income follows the same reasoning and a similar rate. Total years worked and years at the company: apparently, employees with more time at the company and a longer career history tend to be more understanding and do not enter friction zones. Years with the same manager: longer relationships with bosses and managers show that employees feel more valued and closer to the company. Thus, constantly changing managers generates a high friction rate, perhaps because of the employee's new expectations, the loss of the history of informal arrangements built up over time, or the need to demonstrate to the new manager qualities already inherent to the role. After the exploratory data analysis, only the dimensions that differ for the friction-causing employees will be considered for the predictive model. The "WA_Fn-UseC_-HR-Employee-Attrition.xlsx" file is imported again. Dropping the columns that will not be used to build the machine learning model. Creating the Y variable, which will be our TARGET (friction YES or NO). Then dropping the "Attrition" column from the "ml" DataFrame and creating a new X variable holding the independent variables. Importing all the libraries needed for preprocessing and prediction. Transforming the categorical data into numeric data (for example: Single = 0, Married = 1, Divorced = 2), so that the algorithms can work with all the variables. Splitting the dataset into training and test sets with the "train_test_split" tool. Note: 20% of the data was used for testing. Creating the models and training them on the data. The Logistic Regression algorithm gave the best result among them, with the highest accuracy value; this can be confirmed by comparing the algorithms with boxplots. With the model built, new predictions can be made and applied to the problem (in this case, friction in the company). Companies that use data for decision-making are: 3 times more likely to execute the decisions they planned; 5 times more likely to make faster decisions; and, because of that, twice as likely to be in the top quartile of financial performance in their markets. Tips for implementing data analysis in a company: 1. Adopt a tool, but act democratically. 2. Involve the Marketing team in the implementation. 3. Try to start with whoever needs it most.
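Since the notebook screenshots are not reproduced here, the following is a condensed sketch of the modelling steps described above. It assumes the Kaggle IBM HR Analytics file with its "Attrition" (Yes/No) column, and it uses one-hot encoding rather than the article's integer encoding of categories.

# Rough sketch of the described pipeline: encode categoricals, hold out 20%,
# fit logistic regression, report test accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_excel("WA_Fn-UseC_-HR-Employee-Attrition.xlsx")

y = (df["Attrition"] == "Yes").astype(int)      # target: friction yes/no
X = df.drop(columns=["Attrition"])
X = pd.get_dummies(X, drop_first=True)          # categorical -> numeric

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)       # 20% held out for testing

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

In the article, several algorithms were trained and compared with boxplots; the sketch keeps only the logistic regression step that came out on top.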
Análise exploratória e modelo preditivo sobre funcionários que causaram algum atrito na empresa.
8
análise-exploratória-e-modelo-preditivo-sobre-funcionários-que-causaram-algum-atrito-na-empresa-1c4098495b1f
2018-04-23
2018-04-23 20:45:54
https://medium.com/s/story/análise-exploratória-e-modelo-preditivo-sobre-funcionários-que-causaram-algum-atrito-na-empresa-1c4098495b1f
false
573
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
alegeorgelustosa
Economist and entrepreneur, passionate about marketing, founder of the Don George brand!
537267d0dbd1
alegeorgelustosa
12
48
20,181,104
null
null
null
null
null
null
0
null
0
f67f1a1ca93d
2017-11-06
2017-11-06 13:39:16
2017-11-06
2017-11-06 13:41:14
1
false
en
2017-12-15
2017-12-15 13:55:02
2
1c41265432ba
1.011321
0
0
0
Donald P. Green is a political scientist and quantitative methodologist at Columbia University. Prior to joining the Columbia faculty in…
5
JADS CLLQM Jan 11th: Spillover Effects in Experimentation Donald P. Green is a political scientist and quantitative methodologist at Columbia University. Prior to joining the Columbia faculty in 2011, he taught at Yale University, where he directed the Institution for Social and Policy Studies from 1996 to 2011. Professor Green’s primary research interests lie in the development of statistical methods for field experiments and their application to American voting behavior. About the CLLQM talk Non-JADS staff can register here (limited seats). Bogotá intensified state presence to make high-crime streets safer. We show that spillovers outweighed direct effects on security. We randomly assigned 1,919 “hot spot” streets to eight months of doubled policing, increased municipal services, both, or neither. Spillovers in dense networks cause “fuzzy clustering,” and we show valid hypothesis testing requires randomization inference. State presence improved security on hot spots. But data from all streets suggest that intensive policing pushed property crime around the corner, with ambiguous impacts on violent crime. Municipal services had positive but imprecise spillovers. These results contrast with prior studies concluding policing has positive spillovers. See the paper here: Pushing Crime Around the Corner? Estimating Experimental Impacts of Large-Scale Security… (papers.ssrn.com)
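To make the phrase "valid hypothesis testing requires randomization inference" concrete, here is a toy Python sketch of the idea on simulated data (nothing here comes from the Bogotá study itself): re-randomize the treatment labels many times and locate the observed difference in means within that reference distribution.

# Toy randomization-inference (permutation) test on simulated data.
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.normal(size=200)                      # simulated street-level outcomes
treated = rng.permutation([1] * 100 + [0] * 100)     # simulated random assignment
outcomes[treated == 1] += 0.3                        # simulated treatment effect

def diff_in_means(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

observed = diff_in_means(outcomes, treated)
null = [diff_in_means(outcomes, rng.permutation(treated)) for _ in range(5000)]
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed diff = {observed:.3f}, randomization p-value = {p_value:.4f}")

The real study's complication is that spillovers make units interfere with one another ("fuzzy clustering"), so the re-randomization has to respect the actual assignment mechanism rather than a simple shuffle, but the logic of building the null distribution by re-randomizing is the same.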
JADS CLLQM Jan 11th: Spillover Effects in Experimentation
0
jads-cllqm-spillover-effects-in-experimentation-1c41265432ba
2018-05-01
2018-05-01 14:40:14
https://medium.com/s/story/jads-cllqm-spillover-effects-in-experimentation-1c41265432ba
false
215
Data science is an emerging discipline. Exciting times indeed! At the Jheronimus Academy of Data Science we would like to share our projects, ideas and how we are helping shape this new discipline.
null
JADataScience
null
Data Science Backchannel
a.haring@jads.nl
data-science-backchannel
DATA SCIENCE,AI,BLOCKCHAIN,MACHINE LEARNING,BIG DATA
jadatascience
Cllqm
cllqm
Cllqm
8
Arjan Haring
Scientific Advisor @jadatascience.
ae4a7e23d05
happybandits
513
520
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-20
2018-06-20 12:33:36
2018-06-20
2018-06-20 12:36:05
1
false
en
2018-06-20
2018-06-20 12:36:05
0
1c4199ca9ea7
1.898113
2
0
0
The war of talent has just taken that another leap where managing and measuring candidate experience hasn’t just become one of the key…
3
Candidate Experience “New Metric?” The war for talent has just taken another leap: managing and measuring candidate experience has not only become one of the key metrics but also a reflection of the brand. While I was doing this research, I was awestruck to realise this behaviour, which made me take a step back and think… • Are we really moving? • Are we really building recruiting processes and experiences for tomorrow? • Why is it that today’s recruiting is still heavily biased and dependent on primitive approaches? In this era of start-ups and AI-based services that map the customer journey, why does a potential business (early start-up or mature business) or a talented candidate have to go through an experience which is not a delight for either of them? We have solutions in the form of the ATS, improved UX, objective assessments, heavy spend on employer brands and… that high of my recruiting process being the best compared with all others, even though internally all potential customers (hiring managers and candidates) make a mockery of the process or the experience. Apart from a few insightful conversations I have had with a few thought leaders, the rest of the audience is still on the path of figuring it out, yet comfortable. You will be surprised to know that, for reference, an organization with a headcount of, say, 150, growing at a rate of 30% year on year, will hire for about 70+ roles in a year considering all the different movements of talent. Assume that, irrespective of the presence or brand of the organization, it attracts about 1,000 expressions of applicant interest in a month. A recent study says that of the net applicant traffic an organization receives, only 2% converts to a potential hire. The remaining 98% of the applicant traffic is an opportunity loss for the brand, primarily for the following reasons: Of the 98%, roughly 50% were actually not the right fit for the roles. About 20% were active followers of the brand but didn’t know what it takes to be there. About 10% applied for the same role through multiple channels but didn’t get any response. About 7-8% had the right competencies, but the applicant traffic was too high for a manual eye to identify them. How about if a candidate could be tapped, prepped or positioned right at the first stage, when he or she applies for a role change? What questions come to a candidate’s mind when he or she decides that this is the brand they want to work for? Thought 1: Can social broadcast of employee benefits be one of the channels? Hmmm… some rough thoughts.
Candidate Experience “New Metric ?”
2
candidate-experience-new-metric-1c4199ca9ea7
2018-06-20
2018-06-20 12:36:06
https://medium.com/s/story/candidate-experience-new-metric-1c4199ca9ea7
false
450
null
null
null
null
null
null
null
null
null
Recruiting
recruiting
Recruiting
15,454
Gaurav Gaur
Technology Enthusiast with Recruiter @ Heart. Working on to make my own story as i begin my second inning of untested dope called #entrepreneurship.
2ca54b6e053b
gauravgaur_45532
1
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-31
2018-01-31 20:54:50
2018-01-31
2018-01-31 20:55:53
0
false
en
2018-01-31
2018-01-31 20:55:53
0
1c4231d6bc9
0.513208
0
0
0
If you are used to making simulations and forecast on the CWE, the DE-AT splitting has several impacts on your tools.
1
DE-AT power market splitting — The impacts on the “Flow Bases model“ If you are used to running simulations and forecasts on the CWE region, the DE-AT splitting has several impacts on your tools. One of these is a new structure of the regional markets within the CWE, which in turn implies a new model of the transmission network for the flow-based coupling. From a practical point of view, this translates into a new structure of the PTDF matrix: a new structure for the files published by JAO, a new set of linear inequalities, and a new set of coefficients and RAM values. The last item could affect you if you use time series analysis or neural networks for your simulations; such tools could give wrong answers to your questions if they were fitted on the old market structure.
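To make the impact concrete, here is a minimal, hypothetical sketch of a sanity check one might add to a simulation pipeline; the file name and zone column names are assumptions for illustration, not the actual JAO publication format:

import pandas as pd

# Hypothetical PTDF/RAM file; in reality this would be the file published by JAO.
ptdf = pd.read_csv("ptdf_publication.csv")

# After the DE-AT splitting, Germany and Austria appear as separate zones, so the
# zone columns of the PTDF matrix (and hence the linear inequalities and RAM values)
# change shape. A tool trained on the old structure should refuse to run on the new one.
expected_zones = {"DE", "AT", "FR", "BE", "NL"}  # illustrative CWE zone set
missing = expected_zones - set(ptdf.columns)
if missing:
    raise ValueError(f"PTDF file is missing expected zone columns: {missing}")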
DE-AT power market splitting — The impacts on the “Flow Bases model“
0
de-at-power-market-splitting-the-impacts-on-the-flow-bases-model-1c4231d6bc9
2018-01-31
2018-01-31 20:55:54
https://medium.com/s/story/de-at-power-market-splitting-the-impacts-on-the-flow-bases-model-1c4231d6bc9
false
136
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Luigi Poderico
null
4cadf9c75b25
poderico
8
65
20,181,104
null
null
null
null
null
null
0
library(tidyverse) # mostly for ggplot and readr
library(ggalt)     # for geom_dumbbell
library(ggrepel)   # for pretty labels that don't overlap
# library(extrafont) # for an error I receive when trying to italicize words
# font_import(pattern="[A/a]rial", prompt=FALSE) # same
library(mediumr)   # to publish an R Markdown file to medium

books.2017 <- read_csv('books_2017.csv')
books.2017$`Date Started` <- as.Date(books.2017$`Date Started`, "%m/%d/%y")
books.2017$`Date Finished` <- as.Date(books.2017$`Date Finished`, "%m/%d/%y")
# We can do arithmetic with our dates using `difftime`, so `Reading Rate` is just pages/days
books.2017$`Reading Rate` <- books.2017$Pages/(as.numeric(difftime(books.2017$`Date Finished`, books.2017$`Date Started`)))

glimpse(books.2017)
## Observations: 10
## Variables: 6
## $ Title <chr> "italic('Time Travel')", "italic('The Road')",...
## $ Author <chr> "'James Gleick'", "'Cormac McCarthy'", "'Kim S...
## $ Pages <int> 353, 324, 624, 334, 768, 577, 246, 434, 322, 332
## $ `Date Started` <date> 2017-01-01, 2017-01-28, 2017-03-17, 2017-06-0...
## $ `Date Finished` <date> 2017-01-27, 2017-03-10, 2017-06-04, 2017-06-1...
## $ `Reading Rate` <dbl> 13.576923, 7.902439, 7.898734, 33.400000, 18.2...

ggplot(books.2017, aes(x=`Date Started`, xend=`Date Finished`,
                       y=reorder(Title,`Date Started`), color=`Reading Rate`)) +
  geom_dumbbell(aes(size=`Reading Rate`), size_x=0, size_xend=0) +
  scale_y_discrete("", breaks=NULL) +
  scale_x_date(breaks=seq(as.Date("2017-01-01"), as.Date("2017-12-01"), by = "1 month"),
               date_labels ="%B") +
  geom_text_repel(size=3.5,
                  ## use labels without formatting
                  # aes(label=paste(Title,'by',Author)),
                  # comment out the next two lines to use unformatted text
                  aes(label=paste(Title,'by',Author,sep='~')), parse=TRUE,
                  nudge_x=0.3, nudge_y=0.5, direction='y', hjust=1, vjust=0,
                  segment.size=0.2, color='Black') +
  labs(x = 'Date Started', y = '',
       title = 'Books Read During 2017',
       subtitle = 'Width and color of bars represent reading rate') +
  theme(legend.position=c(0.93,0.3), legend.background=element_blank(),
        legend.text=element_text(size=8)) +
  scale_color_continuous("Reading Rate\n(pages/day)", trans = "reverse") +
  guides(size=FALSE)
10
null
2017-12-28
2017-12-28 17:31:17
2017-12-28
2017-12-28 17:35:55
1
false
en
2017-12-28
2017-12-28 17:35:55
8
1c448aa75a82
4.090566
3
0
0
null
5
Books I Read in 2017 The books of 2017 For this year’s book list, I wanted to kick it up a notch and add some data visualization. If you don’t feel like reading the details below, here is the pretty visualization of this year’s reading. (If you’d rather just read the details, feel free to skip to the Technical aspects section.) I really enjoy publishing a year-end reading summary. The inspiration for this idea goes 100% to Robin. I encourage you to read her (much better) book lists for 2016, 2015, and going all the way back to 2009. My own book lists for 2016 and 2015 are on my old blog, but I’m doing everything in my power to move away from WordPress. As you can see, my reading list this year is puny. But hopefully I made up for it with a pretty graph. Some notes from this year: I do not know why After On took so long to read, but it did. It was alright, but not really worth the time. I am becoming a big fan of John Scalzi. Shaina’s dad got me a bunch of sci-fi ebooks for the holidays (including a lot of Scalzi), and I’m really looking forward to reading them next year. I was pretty disappointed with two of my favorite authors. Both Artemis by Andy Weir and The Chronicles of DODO by Neal Stephenson were alright, but not nearly as amazing as their past works. I can’t quit Lee Child’s Jack Reacher series. He’s still pumping out engaging page-turners, and even though his books are like the potato chip of the brain (easy to eat, but not filling), I’ll probably still keep reading these whenever I’m stuck in an airport. One of my promises to myself (not a resolution) for 2018 is to only read for pleasure on my commute. It’s 45 minutes of downtime, and I think reading for pleasure will make the transition from work/school to home much better. That being said, I’m sure I’ll still occasionally read a last-minute journal article or work-related item on the commute. Technical aspects If you’re curious how I created the graph above, I also wanted to provide the full code with explanations. If you have any suggestions for how the code can be improved, please leave a comment below. First, load the libraries. Then, we need to load the data and add some formatting. The only way I could get the labels to be partially italicized was to add that to the original data, and then use parse=TRUE below. The data frame looks like this, using glimpse from dplyr: Now on to the plotting! The initial call to ggplot provides a frame for everything below. Because I'm using geom_dumbbell, I can specify where each line begins and ends with x= and xend=, respectively. I could probably use something like geom_line from ggplot, but it's fun to experiment with new packages. This is basically a dumbbell without the dumbbells, and just the bar in the middle. Yes, I realize that both size below and color above are scaled by Reading Rate. This is redundant, but I felt that the redundancy more effectively communicated how slowly or quickly I read a book. Format the axes of the chart. Get rid of the y-axis, and use date formatting for the x-axis. When I tried using date_breaks it also included January 2018, which was annoying. ggrepel is my favorite new(ish) R package. It ensures that labels do not overlap, which saves a ton of manual work. All of the nudging and adjusting is to finagle the exact positioning of all of the labels. I don’t know of any magic bullet for these. The one thing I had a lot of trouble with was formatting the italics in the labels.
Every few runs, it would throw an error about polygon shapes, and I’d have to restart R. If your data isn’t formatted, you can use something similar to the first aes statement. Otherwise, you need the second aes statement plus parse=TRUE. Next, we want to add some labels to our chart. There isn’t much going on here, and y='' is probably redundant to what's in scale_y_discrete above. Finally, set the options for the legend so it’s transparent and overlaps the chart area. Setting guides(size=FALSE) removes the legend corresponding to the size set above, so we only have one legend for color. And that’s it! If anything didn’t make sense, please let me know in the comments. Also, if you’re at Northwestern, I’ll be giving a tutorial on ggplot at January's R User Group meetup.
Books I Read in 2017
5
books-i-read-in-2017-1c448aa75a82
2018-04-01
2018-04-01 12:30:59
https://medium.com/s/story/books-i-read-in-2017-1c448aa75a82
false
1,031
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Adam Goodkind
PhD student at Northwestern, studying Linguistics and Cognitive Science. Interested in #NLProc[essing] esp. information theory and cognition. Pseudoiterative
9cc38e44b1e2
adamgreatkind
114
120
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-28
2018-04-28 16:05:19
2018-04-28
2018-04-28 16:31:33
0
false
en
2018-04-30
2018-04-30 17:24:24
1
1c448fe2d6da
1.969811
0
0
0
In a podcast episode titled “It’s Complicated: Our Evolving Relationship with AI Personal Assistants” Daniela Hernandez, Digital Science…
5
Comments on Tuesday, April 24, 2018 Episode Wall Street Journal Podcast: “The Future of Everything” In a podcast episode titled “It’s Complicated: Our Evolving Relationship with AI Personal Assistants” Daniela Hernandez, Digital Science Editor for the Wall Street Journal presents some history about chatbots along with a series of short interviews with leading researchers from private business (Microsoft) and academia (University of Washington, Carnegie Mellon University, and Northeastern University) on the topic of today’s personal digital assistants and, specifically, the challenge presented by conversational language, as an impediment to further progress. This podcast can be obtained from iTunes <https://itunes.apple.com/us/podcast/its-complicated-our-evolving-relationship-ai-assistants/id1234320525?i=1000409786084&mt=2> A couple of points struck me about this podcast: The first has to do with what I would call tacit acceptance of the “more data/more processing power” argument about how to improve AI results. Ms Hernandez appears completely convinced this argument is a winner. As applied to the question of how best to bring Deep Learning language solutions over the conversational language hurdle, the idea is to provide digital personal assistants with “more data” about “me”, the user/owner. Researchers got to this point when someone observed neural networks (another name for deep learning) do better when they have more data. This genre of AI would also call for more processing power to keep the improved results via neural networks coming. But one of the academics interviewed in this podcast, Woodrow Hartzog of Northeastern is quoted expressing what sounds like some skepticism on this point: “Everyone always say, oh well, if we only had more data, if we had better data then these assistants would get better … and some of it will be useful and some of it won’t.” I’m skeptical about the real usefulness of just collecting more and more data. Having followed some of the debate around the notion of the inevitability of neural networks as THE tool to use to develop solutions for Natural Language Processing and Understanding, and understanding the rationale of just feeding these programs more data to get them to work better, I have come out thinking tools other than neural networks/deep learning will likely be required to really overcome the conversational language hurdle. The second point has to do with the choice of researchers for this podcast. Noticeably absent from the list is any expert from the field of linguistics. At several points during the podcast researchers without any announced expertise in the field of linguistics presented challenges they faced. These challenges may not have been so formidable had a linguist been on the team. Sadly, two of the researchers interviewed are on faculty at two of the most prestigious universities here in the United States working on voice computing technology: the University of Washington and Carnegie Mellon. Why either of these institutions would put together research teams ostensibly without a seat for a linguist gives me cause for concern. A recommendation I make to the editors and producers of this podcast: Try presenting the topic from the perspective of opposing views. Not everyone working on this technology is sold on current tools maintaining their status over the long haul.
Comments on Tuesday, April 24, 2018 Episode Wall Street Journal Podcast: “The Future of Everything”
0
comments-on-tuesday-april-24-2018-episode-wall-street-journal-podcast-the-future-of-everything-1c448fe2d6da
2018-04-30
2018-04-30 17:24:26
https://medium.com/s/story/comments-on-tuesday-april-24-2018-episode-wall-street-journal-podcast-the-future-of-everything-1c448fe2d6da
false
522
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ira Michael Blonder
32 years non stop experience marketing & selling IT products to enterprise business. MA in English. Technical Writer. MARCOM & PR writer. Product Mktng.
3401437a6564
mikethebbop
2,637
99
20,181,104
null
null
null
null
null
null
0
null
0
d01820283d6d
2018-08-27
2018-08-27 10:03:02
2018-08-27
2018-08-27 10:03:02
0
false
en
2018-08-27
2018-08-27 10:03:02
3
1c46ce0c574
0.837736
0
0
0
null
5
Last week the Mirai botnet attack on Dyn spawned thousands of articles that exposed the vulnerability of millions of connected devices. The Internet of Things has never faced this big a challenge before. Isolating the Wi-Fi network, changing passwords very frequently, using dynamic IP addresses, and creating a private VPN are a few of the immediate solutions that could be offered. Customer confidence was already very low, and with the recent attack it has reached the pits. To stave off further attacks from crashing the entire IoT industry, its leaders need to come together to create the first cohesive IoT protocol standardisation, which I had mentioned in my second article even before this attack took place. The IoT is a marriage between automation and connectivity; we now need to introduce one additional member to this marriage, which is Artificial Intelligence (AI). Today, AI is on the fringe of IoT, but once this threesome is solemnised, the progeny would have healthy DNA with its own counter-effective anti-viral system (antibodies). I know this may sound like a sci-fi kind of answer, but this could be the holistic approach to the future in every space of IoT, such as healthcare, transportation, smart cities, etc. Read More> http://bit.ly/2P49mZm
IoT Security Faux Pas
0
iot-security-faux-pas-1c46ce0c574
2018-09-05
2018-09-05 16:37:34
https://medium.com/s/story/iot-security-faux-pas-1c46ce0c574
false
222
Best place to learn about AR& VR. We share the latest AR/VR News, Info, Tools, Tutorials, ARkit, ARcore, & More.
arvrjourney.com
ARVRJourney
null
ARVR Journey: Augmented & Virtual Reality
team@chatbotslife.com
ar-vr-journey
AUGMENTED REALITY,VIRTUAL REALITY,AR,VR,AR VR
null
Anti Viral System
anti-viral-system
Anti Viral System
0
techutzpah
Technology Audacity
a94b3e321f15
techutzpah
19
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-07
2018-09-07 15:06:52
2018-09-09
2018-09-09 10:13:30
3
false
en
2018-09-19
2018-09-19 06:45:27
0
1c46e0c985c7
3.142453
1
0
0
Life is strange. The universe is vast and human space is small. Each of us is a universe of atoms. We somehow find meaning in shifting…
5
Nature and Nurture Life is strange. The universe is vast and human space is small. Each of us is a universe of atoms. We somehow find meaning in shifting sands. Variation is everywhere and the only mechanism we have to make sense of it is primed for narrative coherence. Turning a corner and seeing a tiger is ok if one is able to escape the tiger and remember the contours of the corner and avoid making the turn tiger-wards in future. The happy consequence of this evolutionary story telling bounty is that we are able to find purpose, to share purpose and in so doing to care for and inspire each other. The less happy consequence is that we are plagued by things happening to us or because of us rather than things happening and us simply being around, or not. The narrative tail wags the rational dog. We are living through an industrial revolution in making sense of the physical world. Machines are helping us to understand and manipulate our materials by learning structure from experience. I have spent most of this year looking at data from patients and writing papers on how looking at data from patients with computers might be a good idea. What is interesting to me about this machine learning is that it is the first time that I am aware of that there is a sense making machinery that can be pointed at the universe that is not derived from an evolutionary root. Why sense making works for machines is no more clear than it is for humans. In analyzing images, or voices or medical record data a deep net is abstractly inferring deeper truths. These deeper truths are the laws of physics and the truth that makes understanding complex systems possible — that the universe and creatures within the universe are compiled hierarchically — smaller units assembled into more complex structures in a causal sequence. For the past few months, I have been looking at the trails patients create as they live their lives and interact with the byzantine health system. From looking at divergence in healthcare outcomes between groups of patients emerges the toxic effects of disparities — of violence in the fabric of society. From looking at divergence in uptake or success with care programs emerges the effect of relationships, both familial and clinical, on the abilities of individuals to modify their trajectories. And with tools less constrained by parametric assumptions, one is able to uncover ensemble interactions between these factors even if it is not possible to necessarily understand them cognitively. This is no different from neural nets deriving the laws of mechanics from video images and using those derived laws to generate future frames “independently”. When looking at the physical world we infer the laws of nature, when looking at the human world we infer the laws of meaning in life. Night falls fast. One of us, recently committed suicide and we have all been abruptly thrust into the role of biographer — trying to understand a life through the lens of a death. Peeling back the layers it seems like the bell had been ringing for such a long time that it had become impossible to hear the individual chimes over their reverberations. Genetic loading, childhood trauma, personality traits and social factors put an awful payload in the mail and then seemingly chance events precipitated delivery. I hope that in the future these things will be understood. I hope that the structural and deterministic influence of emotional trauma on a background of a genetic predisposition to interpret trauma a certain way is understood. 
I hope the way this creates the preconditions for cognitive phenomena that, over time, decimate higher social functioning, the functioning that is the seat of the meaning on which we all depend, is understood. I hope these interactions over time, spanning the internal universe of atoms and the external interactions of the atom in the universe, will be understood. And maybe even changed. But they can’t be changed now. Now, we need to find a narrative in the dark.
Nature and Nurture
3
nature-and-nurture-1c46e0c985c7
2018-09-19
2018-09-19 06:45:27
https://medium.com/s/story/nature-and-nurture-1c46e0c985c7
false
687
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Trishan Panch
Co-Founder, Chief Medical Officer and Board Director @Wellframe. Innovation Advisory Board @Boston Children's Hospital. Lecturer@MIT. Instructor@HSPH.
6870f02bc88
trishanpanch
88
99
20,181,104
null
null
null
null
null
null
0
null
0
138d4338e4af
2018-08-02
2018-08-02 14:12:05
2018-08-02
2018-08-02 14:14:30
4
false
en
2018-08-02
2018-08-02 14:19:17
17
1c4708637343
5.983019
4
0
0
null
5
Future considerations of Online Marketplaces — Understanding Tendencies and Technologies. Part 2. In our previous article, we discussed technologies such as blockchain and virtual/augmented/mixed reality in the context of the development of online marketplaces. Let’s look at some other relevant trends that have the potential to influence the industry. AI and Machine learning Machine learning has made significant strides recently. Fifty years ago it was a matter of science fiction and was hardly taken seriously; today it is an integral part of our lives. From the first modelling of a human neural network with electric circuits to accurate cancer diagnosis, and from deep neural network research to Google Brain, the evolution of this technology boggles the mind. It has now established a mainstream presence in many spheres of our lives. Machine learning is considered a subset of artificial intelligence (AI) that aims to train machines to understand their environments and synthesize the information they gain, thus progressively improving their performance on specific tasks. How can this technology be applied to online marketplaces? First, it can help P2P marketplaces prevent fraud: machine learning enables a system that recognizes scam patterns and takes the necessary measures. Second, ML helps you predict what your users will like, by enabling personalized search and recommendations. For instance, Etsy bases its search results and recommendations on the preferences of a specific user, of similar customers, and of shops with profiles similar to the ones the user has shopped at. Mike Kirkwood, Founder of Eek, knows the value of such awareness, reinforced by trust: “In my opinion, in future, marketplaces will be more connected end-to-end — where we will know a lot more about what individual people have, need, and want. And the supply chains will make it happen with a lot less human touches. With Eek, we’re getting very close to this reality. Since we work with a registry of products through the supply chain, we started doing auto-reorders and fulfillment. The key is trust. If the buyers and sellers trust us, this all works. If not, it’s just another way to try to force purchasing decisions.” ML also helps with pricing: in particular, how to set prices for unique goods and services and how to adjust them to market changes. For example, Airbnb offers lodging in the homes of near strangers. How can a person determine the cost of staying in someone’s room for a night? The company’s Product Lead Dan Hill said that at first this problem was a burning one for their business, with a negative impact on it. The situation changed for the better after the company’s engineers implemented ML on their site: an analytical model that predicted the probability of a listing being booked at a particular price within a particular time. Based on these predictions, the system gives hosts relevant price suggestions that boost their chances of getting guests and help them earn as much as possible. AI and Deep Learning The broad concept of AI is best understood in the context of narrow AI applications that are able to solve one particular task. These can be exemplified by AlphaGo from Google DeepMind, which made a splash in 2016 after it beat the professional Go player Lee Sedol.
Deep learning is a subset of machine learning that solves real-world tasks using neural networks that may imitate decision-making processes intrinsic to humans. Deep learning is a very complex technology that is difficult to implement, since it requires enormous funding and data sets. For instance, in order to “teach” a machine to recognize a human’s face or an animal’s appearance, it is necessary to adjust an immense number of parameters and provide the machine with millions of images of the required creature. DL can be used for various business purposes, such as text and voice search, handwriting recognition for pen-based computing, spam detection, fraud prevention, speech recognition, translation, etc. Marketplaces that decide to adopt the technology will be able to use it for targeting and personalization, pricing optimization, search ranking, better recommendations, and improved customer support. The latter function in particular is already being implemented successfully through various chatbots. These virtual assistants are able to use human language to communicate with a customer, identifying and resolving the customer’s issue. They work 24/7 without lunch breaks and don’t ask to be paid a salary, so they can substitute for human shop assistants to some extent. The beauty retailer Sephora offers its site visitors the opportunity to talk to a bot that asks a few questions and finds the products that best suit their needs. Amazon has its own digital assistant called Alexa; running on the Echo smart speaker, it reacts to users’ voices and can offer products from Amazon. Okey Menakaya, Founder of MoveSavers, says: “I think within the next 3–5 years, marketplaces’ trends will continue to accelerate. Especially as the gig economy is expanding into almost every sector. The reach of marketplaces is endless. AI will be a game-changing event in most MPs. There are huge amounts of inefficiencies and “time-wasting” aspects, and the use of AI can solve these issues, I believe. For example, some courier marketplaces depend on customer conversion to plan their deliveries. Predicting which customer is likely to convert and how soon a customer can close a deal can be a game changer.” UX With competition in e-commerce growing ever fiercer, today’s online marketplaces need to keep providing an excellent user experience (UX) to their clients. Some think UX is about design only. However, besides an attractive visual appearance (which is, surely, very important), a good UX implies quick interaction and page loading, easy navigation, mobile friendliness, etc. Almost 60% of shoppers abandon their carts and go to a competitor if their user experience turns out to be unsatisfactory. Keep that in mind and be scrupulous about the presentation of your website, since competition is higher than ever before. Andrew McConnell, CEO of Rented.com, shares his thoughts on this competition: “The future of marketplaces will be one of constant flux. The reason is that the rewards are simply so outsized for the space. Whether you label them marketplaces or call them platforms, these businesses tend more than any others do towards the “winner takes all” principle. Given technology and globalization are also turning any industry into a globally sized one in relatively short order. This means that ambitious entrepreneurs, as well as deep pockets, will constantly pursue new and innovative businesses and models in the space. 
Fortunes will be made and lost, and today’s category killers could often see themselves unseated by the next upstarts who come along. It will be anything but boring, that’s for sure.” Getting back to the topic of superb UX and web design, we cannot help but mention Graze, a site where one can buy healthy snacks. A visually “tasty” appearance, smooth interaction, and incredibly easy and pleasing navigation make users want to come back repeatedly. Visit it to see what a good user experience can look like. Romain de Dion, Founder of Novatopo, says: “Beyond chatbots, AI etc. users will be primarily looking for quality of UX, data, and vetting of supply. When they click “Buy”, they want exactly what they have in mind to happen. New techs are a necessary condition to that but not a sufficient one. Especially for verticals, which are fragmented and heterogeneous such as sports and leisure where we operate.” Conclusion The majority of entrepreneurs agree that the technologies discussed here make a substantial contribution to the development of online marketplaces. AI, ML, and blockchain have already been implemented in some noteworthy applications and websites. VR/AR/MR technologies appear to be less acknowledged by today’s startup founders, but we will see whether the situation changes in a few years. On that note, our grand article is coming to its end. We heard the opinions of entrepreneurs who work in various spheres of e-commerce, and now we are looking forward to hearing YOUR opinions! Do you find the abovementioned techs worthy of your attention? Which of them are you already using or planning to use?
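As an aside on the Airbnb-style dynamic pricing described above, here is a minimal, hypothetical sketch of that kind of booking-probability model; the data and the single price feature are invented for illustration and this is not Airbnb's actual system:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: nightly price offered and whether the listing was booked.
prices = np.array([[40], [55], [60], [75], [90], [110], [130], [150]])
booked = np.array([1, 1, 1, 1, 0, 1, 0, 0])

# Model the probability of a booking as a function of price.
model = LogisticRegression().fit(prices, booked)

# Suggest the price that maximizes expected revenue = price * P(booked).
candidates = np.arange(30, 200, 5).reshape(-1, 1)
p_booked = model.predict_proba(candidates)[:, 1]
expected_revenue = candidates.ravel() * p_booked
best_price = candidates.ravel()[np.argmax(expected_revenue)]
print(best_price)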
Future considerations of Online Marketplaces - Understanding Tendencies and Technologies. Part 2.
74
future-considerations-of-online-marketplaces-understanding-tendencies-and-technologies-part-2-1c4708637343
2018-08-02
2018-08-02 14:19:17
https://medium.com/s/story/future-considerations-of-online-marketplaces-understanding-tendencies-and-technologies-part-2-1c4708637343
false
1,400
Moving your business forward http://globalluxsoft.com/
null
globalluxsoft
null
Globalluxsoft
info@globalluxsoft.com
globalluxsoft
STARTUP,SOFTWARE DEVELOPMENT,RUBY ON RAILS,ANGULAR,REMOTE TEAM
globalluxsoft
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Globalluxsoft
Don’t develop anything less than the best! http://globalluxsoft.com/
9bf59c01d316
globalluxsoft
52
50
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-01
2018-02-01 18:25:48
2018-02-19
2018-02-19 19:45:32
4
false
en
2018-02-19
2018-02-19 19:47:11
7
1c47f7949996
3.5
1
0
0
It’s hardly a secret that consumers are demanding more from chatbots, they want faster, more accurate responses, and an experience that…
5
Beyond Chatbots, Can Voice Deliver? It’s hardly a secret that consumers are demanding more from chatbots, they want faster, more accurate responses, and an experience that integrates naturally into their lives. Which is why many consumer segments are making the jump from chatbots to Voice User Interfaces (VUIs), a shift that will impact any industry that has a digital relationship with their customers. Finance, retail, automotive, and healthcare will all face increasing demand to implement a VUI offering into their services, and if they don’t, they will lose customers to a competitor who does. Voice is Now If the new 2018 report on Conversational Commerce from CapGemini is any indication, consumers are already embracing voice assistants more rapidly than any other emerging technology on the market today. According to CapGemini, “A majority of consumers (51%) are already users of voice assistants, and interacting with voice assistants” VUIs have embedded themselves into the very framework of consumer’s lives, and the demand to interact with technology in this new way will only increase as conversational interfaces become more readily available. Why do Consumers Prefer Voice? Why would a consumer prefer a VUI to a GUI (graphical user interface)? According to CapGemini, Convenience (52%) and ability to do things hands free (48%) are the two biggest reasons for preferring voice assistants over mobile apps/websites. In fact, according to research conducted at Stanford University, humans can perform tasks with voice 3x as quickly as they can by typing on a smartphone. “The fact that voice assistants are faster (49%) and more convenient (47%) are the major reasons for preferring them” The significant time savings of VUIs makes a tangible impact in the lives of consumers, relieving them from the burden of clicking tirelessly away at a screen. VUIs present a freedom that hasn’t existed since the dawn of the smartphone, the freedom to simply speak. Voice is Here to Stay According to CapGemini, “Voice assistants will become a dominant mode of consumer interaction in three years” This statistic often comes as a surprise, because the experience of interacting with voice assistants can often be frustrating, or even fall flat of expectations. But this is changing, rapidly. Chatbots are dying off, giving way to truly remarkable conversational interfaces. VUIs are already faster and more efficient, but when the conversational AI behind a VUI is advanced, the experience is seamless, delightful, and unlike anything consumers have seen before. “The promise of a Voice UI frees us from manual touch, it’s a third hand when two are busy” and when we spend 5 hours per day on our phones, that extra hand is more than just an assistant, it’s a life line. Implementing Advanced Conversational AI in VUIs What does a good VUI need to have in order to achieve and maintain adoption? According to CapGemini, “82% of users say fast and accurate replies are the most compelling feature that influences the use of voice assistants” This may seem straightforward, but creating an AI that delivers an experience in which fast and accurate replies are possible requires advanced Natural Language Processing and Deep Learning Techniques. It mandates an understanding of human language in its most basic form (messy, colloquial, erroneous) which is no easy task for an AI. In fact, it’s incredibly complicated. 
Representation of a Neural Network Architecture There are very few companies today that are doing conversational AI well, and even fewer who have the expertise to scale their solutions. Clinc is one of those companies, one whose executive team is a blend of Professors and PhDs in machine learning and scalable systems mixed with proven leaders in enterprise success and profitability. Clinc is able to use the latest advancements in science and technology to redefine the conversational AI experience and deploy it to millions of users. An Overwhelming Force As the chatbot hype cycle continues to diminish, the underlying purpose for conversational interfaces can begin its realization and dissemination into the marketplace. There is a hunger among consumers for new, extraordinary experiences, and what could be more extraordinary than merging the most natural, human quality (voice) with the most unnatural one (AI)? As the CEO of CapGemini put it, “Brands that are able to capitalize on the huge consumer appetite around voice assistants will not only build closer relationships with their customers, but create significant growth opportunities for themselves” As companies like Clinc branch their solution to more enterprises, in more industries, the original promise of Conversational AI will finally be delivered.
Beyond Chatbots, Can Voice Deliver?
1
beyond-chatbots-can-voice-deliver-1c47f7949996
2018-04-14
2018-04-14 21:36:44
https://medium.com/s/story/beyond-chatbots-can-voice-deliver-1c47f7949996
false
742
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Emma Furlong
AI, Machine Learning, Fintech Journalist
e981fce32339
emma_98834
14
5
20,181,104
null
null
null
null
null
null
0
# Tape : Sticky :: Oil : X
from scipy.spatial.distance import cosine

tape = [0.1, 0.2, 0.1, 0.2, 0.2, 0]
oil = [0.2, 0.1, 0, 0.2, 0.2, 0]
cosine_distance = cosine(tape, oil)

import numpy as np
tape = np.array([0.1, 0.2, 0.1, 0.2, 0.2, 0.1])
oil = np.array([0.2, 0.1, 0, 0.2, 0.2, 0.1])
sticky = np.array([0.2, 0.0, 0.05, 0.02, 0.0, 0.0])
# For the analogy tape : sticky :: oil : X, the guess is sticky - tape + oil
result_array = sticky - tape + oil

from nltk import stem
# Initialize an empty list
my_stemmed_words_list = []
# Start with a list of words.
word_list = ['duct', 'tape', 'works', 'anywhere', 'magic', 'worshiped']
# Instantiate a stemmer object.
my_stemmer_object = stem.snowball.EnglishStemmer()
# Loop through the words in word_list
for word in word_list:
    my_stemmed_words_list.append(my_stemmer_object.stem(word))
# Resulting (sorted) vocabulary: [anywher, duct, magic, tape, work, worship]

# T = [anywher, duct, magic, work]
# C('magic' | c = 2) = [duct, tape, worship]
# SUM of v2 (duct index) = 0.2 + 0.02 => 0.22
16
f4f1e49a4f74
2017-12-29
2017-12-29 17:59:32
2017-12-29
2017-12-29 18:04:29
4
false
en
2017-12-29
2017-12-29 18:04:29
6
1c48491144f7
8.081132
1
0
0
By Don Vetal
4
NLP Research Lab Part 2: Skip-Gram Architecture Overview By Don Vetal This post is part of a series based on the research conducted in District Data Labs’ NLP Research Lab. Chances are, if you’ve been working in Natural Language Processing (NLP) or machine learning, you’ve heard of the class of approaches called Word2Vec. Word2Vec is an implementation of the Skip-Gram and Continuous Bag of Words (CBOW) neural network architectures. At its core, the skip-gram approach is an attempt to characterize a word, phrase, or sentence based on what other words, phrases, or sentences appear around it. In this post, I will provide a conceptual understanding of the inputs and outputs of the skip-gram architecture. Skip-Gram’s Purpose The purpose of the Skip-Gram Architecture is to train a system to represent all the words in a corpus as vectors. Given a word, it aims to find the probability that the word will show up near another word. From this kind of representation, we can calculate similarities between words or even the correct response to an analogy test. For example, a typical analogy test might consist of the following: In this case, an appropriate response for the value of X might be “Slippery.” The output for this model, which is described in detail below, results in a vector of the length of the vocabulary for each word. A practitioner should be able to calculate the cosine distance between two word vector representations to determine similarity. Here is a simple Python example, where we assume the vocabulary size is 6 and are trying to compare the similarity between two words: In this case, the cosine distance ends up being 0.1105008200066786. Guessing the result of an analogy simply uses vector addition and subtraction and then determines the closest word to the resulting vector. For example, to calculate the vector in order to guess the result of an analogy, we might do the following: Then you can simply find the closest word (via cosine distance or other) to the result_array, and that would be your prediction. Both of these examples should give you a good intuition for why skip-gram is incredibly useful. So let's dig into some of the details. Defining the Skip-Gram Structure To make things easy to understand, we are going to take a look at another example. “Duct tape works anywhere. Duct tape is magic and should be worshiped.” ― Andy Weir, The Martian In the real world, a corpus that you want to train will be large; at least tens of thousands of words if not larger. If we trained the example above in the real world, it wouldn’t work because it isn’t large enough, but for the purposes of this post, it will do. Preparing the Corpus for Training Before you get to the meat of the algorithm, you should be doing some preparatory work with the content, just as you would do for most other NLP-oriented tasks. One might think immediately that they should remove stopwords, or words that are common and have little subject oriented meaning (ex. the, in, I). This is not the case in skip-gram, as the algorithm relies on understanding word distance in a paragraph to generate the right vectors. Imagine if we removed stop words from the sentence “I am the king of the world.” The original distance between king and world is 3, but by removing stopwords, the distance between those two words changes to 1. We’ve fundamentally changed the shape of the sentence. However, we do probably want to conduct stemming in order to get words down to their core root (stem). 
This is very helpful in ensuring that two words that have the same stem (ex. ‘run’ and ‘running’) end up being seen as the same word (‘run’) by the computer. A simple example using NLTK in Python is provided below. The result is as follows: Building the Vocabulary To build the final vocabulary that will be used for training, we generate a list of all the distinct words in the text after we have stemmed appropriately. To make this example easier to follow, we will sort our vocabulary alphabetically. Sorting the vocabulary in real life provides no benefit and, in fact, can just be a waste of time. In a scenario where we have a 1 billion word vocabulary, we can imagine the sorting taking a long time. So without any further delay, our vocabulary ends up becoming the following: The following stopwords would also be included: [is, and, should, be]. I’m leaving these out to keep this example simple and small, but in reality, those would be in there as well. The Training Set Just like with other statistical learning approaches, you’ll need to develop some methodology for splitting your data into training, validation, and testing sets. In our specific example, we’ll make 2/3 of the total vocabulary our training set through a random selection. So lets suppose our training set ends up being the following after a random selection: Which means we have 4 training samples t1 through t4 (T={t1,t2,t3,t4}). The vectors used to feed the input layer of the network are as follows: The Input Suppose your only goal was to find the probability that “work” shows up near “tape.” You can’t just throw one example at the Neural Network (NN) and expect to get a result that is meaningful. When these systems (NN) are trained, you will eventually be pushing the bulk of the vocabulary (your training set) into the input layer and training the system regardless of the specific question you may be asking. Our input layer is a vector that is the length of the vocabulary (V) and we have four training samples, one for each word. So the total set of data pushed through the NN during training-time is of size VxT (6x4). During training time, one of the samples in T is input into the system at a time. It is then up to the practitioner to decide if they want to use online training or batch inputs before back-propagating. Back-propagation is discussed in our back-propagation blog post, which will be published soon. For now, don’t worry about those details. The point here is to conceptually grasp the approach. The insertion into the input layer looks something like the following diagram: Each sample of array length V (6) represents a single word in the vocabulary and its index location in the unique word vocabulary. So let’s review our objective here. The objective of the Skip-gram model, in the aggregate, is to develop an output array that describes the probability a word in the vocabulary will end up “near” the target word. “Near” defined by many practitioners in this approach is a zone of c words before and after the target word. This is referred to as the context area or context zone. So in the example below, if the context size is 2 (c=2) and our target word is “magic,” the words in the context area are C shown below. The Output To get to the point where we get a single array that represents the probability of finding any other word in the vocabulary in the context area of the target word, we need to understand exactly what the NN output looks like. To illustrate, I’ve provided the diagram below. 
In this diagram, if we choose a context size of one, it means we care about words that appear only directly before and directly after the target word.The output layer includes two distinct sets of output values. If our context size was 5, we’d end up with 60 output values. Each word in the vocabulary receives two values for our context size of one. To get the score for an individual word in the context area, we can simply sum up the values. So for example, the score for “duct” (v2) showing up within the context area is 0.22 (0.2 + 0.02). You may have noticed we are calling the results scores instead of probabilities. That is because the raw output of the skip-gram architecture does not produce probabilities that add up to 1. To convert the scores to probabilities, you must conduct a softmax calculation to scale them. The purpose of this post isn’t to describe softmax, so we are just going to pretend the values in the diagram are probabilities. The Error Calculation At the end of forward propagation (stay tuned for the forward-propagation blog we have coming up next), you need to calculate an error in order to backpropagate. So how do you do that? It’s actually pretty easy. We already know what the actual probabilities are of finding a word in the context area of a target word based on our corpus. So for example, if we wanted to know the error in the probability of finding “duct” given “magic” as the target word, we would do the following. In our corpus, the actual probability of finding “duct” in the context area around “magic” is 100% because “magic” is only used once and “duct” is within the context zone. So the absolute error in probability is 1–0.22 = 0.78, and the mean squared error (MSE) is 0.61. This error is used in backpropagation which re-calculates the input and output weight matrices. Conclusion What I have given you is a conceptual understanding of what the input vectors look like and what the output vectors look like, but there are other components of the algorithm that will be explained in upcoming blog posts. There is a weight matrix between inputs and the hidden layer. There is a weight matrix between the hidden layer to the outputs. The input weight matrix (1) is the matrix that becomes the vectors for each word where each row is a word and the vector is of length H (H is the number of nodes in the hidden layer). The output vectors simply give the score for a word being in the context zone and are really not used for anything other than training and error calculation. It is important to understand that a practitioner can choose any number of H nodes for the hidden layer; it is a hyperparameter in training. Generally, the more hidden layer nodes you have, the more expressive (but also the more computationally expensive) a vector is. The output weight matrices are not used outside of the context of training. To learn more about these details and what the process of forward propagation is, please check out our forward propagation blog post, which is coming up next. District Data Labs provides data science consulting and corporate training services. We work with companies and teams of all sizes, helping them make their operations more data-driven and enhancing the analytical abilities of their employees. Interested in working with us? Let us know!
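To make the score-to-probability conversion and the error calculation above concrete, here is a minimal Python sketch; the raw scores are made-up illustrative numbers, except that "duct" is given the 0.22 score worked out in the text:

import numpy as np

def softmax(scores):
    # Rescale raw output scores into probabilities that sum to 1.
    e = np.exp(scores - np.max(scores))  # subtract max for numerical stability
    return e / e.sum()

# Illustrative raw scores for the 6-word vocabulary (assumed numbers);
# index 1 is "duct", whose two context-position values sum to 0.2 + 0.02 = 0.22.
raw_scores = np.array([0.10, 0.22, 0.05, 0.30, 0.15, 0.18])
probs = softmax(raw_scores)

# Error calculation from the text: in this toy corpus "duct" always appears
# in the context of "magic", so its true probability is 1.0. Here we treat the
# raw score as a probability, just as the article does for simplicity.
abs_error = 1.0 - raw_scores[1]   # 1 - 0.22 = 0.78
mse = abs_error ** 2              # 0.78**2 ~= 0.61
print(probs.round(3), round(abs_error, 2), round(mse, 2))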
NLP Research Lab Part 2: Skip-Gram Architecture Overview
1
nlp-research-lab-part-2-skip-gram-architecture-overview-1c48491144f7
2018-05-28
2018-05-28 10:21:44
https://medium.com/s/story/nlp-research-lab-part-2-skip-gram-architecture-overview-1c48491144f7
false
1,956
Data science tutorials, thought pieces, and other awesome content.
null
DistrictDataLabs
null
District Data Labs
tojeda@districtdatalabs.com
district-data-labs
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,ANALYTICS,BIG DATA
DistrictDataLab
Machine Learning
machine-learning
Machine Learning
51,320
District Data Labs
Data science consulting firm, research lab, and open source collaborative.
96c976e31f28
DistrictDataLabs
921
471
20,181,104
null
null
null
null
null
null
0
null
0
9c0fa9ca3c8e
2017-11-25
2017-11-25 22:24:34
2017-11-25
2017-11-25 22:34:14
1
false
en
2017-12-01
2017-12-01 21:24:11
19
1c48ba0209a9
7.373585
11
0
0
Some of you liked my computer sciencey take on improvisation so I’m going to present a nerd’s view of another topic close to my heart.
5
Kolmogorov Complexity: the Thing that gives Energy to Dance Music Some of you liked my computer sciencey take on improvisation so I’m going to present a nerd’s view of another topic close to my heart. I’m going to call it musical energy. Perhaps there is a formal term for this, but if so I don’t know it. From a DJ’s perspective, it’s the characteristic possessed by a tune that fills the dancefloor and gets the crowd moving. Of course a tune has to be subjectively good to do these things as well, but that alone is not sufficient: there are plenty of great tunes that don’t have high energy. Nor does every tune in a DJ set have to be high energy — it’s fun, but not always desirable to run at 110% from start to finish. Energy is not the same thing as tempo. Tempo is the number of beats per minute, a completely objective quality. Energy is a subjective modifier of tempo: to me, if I listen to two tunes with identical BPM, the one with higher energy feels faster. Energy is not the same thing as busyness. Having more things happen at once doesn’t necessarily mean higher energy. So what do we mean by energy, then? I gave the answer away in the title In information theory there’s this thing called Kolmogorov Complexity. The complexity of a given piece of information is defined as the shortest computer program, in a given language, that could reproduce the information. For example, take the decimal number consisting of 1 followed by 200 zeros: 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 In Python, I could express that as 10**200 which takes much less space than the original. But how about this one? 553527206377988085028250004204760250426513901637831154329418125409873767123121944013989197021710825182691896946905749882686222926272883076921556894389388071672122002722108452043935934972665104877104693 Good luck finding a way to shorten that one; it’s near enough random. Music doesn’t work quite like this. For one thing, in information theory, white noise has maximum complexity, while to a human, it sounds like a fairly boring hiss. This is because our brains aren’t able to distinguish one white noise from another. Recall the definition of complexity: the shortest computer program (in a given language) that could reproduce the information I think this definition can work for musical energy, if we define (1) the information = our subjective experience, (2) the given language = whatever native code runs in our brains. In other words, this elusive musical energy is the amount of work our brains have to do to understand what they’re experiencing. Once you start thinking like that, it gives an insight into various musical phenomena. Bear in mind that the work your brain is doing can be either conscious or subconscious, but usually it’s the latter. Producers try to become conscious of this stuff, though they might disagree about how it works (other opinions are welcome in the comments below!). Syncopation Syncopation is when two musical themes appear out of phase with one another. Classic example embedded below: God is a DJ. I can’t actually think of an unsyncopated example in dance music because it’s such a common trick to use; not only because it raises energy, but also because it stops your lead synth from clashing with the kicks and snares in the mix, so each part can be mixed louder. 
( Also on iTunes / Amazon ) Given the lack of unsyncopated examples, how about I make my own (banging, yes I know)… All three sections above (A, B, C) have the same tempo of kick drums; i.e. the speed at which people would dance. They also contain the same number of sounds. But A has no syncopation while C has a lot. I would say this means they rank in ascending order of energy. My pet hypothesis is that to understand what’s going on in a rhythm, my brain has to create a sort of beat grid on which it can hang the various sounds. In A the beat grid is 120 beats per minute: every sound falls exactly on a beat. In B, the beat grid is 240 beats per minute, as the bongos fall halfway between the kicks and snares. In C — the most interesting — you need a grid of 480 beats per minute to understand the location of the first bongo hit, even though C is no more busy than B, and has the same tempo as A. The bongo hits don’t even need to be very loud to make this work. They just need to be noticeable enough to trigger our brains into processing them. Of course if you shift that bongo hit a touch earlier or later, then theoretically you’d need a much faster beat grid to explain its position in time. But that doesn’t work. Why? Our brains aren’t capable of conceiving that grid. They work by pattern matching, which I suspect means if they can’t process some extra complexity, they will usually match the simplest pattern they can process — and hence won’t do any extra work. Hints of speech There’s a corollary to that last part, which is this: if our music receiving brains can process something they probably will. A common trick in EDM is to throw in some sounds which vaguely resemble speech (using vocoders, formant filters, etc). Our brains have dedicated hardware for speech processing, so this engages more circuitry to process it, compared to equally busy sounds that aren’t speechlike. There are a lot of ways this tactic can be deployed, so here are a few good examples: Hedflux — Music Is My Weapon: iTunes / Amazon / YouTube GMS vs Eskimo — GMS-Kimo: Amazon / YouTube (surprised iTunes doesn’t have this) Bushwacka — Feel It (Original): iTunes / Amazon (James Lavelle remix — slightly more modern dancefloor sound): Amazon CD / YouTube The more escapist end of the dance music world will tend to avoid too much actual speech, because it’s too grounding, an everyday phenomenon that doesn’t excite the listener. Short repeated loops are common, though, as the unnatural repetition cancels the grounding effect. Liminality If your brain isn’t quite sure whether something is there or not, it’s going to expend extra effort to try and find out. So, sounds that exist at the threshold of perception are another way to give music more energy. The obvious way to do this — and most quality music will, especially classical — is to have some quieter things going on beyond the main lead. But there are other ways to confuse us as well. One I particularly like is the perceptual threshold between pitched and unpitched sounds. This can be achieved in a variety of ways: short notes, pitch slides, frequency modulation… the detuned hypersaw (aka the “hoover”, a combination of multiple saw waves not quite in tune) has a sweet spot where it turns from a single note to chaotic noise. High pass filters, by removing the fundamental for our brains to latch onto, can exacerbate this. For a super high energy example of all these combined — and I warn you this is overstimulating in the best possible way — listen to Anmitzcuaca. 
Száhala is a computational linguist by training; I doubt it’s a coincidence that he makes use of these perceptual tricks. (Sorry no links — possibly a bit too mental for the stores!) Arrangements Arrangements — the broader structures of tunes — tend to work best when they tread a line between predictability and unpredictability. Obviously, the more predictable an arrangement is, the less work your brain has to do to understand it. Of course an arrangement can also be too unpredictable. I’m not quite sure where to fit completely bonkers arrangements into this framework — hello, breakcore! I think most people look for other things besides energy in dance music, like for example a degree of flow. So perhaps it’s correct to say that breakcore has more energy than everything else, but it comes at the expense of other things which I clearly didn’t care about that day I was dancing to breakcore. Several genres of dance music depend on a build-and-drop structure, which again needs to tread the predictable/unpredictable line very delicately. That’s worth a whole other post another day. Real audio versus synthesizers Many, if not most EDM styles make heavy use of synthesizers — to give the music an otherworldly feel. Nonetheless producers often find that including just a small amount of real audio, perhaps heavily processed but nonetheless recorded via a microphone, adds a degree of depth to their tunes. I think the reason for this is also Kolmogorov complexity. The real world has a tonne of nuance that would take a very long time to reproduce by programming even the best synthesizers out there. It strikes the right balance between too predictable for our brains (like straight repetition) and too random for our brains (like white noise); surely not a coincidence either, considering that the real world is the same information stream that our brains evolved to process. More creative ways to occupy the brain I’m going to wind down with a fantastic tune that combines a lot of this stuff: Sasha’s “Who Killed Sparky”. · The arrangement keeps varying things subtly; I suspect a few real world samples. · Syncopation is everywhere; not just in the beat but in the 3-note lead synth line which is coprime with the number of semiquavers in the bar, in other words though the pattern stays the same each bar starts at a different part of it. (I think this may need two mental beat grids, or a single longer one, to keep track). · The lead synth is in places driven through distortion at a level that makes it unstable — I suspect it’s right on the threshold of a tonal change and would sound very different if driven even slightly more or less. Keeping track of tonal changes in that synth definitely adds a lot of interest. · No obvious vocals, but is that a choir-like sound just as the tune peaks? That would be vocals and liminality combined, then. (also on iTunes / Amazon) There’s one more thing that keeps varying in this tune, and that’s the use of space. One moment we have a dry sound with no reverb, the next a small room, eventually I like to imagine some sort of enormous cathedral in the mountains in outer space (if there was any sound in space which there isn’t, but you get the point). The cuts between these are extremely sudden, and not very predictable. I don’t think the untrained listener would consciously notice all this, but we have special hardware in our heads for processing spatial information in audio, and Sasha certainly makes use of it. One of my favourite pieces of prog. 
Footnote A quick Google search shows me I'm not the only person to apply Kolmogorov complexity to music. This in particular caught my eye. While I don't think its methods are sound and I disagree with the conclusion (which would imply that simple music can never be beautiful), I like how it posits an evolutionary basis for perceiving beauty. Thanks are owed to all those who introduced me to the wonderful tunes I've used as examples here: if I remember rightly, Philippa, Matt, Rob, Dan/Jen/Josh, Tommo. This blog is copied to Medium from my original, which you can follow here.
Kolmogorov Complexity: the Thing that gives Energy to Dance Music
27
kolmogorov-complexity-the-thing-that-gives-energy-to-dance-music-1c48ba0209a9
2018-03-13
2018-03-13 17:21:48
https://medium.com/s/story/kolmogorov-complexity-the-thing-that-gives-energy-to-dance-music-1c48ba0209a9
false
1,901
Writing about things I want to
null
omnisplore
null
CrispinBob
null
crispinbob
null
omnisplore
Music
music
Music
174,961
Crispin Bob
Writing about things I want to
ebd6a84a9455
CrispinBob
6
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-13
2018-03-13 18:54:40
2018-03-13
2018-03-13 19:27:02
0
false
tr
2018-03-23
2018-03-23 06:20:57
0
1c493fe3b52c
0.928302
1
0
0
Since it is frequently used in indoor positioning systems and is easy for beginners to understand, the k nearest neighbour…
2
K Nearest Neighbors (KNN) Since it is frequently used in indoor positioning systems and is easy for beginners to understand, I thought the k nearest neighbour technique would be the right place to start. This post is written for general information, so it does not use academic language, but it was still a demanding piece for me to write. (Terminology from the original: etiket — label, özellik — feature.) The K Nearest Neighbors technique is a supervised machine learning method applied to a dataset whose features are taken as vectors. The vector to be classified is labelled according to the k nearest vectors, where k is a value you choose. As an example, if we take k = 3, then for the vector we want to classify, the 3 vectors in the dataset closest to ours are found using a distance measure we have chosen, and our vector is labelled with whichever class those vectors belong to. Although kNN is a simple method, it is quite powerful. Its important parameters are the number k and the chosen distance measure. In unbalanced datasets, depending on the distance measure and the value of k you choose, there will usually be a tendency for the label to flip. Although the Euclidean distance is generally used, I think a distance measure chosen to suit the characteristics of your dataset will seriously affect the results you get. The Python scikit-learn library also lets you write your own distance function (a sketch follows the example code below). Some implementations of this method also incorporate weighting, but I think giving it a new name would be redundant; see: WKNN. Example code:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, r2_score
import pandas as pd

# Load the Iris dataset and separate the features from the class label.
data = pd.read_csv("Iris_Data.csv")
print(data.head())
features_data = data.drop(['species'], axis=1)
label = data['species']

# Hold out 30% of the rows for testing.
X_train, X_test, y_train, y_test = train_test_split(features_data, label, test_size=0.3)
print(features_data.head())
print(label)

# Classify each test point by a majority vote of its 12 nearest neighbours.
KNN = KNeighborsClassifier(n_neighbors=12)
KNN.fit(X_train, y_train)
print(KNN.score(X_test, y_test))
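As the post mentions, scikit-learn will also accept a user-defined distance function. Here is a minimal sketch of how that might look, assuming the same X_train and y_train split as in the example above; the manhattan_like function is purely illustrative and is not from the original post.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def manhattan_like(a, b):
    # A user-defined distance: the sum of absolute feature differences.
    # Any callable that takes two 1-D arrays and returns a float will work.
    return float(np.sum(np.abs(a - b)))

# Passing a callable as `metric` makes the classifier call manhattan_like
# for every pairwise distance it needs (noticeably slower than built-in metrics).
knn_custom = KNeighborsClassifier(n_neighbors=3, metric=manhattan_like)
# knn_custom.fit(X_train, y_train) and knn_custom.score(X_test, y_test)
# would then be used exactly as in the example above.
Whether a hand-rolled metric helps depends on the data; for indoor-positioning fingerprints, for instance, one might want a distance that copes with access points missing from a scan, which plain Euclidean distance does not handle gracefully.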
En Yakın k Komşu (KNN)
1
en-yakın-k-komşu-knn-1c493fe3b52c
2018-03-23
2018-03-23 06:20:58
https://medium.com/s/story/en-yakın-k-komşu-knn-1c493fe3b52c
false
246
null
null
null
null
null
null
null
null
null
En Yakın K Komşu
en-yakın-k-komşu
En Yakın K Komşu
1
umut can altın
Anadolu University EEE
229110dbb06a
umutcanaltin1
5
33
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-09-25
2018-09-25 10:12:32
2018-09-25
2018-09-25 10:14:02
3
false
en
2018-09-25
2018-09-25 10:51:17
1
1c49e71a1ce5
4.806604
5
0
0
“Can machines think?” — Alan Turing in 1950
5
Complete overview of the AI startup ecosystem in India “Can machines think?” — Alan Turing in 1950. From a checker-playing program written in 1951 to machines that can recognize a human face, drive a car and diagnose cancer, artificial intelligence has come a long way. According to the World Economic Forum, there has been a 9x increase in the number of papers published on AI, a 6x increase in VC funding into AI startups and a 14x increase in the number of startups working on AI since 2000. India too has seen significant traction — Indian AI startups raised over Rs. 600 crores in 2017 alone, the Indian government allocated Rs. 3,000 crores for AI, ML and IoT in the recent budget, and the Karnataka government has invested in AI startups and committed Rs. 40 crores for an AI hub. The Telangana government has partnered with NASSCOM to set up a center of excellence for Artificial Intelligence and Data Science with a joint initial investment of Rs. 40 crores. The Telangana government also established an open data policy in 2016, with over 50 high-quality datasets open to the public for deriving meaningful insights or building AI-based solutions. If we look deeper, there is no AI market per se; rather, there is an AI niche in every possible market, slowly changing the way business is done. Startups have picked specific problems to solve using AI and tried to build a business model around them. Markets that have seen significant traction in the application of AI are - Here are some prominent Indian startups in the AI space - Ed-Tech Embibe — A Bangalore-based ed-tech startup founded by Aditi Avasthi, it has raised over $9 million from Kalaari and Lightbox. Its learning platform is being used by thousands of students. They collect data from students and provide personalized learning recommendations; students can actually improve test scores by fixing basic mistakes using its AI platform. Lernr — A Gujarat-based ed-tech startup founded by Arnav and Prashant, it has raised a seed round from the celebrated Anand Chandrashekaran. Lernr is a social learning and skill-sharing startup that lets you meet and learn anything from someone interesting near you, for free. It combines machine and human intelligence to curate and deliver personalized learning experiences. Fin-tech FundsIndia — A Chennai-based fintech firm that offers robo-advisory services. It has over 11 lakh customers investing over Rs. 5,000 crores. They have a robo-advisor called Mithr that analyzes and helps pick the best mutual funds and SIPs for users. Active.ai — A Bangalore-based firm founded by former bankers, it has raised over $3.5 million from Kalaari and IDG Ventures. It is an intelligent interface that allows banks and consumers to connect over chat. Health-care Sigtuple — A Bangalore-based healthcare startup founded by former American Express executives, it has raised over $6.5 million from IDG, Pi Ventures, Accel Partners and others. They are helping hospitals and healthcare centers improve the speed and accuracy of blood reports. Tricog — A Bangalore-based healthcare startup that has raised over $2 million from Inventus Capital and Blume Ventures. Tricog set out to help doctors make instant diagnoses of heart attacks and ensure treatment is not delayed; it achieves in a few minutes what may otherwise take up to 6 hours. Chatbots Niki.ai — A Bangalore-based firm founded by IIT Kharagpur alums, it has raised an undisclosed amount from Ratan Tata and Ronnie Screwvala. 
It is a chatbot that offers services like hotel bookings, bill payments, ticket reservations, etc., and it generates revenue from the commission on services booked through Niki. Haptik — A Bangalore-based virtual assistant founded by Aakrit Vaish and Swapan Rajev, it has raised a Series B from Times Internet and entered into a strategic alliance with them. It is one of the world’s largest chatbot platforms. Logistics/Supply chain Rivigo — A Gurgaon-based, AI-enabled logistics services provider founded by Garg and Gazal Kalra, it is valued at over $900 million. Rivigo offers pan-India delivery services to e-commerce, pharma, automobile, cold-chain and FMCG players. Locus.sh — A Bangalore-based firm founded by Nishith Rastogi, it has raised over $2.75 million from Blume, BeeNext and others. Locus has developed route-planning algorithms so companies can chart the best possible route to deliver an order and allow a salesperson to cover the maximum number of points in the shortest time possible. Services Qubole — A BDaaS (Big-Data-as-a-Service) company founded by Ashish Thusoo and Joydeep Sen Sarma, it has raised over $75 million in total. Qubole claims to be the largest cloud-agnostic big data platform in the world. It’s also building the industry’s first autonomous cloud-based data platform — Qubole Data Services (QDS). Gnani — A Bangalore-based speech analytics firm founded by former Texas Instruments executives Ananth and Ganesh, it has raised a seed round from the Karnataka IT ministry. They are building solutions for enterprise process automation and machine-powered speech transcription. Startups have enjoyed a lot of interest from angel investors. Manish Singhal has invested in 4 AI startups and founded the AI-focused fund Pi Ventures. Anand Ladsaria, one of the most prominent angel investors in India, who has backed over 90 startups, is also betting on AI startups. Other prominent angel investors investing in AI are Ratan Tata, Sachin & Binny Bansal, Ravi Gururaj and Pallav Nadhani. Kalaari Capital, IDG Ventures and Blume Ventures are among the top VCs backing AI in the country. Interestingly, not many startups have been able to go past the angel round; most of them fail at the early stage. There are some problems these startups usually face - Data — Startups do not have access to the large volumes of quality data needed to train their systems, unlike the tech giants. It is also quite expensive to retrieve data and reduce it to a usable form. Talent — AI is a relatively new technology for mass adoption, and finding the right talent is a huge challenge for founders. Market — Some startups build a great AI product, but unfortunately the market is often too niche for it to become a scalable business. Use case — AI startups without a proper use case risk diluting their focus, creating confusion between investors and management. Some of these challenges are not far from being overcome, with investments now coming in at the early stage. AI’s evolution is currently being steered by the exponential growth in computing power and smart-device ecosystems. With low computing and storage costs, advanced algorithms and the increased availability of AI talent, we are going to see what is expected to be the fourth industrial revolution. Vyshak Iyengar I write about business. https://www.linkedin.com/in/vyshakiyengar/
Complete overview of the AI startup ecosystem in India
20
ai-startups-in-india-1c49e71a1ce5
2018-09-27
2018-09-27 10:35:43
https://medium.com/s/story/ai-startups-in-india-1c49e71a1ce5
false
1,128
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Vyshak Iyengar
23 years old, super passionate about business
c143f85a7d17
vyshakiyengar
45
60
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-12
2017-12-12 12:44:43
2017-12-12
2017-12-12 14:03:54
1
false
en
2017-12-12
2017-12-12 14:03:54
2
1c4cae5c02f1
3.471698
62
9
0
Artificial intelligence looks tailor-made for incumbent tech giants. Is that a worry?
4
The battle in AI Artificial intelligence looks tailor-made for incumbent tech giants. Is that a worry? Two letters can add up to a lot of money. No area of technology is hotter than AI, or artificial intelligence. Venture-capital investment in AI in the first nine months of 2017 totalled $7.6bn, according to PitchBook, a data provider; that compares with full-year figures of $5.4bn in 2016. In the year to date there have been $21.3bn in AI-related M&A deals, around 26 times more than in 2015. In earnings calls public companies now mention AI far more often than “big data”. At the heart of the frenzy are some familiar names: the likes of Alphabet, Amazon, Apple, Facebook and Microsoft. A similar, though less transparent, battle is under way in China among firms like Alibaba and Baidu. Several have put AI at the centre of their strategies. All are enthusiastic acquirers of AI firms, often in order to snap up the people they employ. They see AI as a way to improve their existing services, from cloud computing to logistics, and to push into new areas, from autonomous cars to augmented reality (see article). Many observers fear that, by cementing and extending the power of a handful of giants, AI will hurt competition. That will depend on three open questions, involving one magic ingredient. AlphaGone The tech giants certainly have big advantages in the battle to develop AI. They have tonnes of data, oodles of computing power and boffins aplenty — especially in China, which expects to charge ahead. Imagine a future, some warn, in which you are transported everywhere in a Waymo autonomous car (owner: Alphabet, parent of Google), pay for everything with an Android phone (developer: Google), watch YouTube (owner: Google) to relax, and search the web using — you can guess. Markets with just a handful of firms can be fiercely competitive. A world in which the same few names duke it out in several industries could still be a good one for consumers. But if people rely on one firm’s services like this, and if AI enables that firm to predict their needs and customise its offering ever more precisely, it will be burdensome to switch to a rival. That future is still a long way off. AI programs remain narrowly focused. Moreover, the ability of the incumbents to perpetuate their advantages is made uncertain by three questions. The most important is whether AI will always depend on vast amounts of data. Machines today are usually trained on huge datasets, from which they can recognise useful patterns such as fraudulent financial transactions. If real-world data remain essential to AI, the tech superstars are in clover. They have vast amounts of the stuff, and are gaining more as they push into fresh areas such as health care. A competing vision of AI stresses simulations, in which machines teach themselves using synthetic data or in virtual environments. Early versions of a program developed to play Go, an Asian board game, by DeepMind, a unit of Alphabet, were trained using data from actual games; the latest was simply given the rules and started playing Go against itself. Within three days it had surpassed its predecessor, which had itself beaten the best player humanity could muster. If this approach is widely applicable, or if future AI systems can be trained using sparser amounts of data, the tech giants’ edge is blunted. But some applications will always require data. How much of the world’s stock of it the tech giants will end up controlling is the second question. 
They have clout in the consumer realm, and they keep pushing into new areas, from Amazon’s interest in medicine to Microsoft’s purchase of LinkedIn, a professional-networking site. But data in the corporate realm are harder to get at, and their value is increasingly well understood. Autonomous cars will be a good test. Alphabet’s Waymo has done more real-world testing of self-driving cars than any other firm: over 4m miles (6.5m kilometres) on public roads. But established carmakers, and startups like Tesla, can generate more data from their existing fleets; other firms, like Mobileye, a driverless-tech firm owned by Intel, are also in the race. The third question is how openly knowledge will be shared. The tech giants’ ability to recruit AI expertise from universities is helped by their willingness to publish research; Google and Facebook have opened software libraries to outside developers. But their incentives to share valuable data and algorithms are weak. Much will depend on whether regulations prise open their grip. Europe’s impending data-protection rules, for example, require firms to get explicit consent for how they use data and to make it easier for customers to transfer their information to other providers. China may try to help its firms by having negligible regulation. The battle in AI is fiercest among the tech giants. It is too early to know how good that will be for competition, but not to anticipate the magic ingredient that will determine the outcome: the importance, accessibility and openness of data. This article first appeared in the Leaders section of The Economist on December 7th 2017.
The battle in AI
290
the-battle-in-ai-1c4cae5c02f1
2018-06-11
2018-06-11 16:56:03
https://medium.com/s/story/the-battle-in-ai-1c4cae5c02f1
false
867
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The Economist
Insight and opinion on international news, politics, business, finance, science, technology, books and arts.
bea61c20259e
the_economist
333,655
36
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-19
2018-01-19 17:41:07
2018-01-19
2018-01-19 17:51:00
1
false
en
2018-01-19
2018-01-19 17:51:00
1
1c4d23f44fe7
2.535849
0
0
0
Deep learning and AI is changing the world by the minute and it’s growing at a scary rate! Each discovery and each improvements are…
1
Deep Learning Tensorflow: Artificial & Convolutional Neural Networks, Artificial Intelligence Machine Learning In Python Deep learning and AI are changing the world by the minute, and the field is growing at a scary rate! Each discovery and each improvement is important, and they are not all made by ‘’geniuses’’. But if you don’t know why deep learning is fundamental, how to model with deep architectures, how to use reinforcement learning and how to use TensorFlow, you can’t compete with the people already in the field or discover and create new technologies for the future. What if you could change that? My complete Deep Learning & Tensorflow course will show you the exact techniques and strategies you need to master deep learning, apply artificial & convolutional neural networks, use Boltzmann machines and use Tensorflow easily. For less than a movie ticket, you will get over 4 hours of video lectures and the freedom to ask me any questions regarding the course as you go through it. :) What Is In This Course? Your Deep Learning Skills Will Never Be The Same. Unless you’re already an expert at deep learning and AI, able to apply convolutional neural networks, understand self-organizing maps, and use autoencoders, TensorFlow and Python, you are going to lose many job and career opportunities, or miss out on this incredible AI growth trend. Whether it’s self-driving cars or autonomous robots, AI is here to stay! As Andrew Ng, a Chinese-American computer scientist, says: “Artificial Intelligence is the new electricity.” This is offered with a 30-day money-back guarantee. You can try it with no financial risk. In This Deep Learning Training, You’ll Learn: What’s The Difference Between Artificial Intelligence, Machine Learning, And Deep Learning? Training And Modeling With Deep Architectures Deep Learning With MATLAB Artificial Neural Networks & Convolutional Neural Networks Deep Learning Models Boltzmann Machines Basic Stochastic Structure The Basic Principles Of Unit State Probability And The Equilibrium State Hyper Parameters And Regularization Auto Encoders (Encoder Training) Tensorflow Reinforcement Learning — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — Is This For You? Do you want to learn, master and understand deep learning? Are you wondering why you can’t figure out how to use Tensorflow, Boltzmann machines and neural networks? Do you think you will feel proud being able to use and master deep learning? Then this course will definitely help you. This course is essential for all data analysts, entrepreneurs, Python students, web designers, photographers and anyone looking to learn and master deep learning, machine learning and Tensorflow. I will show you precisely what to do to solve these situations with simple and easy techniques that anyone can apply. — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — Why Master Deep Learning & Tensorflow? Let me show you why to master Deep Learning & Tensorflow: 1. You will master deep learning. 2. You will apply artificial neural networks. 3. You will use Boltzmann machines. 4. You will use Tensorflow easily. Thank you so much for taking the time to check out my course. You can be sure you’re going to absolutely love it, and I can’t wait to share my knowledge and experience with you inside it! Why wait any longer? Click the green “Buy Now” button, and take my course 100% risk free now!
Deep Learning Tensorflow: Artificial & Convolutional Neural Networks, Artificial Intelligence…
0
deep-learning-tensorflow-artificial-convolutional-neural-networks-artificial-intelligence-1c4d23f44fe7
2018-04-25
2018-04-25 19:41:35
https://medium.com/s/story/deep-learning-tensorflow-artificial-convolutional-neural-networks-artificial-intelligence-1c4d23f44fe7
false
619
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Martian bazze
null
b921c915f386
martianba103
6
1
20,181,104
null
null
null
null
null
null