| Column | Type | Summary |
|---|---|---|
| audioVersionDurationSec | float64 | 0 – 3.27k |
| codeBlock | string | length 3 – 77.5k |
| codeBlockCount | float64 | 0 – 389 |
| collectionId | string | length 9 – 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | length 19 – 19 |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | length 19 – 19 |
| imageCount | float64 | 0 – 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | length 19 – 19 |
| linksCount | float64 | 0 – 1.18k |
| postId | string | length 8 – 12 |
| readingTime | float64 | 0 – 99.6 |
| recommends | float64 | 0 – 42.3k |
| responsesCreatedCount | float64 | 0 – 3.08k |
| socialRecommendsCount | float64 | 0 – 3 |
| subTitle | string | length 1 – 141 |
| tagsCount | float64 | 1 – 6 |
| text | string | length 1 – 145k |
| title | string | length 1 – 200 |
| totalClapCount | float64 | 0 – 292k |
| uniqueSlug | string | length 12 – 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | length 19 – 19 |
| url | string | length 32 – 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k |
| publicationdescription | string | length 1 – 280 |
| publicationdomain | string | length 6 – 35 |
| publicationfacebookPageName | string | length 2 – 46 |
| publicationfollowerCount | float64 | |
| publicationname | string | length 4 – 139 |
| publicationpublicEmail | string | length 8 – 47 |
| publicationslug | string | length 3 – 50 |
| publicationtags | string | length 2 – 116 |
| publicationtwitterUsername | string | length 1 – 15 |
| tag_name | string | length 1 – 25 |
| slug | string | length 1 – 25 |
| name | string | length 1 – 25 |
| postCount | float64 | 0 – 332k |
| author | string | length 1 – 50 |
| bio | string | length 1 – 185 |
| userId | string | length 8 – 12 |
| userName | string | length 2 – 30 |
| usersFollowedByCount | float64 | 0 – 334k |
| usersFollowedCount | float64 | 0 – 85.9k |
| scrappedDate | float64 | 20.2M – 20.2M |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 – 31 |
| link | string | 230 distinct values |
| authors | string | length 2 – 392 |
| timestamp | string | length 19 – 32 |
| tags | string | length 6 – 263 |
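The per-column summary above mixes three kinds of statistic: value ranges for floats, class counts for booleans and low-cardinality strings, and min/max character lengths for free-text strings. A sketch of how such a summary could be recomputed with pandas; the source file behind this dump is not named, so the toy DataFrame below is a stand-in, and the cardinality threshold that separates the "classes" case from the "lengths" case is an assumption:

```python
import pandas as pd

def profile(df: pd.DataFrame) -> dict:
    """Summarise each column the way the table above does:
    value range for floats, class count for booleans and
    low-cardinality strings, min/max string length otherwise."""
    out = {}
    for col in df.columns:
        s = df[col].dropna()
        if pd.api.types.is_bool_dtype(s):
            out[col] = ("bool", f"{s.nunique()} classes")
        elif pd.api.types.is_float_dtype(s):
            out[col] = ("float64", float(s.min()), float(s.max()))
        elif s.nunique() <= len(s) // 2:  # cardinality cutoff: an assumption
            out[col] = ("stringclasses", f"{s.nunique()} values")
        else:
            lengths = s.astype(str).str.len()
            out[col] = ("stringlengths", int(lengths.min()), int(lengths.max()))
    return out

# Toy stand-in for the unnamed source file.
df = pd.DataFrame({
    "wordCount": [585.0, 1350.0, 584.0],
    "title": ["a", "ab", "abc"],
    "vote": [False, True, False],
})
print(profile(df))
```

On the real data, a column such as `language` (52 values over many thousands of rows) would fall into the "stringclasses" branch, while `text` would report its length range.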
Example 1
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-06-21
createdDatetime: 2018-06-21 06:47:13
firstPublishedDate: 2018-06-21
firstPublishedDatetime: 2018-06-21 06:47:58
imageCount: 0
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-06-21
latestPublishedDatetime: 2018-06-21 06:47:58
linksCount: 2
postId: 1b20971f9bb0
readingTime: 2.207547
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Data Science is considered as an evolutionary statistics which is capable of dealing with a large number with the help of computer science…
tagsCount: 4
text:
Useful Difference between Machine Learning vs Data Science Data Science is considered as an evolutionary statistics which is capable of dealing with a large number with the help of computer science technologies. At the same time Machine Learning Course in Chennai is used synonymously with Data Science that is wrong. It is a major area under that one and only Data Science. This covers the major range of data technologies which includes Python, SQL, Spark and R. Here below we have discussed about the useful difference between Machine Learning vs Data Science. The definition of Machine Learning is given as a study of computers the ability to study without explicit programmed. When this is in process a computer works more accurately by collecting and learning from the given data. Take an example; the user gets more text messages on a call. This can be predicted with faster words accurately. The main key difference Machine Learning vs Data Science Here we have listed the difference between Machine Learning and Data Science is as follows: Components As discussed earlier, Data Science systems coats the entire lifecycle of data and typically having few components to cover Distributed computing-This is a horizontal scalable data processing and distribution. Join Data Science Course in Chennai to enrich your skills and knowledge. BI and Dashboards-It is a predefined dashboard with dice and slice having the ability for higher level of stakeholders Production mode has Deployment-The migrating system into the industry of production standard practices. Data engineering-it makes you cold and hot and accessible always which covers disaster recovery and backup security. Profiling of data and Collection- Job Profiling and ETL (Extract Transform Load) comes under this Automating intelligence- This automation is for automated ML models for the responses online (prediction, recommendations) and detection of fraud. 
Data visualization-This explores the data for a better data intuition which is an integral sectioning the modeling of ML. Automated decisions-Includes running of business logic of data or complex mathematical model by using ML algorithm. Starting of Machine learning models with existing data and typical components as follows · Understand problem · Exploring of Data · Data Preparing · Selecting a model and training · Performance Measure In this ML models the performance measures are considered as crystal clear. Each and every algorithm makes a measure to predict whether it is bad or good about describing the model of data provided. Visualization- In general Data Science represents the data directly by using any popular graphs such as pie, bar etc. In case of ML the visualization is used by a mathematical chart of training data. Development methodology- Projects under Data Science Training in Chennai are aligned more than the engineering project with more clear definitions and milestone. Languages-Languages such as SQL having the syntax is mostly used in the familiar Data Science world. Perl, sed, awk are the popular Data Processing scripting languages. The frameworks are well-supportive and the most widely used category. R and Python is the widely used language in the Machine Learning world. In today’s trend is obtaining more benefits as the new deep learning. In data exploration SQL plays a mandatory role of ML. Conclusion — Machine learning vs Data Science Both Machine learning and Data Science are trying to obtain algorithms for their own learning purpose. Learn with FITA for the best Machine Learning Training in Chennai for a better clarity in your career. The best example for this is the Google’s Cloud Dataprep.
title: Useful Difference between Machine Learning vs Data Science
totalClapCount: 0
uniqueSlug: useful-difference-between-machine-learning-vs-data-science-1b20971f9bb0
updatedDate: 2018-06-21
updatedDatetime: 2018-06-21 06:47:59
url: https://medium.com/s/story/useful-difference-between-machine-learning-vs-data-science-1b20971f9bb0
vote: false
wordCount: 585
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Machine Learning
slug: machine-learning
name: Machine Learning
postCount: 51,320
author: Sharmi Devi
bio: null
userId: 33c23319c124
userName: sharmidevi1007
usersFollowedByCount: 2
usersFollowedCount: 1
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
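The readingTime values in these records appear to follow Medium's published reading-time heuristic: word count divided by 265 words per minute, plus an image allowance of 12 seconds for the first image, decreasing by one second per subsequent image. For the example above, 585 words and 0 images give 585 / 265 ≈ 2.2075 minutes, matching readingTime exactly. A sketch; the 3-second floor for the tenth image onward is part of the heuristic as commonly described, not something these rows (at most 8 images) can confirm:

```python
def medium_reading_time(word_count: int, image_count: int) -> float:
    """Reading time in minutes: 265 wpm, plus 12s for the first
    image, one second less per subsequent image (3s floor)."""
    image_seconds = sum(max(12 - i, 3) for i in range(image_count))
    return word_count / 265 + image_seconds / 60

print(medium_reading_time(585, 0))  # the example above: ~2.2075 minutes
```

The same formula reproduces the other examples in this dump, e.g. 584 words with 5 images gives ≈ 3.0371 minutes.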
Example 2
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2017-12-11
createdDatetime: 2017-12-11 10:39:55
firstPublishedDate: 2017-12-11
firstPublishedDatetime: 2017-12-11 10:41:17
imageCount: 1
isSubscriptionLocked: false
language: da
latestPublishedDate: 2017-12-11
latestPublishedDatetime: 2017-12-11 11:32:02
linksCount: 4
postId: 1b20a5b57b5a
readingTime: 5.29434
recommends: 1
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: I den seneste tid har jeg besøgt en række store og små industrivirksomheder. Jeg ser, at der er en generel og stor forvirring omkring…
tagsCount: 4
text:
Dansk industri taber millioner, selvom vi har talt industri 4.0 i snart 2 år! I den seneste tid har jeg besøgt en række store og små industrivirksomheder. Jeg ser, at der er en generel og stor forvirring omkring teknologi og begreber. Og i særdeleshed om hvordan man benytter teknologien til at skabe mere innovation, afsætte mere eller øge produktiviteten. Vi skal tænke i forretning og være i øjenhøjde med virksomhederne og stoppe brugen af buzzwords. Vi skal bruge teknologien som et middel til at nå virksomhedernes mål. Der er i øjeblikket et meget stort fokus på selve teknologien og ikke så meget på de potentialer, der ligger i at optimere selve forretningen. Det er for mange meget overvældende og uoverskueligt at kaste sig ud i denne “nye” verden, og det er baggrunden for, at jeg føler mig motiveret til at lave en blogserie på tre indlæg. Formålet med indlæggene er gennem eksempler og viden at vise, hvordan teknologi kan inddrages som et værktøj, samt hvad du og din virksomhed får ud af det. De tre indlæg vil handle om datadreven innovation, IoT(Internet of Things) og ML(Machine Learning). Datastrømme Maskiner, systemer osv. er noget, vi alle benytter hver dag. I disse lagres en masse data, som lige nu ikke bliver udnyttet til fulde. Disse værdifulde data kan benyttes til alt. Lige fra at minimere spild i produktion, øge produktiviteten, finde svar på hvorfor vi har succes eller mangel på samme, eller hjælpe til at træffe bedre beslutninger. Når gamle maskiner ikke længere kan opfylde vores krav, har vi en tendens til at skifte dem ud i stedet for at begynde og arbejde med at optimere dem ved hjælp af de data, de indeholder. Det er et skjult og uforløst potentiale. For mange af de virksomheder vi arbejder sammen med, har vi netop kunnet det. Vi har med andre ord forlænget maskinernes levetid. Derfor vil jeg også gennem disse tre blogindlæg komme med vores indsigt og erfaring i håb om at ændre den holdning ”byt til nyt”. 
De fleste af de virksomheder vi besøger, tænker på en ny maskine eller et nyt system som en investering, men at få noget ud af den data disse producerer er en “IT” udgift. Dette paradigme vil vi ændre på. I disse tider er det at få informationen ud af data nok den vigtigste og billigste investering, man kan gøre. Datadreven innovation Tænk på alle de data der florerer rundet i en virksomhed. Mange er efterhånden elektroniske, men mange er stadig analoge, men kan relativt let gøres digitale. I disse år ser flere og flere virksomheder potentialet i deres data ved at det bliver synligt og hjælper med at træffe bedre beslutninger. Hvis du og din virksomhed ikke følger med i den udvikling, bliver i højst sandsynligt udkonkurreret af dem, der gør. Tænk derfor endnu en gang på potentialet, der er i dine data, og som kan bruges til at skabe nye forretningsmæssige muligheder. Data er i dag fundamentet i mange beslutningsprocesser og forudsigelser af fremtidige behov. Mange beslutninger træffes alligevel på et meget løst grundlag, og der er stor usikkerhed omkring udfaldet. Opgaven med at samle og behandle de nødvendige informationer er nærmest en umulig opgave. De gængse værktøjer til at behandle data og vores viden til at bruge dem er begrænsede. Derfor har det i højere grad været lederne og eksperterne, der har brugt deres mavefornemmelse til at træffe beslutninger på baggrund af et begrænset, smalt og manuelt indsamlet datagrundlag. I dag har vi maskiner, der kan håndtere store mængder data, og adgangen til data har aldrig været mere ligetil end i dag. Datadreven innovation er desuden blevet meget mere tilgængeligt for folk uden tung teknisk baggrund. Softwaren til analysering af data er blevet mere intuitiv og billigere. Det har førhen været en meget dyr affære at investere i, men i dag kan alle være med, også de små virksomheder. Sådan kommer du i gang I dag findes der software, som kan samle de store mængder af data og bruge algoritmer til at finde mønstre i disse. 
Men vi skal først finde ud af, hvilke spørgsmål vi vil besvare. Det er her den forretningsmæssige forståelse skal afprøves. Hvad enten det handler om at øge salget, forbedre produktiviteten eller noget helt andet. Vi skal fokusere på det, vi ikke ved. Ud fra denne tanke skal vi herefter udarbejde forskellige antagelser, som vi ønsker at be- eller afkræfte gennem data. Dernæst skal vi finde ud af hvilke data, der er behov for til at besvare disse spørgsmål. Her vil vi, Analytics by Innovation Lab, gå ind i en åben dialog og samarbejde med virksomheden og finde frem til de nødvendige data, der skal analyseres. Og hvad kan du bruge det til? Det at bruge data til nye forretningsmæssige muligheder er stadig i sin vordende begyndelse og rummer nærmest uendelig mange muligheder. Derfor er det nødvendigt at kunne prioritere mellem mulighederne. Ved at have adgang til samt en stor forståelse for, hvad dine data viser, kan du f.eks. se hvilke behov dine kunder har og herigennem tilpasse ydelser eller produkter, så der opnås et større udbytte. Endvidere kan der også skabes en større loyalitet og forståelse med henblik på at fastholde dine kunder. Det koster mere at anskaffe nye kunder end at fastholde eksisterende. Ved hjælp af data kan du mere præcist planlægge og forudsige, hvad der kan ske eller finde ud af grundene til tidligere fiaskoer og fremtidige succeser. Du kan også tage data fra en produktion, for at se om der sker fejl, som man kan rette op på og dermed optimere produktionen og minimere spild af ressourcer. Dette har vi allerede gjort for et plastfirma, hvor de endte med at eliminere et spild på 12 tons plast. En undersøgelse viser, at det er muligt at hæve produktiviteten med 5–6% vha. data, men i vores tilfælde har det været helt op i mod 10%. På trods af dette vil jeg vove den påstand, at potentialet er langt større i mange virksomheder. Mange processer er i dag manuelle og er slet ikke organiseret og standardiseret tilstrækkeligt. 
Med almindelige Lean, Six-Sigma og QRM principper kan man komme langt indledningsvist, men disse metoder begynder også at være tunge at drive, og her kommer den datadrevne tilgang til sin ret. Noget af det man i høj grad benytter data til er at træffe beslutninger. Data giver indsigt, og indsigt leder til beslutninger. Dataene giver evidens for beslutninger frem for en ‘eksperts’ mavefornemmelse. Det gør, at forretninger kan eksperimentere med nye ideer, uden at risikere forretningen på det. Det er dog vigtigt at påpege, at man ikke må fjerne lederens intuition eller mavefornemmelse. Det er der, innovationen sker. Dataen kan dog bakke op om beslutninger, eller modvirke at der tages “forkerte” beslutninger. ‘Eksperter’ har tendens til selvstændigt at træffe valg på baggrund af tidligere erfaringer, fordi de er jo eksperterne, og må derfor være dem, der ved bedst. Alle valg skal begrundes med evidens, for at det er et velovervejet valg. Tidligere erfaring skal kombineres med nuværende data. Så hvad får du ud af det? For at opsummere og gøre det lidt mere overskueligt, hvad dit udbytte bliver ved at investere i ny teknologi, er fordelene listet nedenunder. Du har efterhånden fået et indtryk af, hvordan det at være datadreven kan hjælp på forretningen. Ved at I indfører det i jeres virksomhed kan i f.eks: · Sikre at der bliver foretaget bedre og begrundede valg · Minimere spild i produktionen · Finde ud af hvilke behov kunder/brugere har og tilpasse din virksomhed efter dem. · Fastholde kunder · Være mere eksplorative Men der er en lang række andre muligheder, som vi endnu ikke har taget fat på at beskrive, og det er her, det handler om at være nysgerrig og undersøgende. Dette blogindlæg har forhåbentligt gjort dig klogere på, hvad det at være datadreven har af fordele. 
Det vigtigste lige nu er måske ikke at investere og kaste sig ud i en stor og overskuelig løsning, men derimod at starte i mindre skala og komme i gang, så du sikrer, at din virksomhed er konkurrencedygtig de næste mange år!
title: Dansk industri taber millioner, selvom vi har talt industri 4.0 i snart 2 år!
totalClapCount: 1
uniqueSlug: dansk-industri-taber-millioner-trods-vi-har-talt-industri-4-0-i-snart-2-år-1b20a5b57b5a
updatedDate: 2018-03-23
updatedDatetime: 2018-03-23 21:03:31
url: https://medium.com/s/story/dansk-industri-taber-millioner-trods-vi-har-talt-industri-4-0-i-snart-2-år-1b20a5b57b5a
vote: false
wordCount: 1,350
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Data Driven
slug: data-driven
name: Data Driven
postCount: 358
author: Frederik Bonde
bio: Excited about the opportunities and rapid growth on the Internet of Things and Machine Learning. ilab.dk | #IoT #ML #Analytics #DataScience #Economics #Politics
userId: e2c270ad6ad7
userName: frederikbonde
usersFollowedByCount: 42
usersFollowedCount: 75
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
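scrappedDate is stored as a number like 20,181,104, i.e. the scrape date 2018-11-04 packed as yyyymmdd; this also explains the schema reporting its range as 20.2M – 20.2M rather than anything date-like. A minimal decoder, assuming every row uses this packing:

```python
from datetime import date, datetime

def decode_scrapped_date(value: float) -> date:
    # 20181104.0 -> date(2018, 11, 4): yyyymmdd packed into a number
    return datetime.strptime(str(int(value)), "%Y%m%d").date()

print(decode_scrapped_date(20181104.0))  # every example in this dump
```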
Example 3
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-05-14
createdDatetime: 2018-05-14 15:43:02
firstPublishedDate: 2018-05-14
firstPublishedDatetime: 2018-05-14 16:52:59
imageCount: 5
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-05-14
latestPublishedDatetime: 2018-05-14 16:52:59
linksCount: 7
postId: 1b2193781a6a
readingTime: 3.037107
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: As humans begin to advance and implement artificial intelligence as a widely accepted means of operation and production in the workforce, I…
tagsCount: 1
text:
Automation and Mass Leisure As humans begin to advance and implement artificial intelligence as a widely accepted means of operation and production in the workforce, I can’t bother to ask what this will do to the future economy? Will there be enough jobs to support the ever-growing population or will automated industries completely take over. http://www.futureofwork.com/article/fear-of-automation-is-fear-of-the-future-for-asias-digital-economy I know these questions are becoming more obvious as algorithmic codes seem completely capable of carrying out once irreplaceable jobs, but these questions become even more important when companies such as Google and Boeing are working harder every day to implement automated technology into their products and services. http://www.documentarytube.com/articles/self-driving-cars-when-we-will-have-them Yea you heard that right, Boeing, yep in the next decade Boeing will have an AI airliner (robot pilot) called Aurora flying you to every destination in the world. Scary right, I don’t want a robot flying my plane! when turbulence hits I will be longing for the calming voice of that pilot reassuring me that I’ll get to the ground safe. Instead of that reassurance there will be a robot up in cockpit pushing buttons and pulling levers, without any real compassion for the passenger’s safety. And the only reassurance we will get is the hope that all these things that we programmed will function perfectly. What does this mean for pilots though? And even more important than that Uber drivers? If all of this new AI tech is to be implemented in the next ten years or so, we might see changes in one of the most basic human traits, our effort and cognition. James Hewitt, the head of science and innovation at Hintsa Performance, wrote an article for the World Economic Forum that sheds light on the hidden risks of mass automation. 
Hewitt being a top performance scientist predicts that most jobs in the near future will be replaced with automation, and there are far worse consequences to mass automation than people losing jobs. Hewitt calls this the consequence of effort, or in other words the rise of mass leisure. Instead of explaining this from Hewitt’s point of view, I will show you a picture of a scene from the 2008 movie WALL-E. Humans are interesting in the fact that if they could avoid effort they would. This is troubling when thinking about mass Automation because this basically makes it so that humans can completely avoid physical and mental effort, and still achieve everything we deem productive. This picture of a Human dystopia, were automation and robots supply all the wants and needs of humans beings is what Hewitt and many others are worried about. The fear that once we have automated almost every aspect of life, humans will lose their drive to apply effort to new and old concepts of life. What i’m trying to say by this is that we will lose that good feeling we get when we work hard to accomplish something, and in-turn completely reverse all the hard work humans have done to get to this point of technologically advanced societies. All of this new technology is great and is giving the human race opportunities we have never had, but at what cost. Is the fact that we have so many tools access to social media, and soon to have cars driving themselves a sign that we are becoming lazy? I truly hope not, and would like to see companies think more about the consequences that further automation will have on the world, and peoples mental effort and physical health.
title: Automation and Mass Leisure
totalClapCount: 0
uniqueSlug: automation-and-mass-leisure-1b2193781a6a
updatedDate: 2018-05-14
updatedDatetime: 2018-05-14 16:53:00
url: https://medium.com/s/story/automation-and-mass-leisure-1b2193781a6a
vote: false
wordCount: 584
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Artificial Intelligence
slug: artificial-intelligence
name: Artificial Intelligence
postCount: 66,154
author: Zach Williams
bio: null
userId: 5a6b64510dad
userName: zach11kw
usersFollowedByCount: 0
usersFollowedCount: 1
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Example 4
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-04-21
createdDatetime: 2018-04-21 05:43:19
firstPublishedDate: 2018-09-16
firstPublishedDatetime: 2018-09-16 06:10:33
imageCount: 7
isSubscriptionLocked: false
language: id
latestPublishedDate: 2018-09-16
latestPublishedDatetime: 2018-09-16 06:10:33
linksCount: 11
postId: 1b21dca003f0
readingTime: 4.453774
recommends: 1
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Akhirnya setelah beberapa bulan dari tulisan perdana, konten POV kembali hadir. Mohon maaf atas hiatusnya konten ini dikarenakan banyak…
tagsCount: 5
text:
POV #2: Online Course untuk Data Science “white printing paper with numbers” by Mika Baumeister on Unsplash Akhirnya setelah beberapa bulan dari tulisan perdana, konten POV kembali hadir. Mohon maaf atas hiatusnya konten ini dikarenakan banyak hal. Saya akan berusaha lebih aktif dalam memberikan pandangan mengenai matematika dan data science. Beberapa minggu ini saya disibukkan oleh sebuah online course yang memiliki target penyelesaian materi, bila tidak diselesaikan maka materi tersebut tidak akan dapat saya akses kembali. Cukup kejam? Enggak juga, karena dengan seperti itu maka si pembuat konten akan tahu mana yang benar-benar belajar dengan baik dan mana yang cuma musiman saja belajar. “person using silver MacBook on top of brown table” by rawpixel on Unsplash Sebelumnya saya akan ceritakan dulu tentang online course ini. Jadi sebenarnya namanya adalah MOOC atau Massive Open Online Course. Jadi ini adalah sebuah kursus dimana kita bisa mempelajarinya secara online. Jika dulu kita kursus harus datang ke tempat pelatihan, sekarang cukup dengan smartphone atau laptop kita dapat belajar, anyak hal bahkan. Dari IELTS sampai Data Science, dari memasak sampai politik. Semuanya ada. Kali ini saya akan membahas tentang beberapa online course yang cukup populer di dunia dan bisa digunakan untuk tools belajar Data Science. Yuk diliat. Coursera Salah satu yang cukup populer adalah Coursera. Mungkin beberapa dari kalian sudah mengetahui tentang situs ini. Jadi situs ini menyediakan banyak sekali pilihan materi yang bisa kita pelajari. Salah satu yang terbaik adalah Machine Learning with Andrew Ng bisa kalian cek di sini. Salah satu keunggulan dari Coursera adalah mereka bekerja sama dengan universitas terbaik di seluruh dunia seperti Stanford Univeristy, UC San Diego, University of Michigan, dan lain-lain. Dari sanalah kalian akan belajar banyak hal dari dosen serta akademisi dari sana. Dalam Coursera, mereka ada free course dan paid course. 
Ada banyak yang free, dan tentu saja lebih banyak yang paid juga. Untuk membayar teman teman diharuskan membayar sekitar 40 USD atau sekitar 600 ribu rupiah per bulan. Nah keuntungan dari sistem perbulan ini adalah kalian bebas memilih materi apapun yang tersedia disana disesuaikan dengan seberapa cepat kalian belajar. Tenang, kalian juga akan mendapatkam sertifikat dari sana yang bisa kalian pajang di LinkedIn hehe. Udemy Yang kedua adalah Udemy. Situs ini memungkinkan kalian untuk belajar bermacam-macam hal mulai dari pemrograman sampai desain juga. Pilihan materinya juga bermacam-macam. Yang membedakan antara mereka dan yang lainnya adalah mereka adalah marketplace, artinya kalian harus membeli kursus terlebih dahulu untuk dapat mengakses materinya. Harganya bervariasi, cuma bila kalian jeli kalian akan dapat harga termurah yaitu sekitar 10 USD atau 150 ribu. Bila tidak saat diskon, harga normalnya bisa mencapai 30–50 USD atau sekitar 450–750 ribu. Oleh karena pemilihan waktu sangat penting dalam situs ini. Karena kalian membeli maka kalian dipastikan akan mendapatkan serifikat juga, dan tentunya ilmu yang bermanfaat juga pastinya. Yang saya rekomendasikan dari sini adalah Python for Data Science and Machine Learning Bootcamp dari Jose Portilla, bisa dicek di sini. DataCamp Selanjutnya adalah DataCamp, saya yakin apabila kalian sudah sedikit banyak belajar tentang Data Science maka kalian akan cukup familiar dengan situs ini. DataCamp adalah situs online course yang memang berfokus pada Data Science, disana kalian akan belajar banyak sekali materi tentang Data Science, Data Analyst, dan lain-lain. Sama seperti Coursera, situs ini memiliki free course dan paid course. Ada beberapa materi yang memang dapat diakses secara gratis. Untuk paid course mereka menggunakan sistem bulanan yaitu 29 USD perbulan atau sekitar 435 ribu rupiah. Keunggulannya adalah kalian juga bebas memilih mana saja yang akan kalian pelajari, tanpa batasan. 
Bila kalian menyelesaikan lebih cepat, tentu biaya yang keluar akan lebih terjangkau kan. Juga ada sertifikat kok. Salah satu yang saya rekomendasikan adalah Data Scientist with Python yang bisa dicek di sini. Cognitive Class Cognitive Class, mungkin agak jarang terdengar walau di kalangan Data Scientist situs ini cukup diketahui. Salah satu alasannya adalah karena mereka juga memang fokus ke materi Data Science dan Big Data. Alasan lainnya adalah karena mereka kerjasama denga IBM dan yang paling menarik adalah, semuanya gratis. Gratis. Kalian tidak salah melihatnya. Keunggulan lainnya adalah kalian juga akan mendapat kernel atau workspaces yang dapat kalian download lalu kalian utak atik sesuai keinginan kalian juga. Mungkin kekurangannya adalah kadang materi dalam video tidak selengkap materi di kernel, jadi dipastikan kalian mempelajari keduanya ya. Rekomendasi saya adalah Introduction to Data Science yang dapat kalian cek di sini. Udacity Terakhir yaitu Udacity. Walau sebenarnya masih banyak sekali situs diluar sana yang dapat kalian gunakan untuk belajar Data Science, cuma sementara saya kerucutkan jadi lima dahulu. Udacity cukup populer karena memang mereka memiliki pilihan materi yang beragam, mulai dari Self Driving Car, Data Scientist, Big Data, dan lain-lain. Serta tidak ketinggalan mereka juga bekerjasama, bila Coursera bekerja sama dengan kampus, maka Udacity bekerja sama dengan perusahaan besar. Seperti contoh Google, Amazon, AT&T, dan lain-lain. Mereka juga memiliki free course dan paid course. Untuk free course sayangnya kalian tidak akan mendapatkan certificate of completion, sedang paid pasti dapat. Disini paid course dinamakan Nanodegree Program, dan ya menjadi kelemahannya adalah harganya mahal (banget) bisa 600 USD hingga 1000 USD atau sekitar 9–15 juta rupiah. Namun jangan risau jangan bimbang jangan gundah. Kalian tetap dapat menikmati free coursenya dan berkualitas juga kok. 
Salah satu rekomendasi saya adalah Intro to Data Analyst yang dapat kalian cek di sini. Mungkin itu sih sharing tentang online course yang digunakan buat belajar Data Science. Seperti yang saya katakan, ada banyak sekai resources yang dapat kalian gunakan, dan jangan takut apapun background kalian, kalian dapat menjadi Data Scientist. Yang penting no pain, no gain. Viel Glueck !!!
title: POV #2: Online Course untuk Data Science
totalClapCount: 1
uniqueSlug: pov-2-online-course-untuk-data-science-1b21dca003f0
updatedDate: 2018-09-16
updatedDatetime: 2018-09-16 06:10:33
url: https://medium.com/s/story/pov-2-online-course-untuk-data-science-1b21dca003f0
vote: false
wordCount: 902
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Data Science
slug: data-science
name: Data Science
postCount: 33,617
author: Muhammad Sifa’ul Rizky
bio: Dreamer, football, and code
userId: fb7d41f4af98
userName: msifaulkiki
usersFollowedByCount: 19
usersFollowedCount: 5
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
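Counts in this dump are rendered with thousands separators (wordCount "1,350", postCount "33,617", and scrappedDate itself as "20,181,104"), so they need cleaning before any arithmetic. A small helper, assuming plain comma grouping throughout:

```python
def parse_count(raw: str) -> int:
    """Strip thousands separators from a rendered count."""
    return int(raw.replace(",", ""))

print(parse_count("33,617"))  # postCount of the example above
```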
Example 5
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-03-01
createdDatetime: 2018-03-01 15:13:57
firstPublishedDate: 2018-03-01
firstPublishedDatetime: 2018-03-01 16:19:24
imageCount: 8
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-03-01
latestPublishedDatetime: 2018-03-01 16:19:24
linksCount: 15
postId: 1b23f10da870
readingTime: 4.895597
recommends: 47
responsesCreatedCount: 2
socialRecommendsCount: 0
subTitle: — -
tagsCount: 3
text:
DeepBrain Chain Monthly Report #1 — - Dear DeepBrain Chain Community Members: Over the past two months since we concluded the Token Sale, we have made significant progress in marketing, community building and technological development. We can’t thank our community enough for their support and constructive feedback. It has helped us grow and strive to do better. Starting from April, we will publish monthly reports of the previous month on the first day of each month. Here is a recap of what we have achieved in the first two months of 2018. Development Progress Our dev team have been hard at work on iteration 0 and the first round of code review . Iteration 0 builds the underlying Core layer, including: Modular structure to make it easier for future large-scale parallel development, by referring to modular design. Support topic-based subscription/release mechanism; Abstract inter-modular communications into message bus to reduce module interdependence. Development of the Configuration Management Module. Development of the Log Management Module. Development of the Asynchronous I/O Communications Module Development of the Environment Management Module. Development of the System Startup and Initialization Self-encoding/decoding style protocol stack encoding/decoding, support forward and backward compatibility of protocol fields. Protocol stack encoding/encoding, support IDL-based automatic generation of protocol stock code to reduce development workload. Development of the P2P Network Service Module. Asynchronous Message (Events)-driven Model. Support business functions call similar to “Reflection”. Message (Event)-driven process scheduling model. Cross-platform code encapsulation, Version 0 supports Windows and Linux. Development of the Timer Service Module. After successful testing of our core layers we have uploaded our first commit to Github. 
We will be giving frequent updates on our dev progress however we have made the strategic decision to keep our Github private until we launch the public beta testing and open source the mining software. Marketing and Community Feedback So first of all I think the most common feedback we received was that we were lacking in our marketing department. We had been so focused on attending important blockchain conferences and meetups with industry leaders that we slowed down on the marketing side of things. BUT the community has been heard, we have hired 4 new staff so far in our marketing team and added an additional member to our design team. Cooperation DeepBrain Chain becoming a member of the Korean Blockchain Industry Promotion Association(KBIPA) Hyper Pay & ROOTOKEN Supporting DBC DBC is supported by two more wallets, Hyper Pay, focusing on the Australian market and ROOTOKEN. Conferences Blockchain Connect Conference Date:2018–01–26 Feng He, CEO of DeepBrain Chain Delivering a Keynote Speech at the Blockchain Connect Conference Abstract:As one of the few projects that have practical applications in the industry, DeepBrain Chain has attracted a lot of attention since its creation. Nature listed it alongside Baidu, Tencent and iFLYTEK as “AI Projects to Watch”. With more than 7 years of AI experience, the DeepBrain Chain team has a deep understanding of the industry. This, combined with blockchain’s decentralized and temper-proof nature, provides a perfect solution to the two most pressing issues hindering the development of AI companies. 2018 NEO DevCon Date:2018–01–31 Feng He spoke at NEO DevCon, a gathering of around 700 blockchain enthusiasts and industry leaders. 2018 Decentralized AI Summit Date:2018–02–01 DeepBrain Chain was invited to speak at the 2018 Decentralized AI Summit, presenting alongside speakers from Google and Microsoft. 
Blockchain 3.0 Conference Seoul 2018 Date:2018–02–09 DeepBrain Chain CMO was invited to present at the Blockchain 3.0 Conference Seoul 2018, talking about“Using Blockchain and Token to Unleash the Power of AI”. Community We have had a great community participation in our Telegram group. We’ve more than tripled the number of members since the start of the year and are thoroughly impressed with their participation and feedback. We’ve had great interest in hosting more meetups and helping to make our project easier to understand. First Reddit AMA We held our first AMA on Reddit on February 11th, answering questions regarding technology, partnership, exchanges and the community. The AMA was well-received in the community and we look forward to doing more soon. We’ll likely run one a month. Link to AMA Highlights Extension of the Lockup Period According to our white paper, the DeepBrain Chain team is able to unfreeze 10%, or 350M tokens. However, we are here for the long run and hugely confident about it. We want to grow with the community, so we extended our lockup period by 6 months. Wallet Address Silicon Valley Lab We have started moving on our plans to build an AI and blockchain lab in Silicon Valley, focusing on cutting-edge technology research. The lab will look at making breakthroughs in DeepBrain Chain’s bottom platform architecture, deep learning algorithms and energy efficient training. Summary Since the beginning of 2018, DeepBrain Chain has been covered by more than 10 English and Chinese media outlets. We’ve added two more wallets to store DBC on including Hyper Pay and ROOTOKEN. We joined the Korean Blockchain Industry Promotion Association(KBIPA). We were invited to speak at 4 major blockchain events worldwide. 3 European meetups alongside NEO. We held an interview at the 아시아경제(Aisa Economic TV) We did the first Reddit AMA. Announced the extension of our token lockup. Technology wise, we finished Version 0 development and testing. 
Things to Come We will host more events and increase participation in the Korean market. We will be exploring the ASEAN market. We will get onto more exchanges and wallets. An increased focus on finding early partners to join our platform. Marketing is now a key focus and the team will grow further. A website makeover is already happening. We will interact more frequently with the community and continue to listen to feedback. Community meetups. Exchanges we’re currently on: Huobi.pro, KuCoin, LBank, gate.io. Wallets: NEO-GUI, NEON, NEOtracker, Hyper Pay & ROOTOKEN. If you have any suggestions or feedback, feel free to contact info@deepbrainchain.org or DBC Twitter. Learn more about us: go to the official website here. WeChat ID: DBChain
DeepBrain Chain Monthly Report #1
774
deepbrain-chain-monthly-report-1-1b23f10da870
2018-05-04
2018-05-04 06:34:47
https://medium.com/s/story/deepbrain-chain-monthly-report-1-1b23f10da870
false
997
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
DeepBrain Chain
AI Computing Platform Driven By Blockchain
379a9e7edef2
DeepBrain_Chain
960
4
20,181,104
null
null
null
null
null
null
0
null
0
2141d97ccc50
2018-03-31
2018-03-31 23:15:07
2018-03-31
2018-03-31 23:38:43
1
false
en
2018-03-31
2018-03-31 23:38:43
9
1b24281d0411
4.362264
0
0
0
Very Real Advances in the Hierarchical Temporal Memory Platform
5
Algorithmic Intelligence Very Real Advances in the Hierarchical Temporal Memory Platform I had the great pleasure of dining with Dr. John McCarthy, Professor Emeritus of Computer Science at Stanford University, when he received the 2003 Benjamin Franklin Medal in Computer and Cognitive Science at an award ceremony in Philadelphia. In addition to being credited with coining the phrase “artificial intelligence (AI)” in 1955, Professor McCarthy invented the LISP programming language, the “if-then-else” programming structure, program recursion, and the concept of preemptive multitasking to increase the availability of scarce resources. At the time of the award ceremony, Professor McCarthy was in poor health and not very conversant. Even though he did not regale the members of our table with witty tales of problem solving nor develop a new algorithm in real time, it was obvious we were in the presence of an extremely intelligent human being. Image by kalhh on Pixabay. So the question at hand is: How was a 27-year-old human able to initiate an intense study of learning and intelligence such that a machine can be made to simulate it while, over 50 years later, it appears machines are nowhere close to that goal? Perhaps the answer to this question was offered by Professor McCarthy himself in a paper he published in 1959 wherein he describes the qualification problem, which is concerned with the impossibility of listing all of the preconditions required for a real-world action to have its intended effect. The qualification problem is tightly correlated with the if-then-else structure used in procedural programming and formal logic. In order to select the correct action from the then and else cases, the current state of reality must first be qualified by a series of nested if conditions. This turns into the task of defining all contingency plans before the action can be taken, a task that may itself be more complicated than the action being contemplated.
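The nested-if qualification described above can be sketched directly: each newly discovered contingency forces another guard in front of the action, and the list of guards is open-ended. The scenario and dictionary keys below are invented for illustration.

```python
# Toy illustration of the qualification problem: every contingency we think of
# adds another guard before the action can fire, and the list never ends.
# The scenario and state keys are invented for illustration.

def start_car(state: dict) -> str:
    """Attempt an action only after qualifying the current state."""
    if not state.get("has_fuel"):
        return "abort: no fuel"
    if state.get("battery_dead"):
        return "abort: battery dead"
    if state.get("potato_in_tailpipe"):  # McCarthy's classic contingency
        return "abort: tailpipe blocked"
    # ...an open-ended list of further preconditions would have to go here...
    return "engine started"

print(start_car({"has_fuel": True}))  # engine started
```

The point of the sketch is that no finite set of guards can cover reality: enumerating the preconditions can be harder than performing the action itself.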
The War Operation Plan Response (WOPR) AI depicted in the 1983 film, WarGames, was fictionally set to the task of generating contingency conditions for the problem of global thermonuclear war. WOPR’s ultimate conclusion that “the only winning move is not to play” is not only a poetic treatise on nuclear warfare but also an enigmatic solution to the qualification problem; “It will take infinite time to qualify the problem, therefore the ultimate action is no action at all.” While this philosophical exercise is great fodder for the classroom and journals, there exists a very real need for software systems capable of taking action in real-time situations involving myriad sensor inputs, state variables, situation assessments and environmental conditions. Advances in formal and fuzzy logic, expert systems, neural networks, genetic algorithms, state machines, relational databases, and swarm intelligence each have displayed success in limited environments having well-controlled and highly defined preconditions; however, there is a missing, overall system of algorithms that shows promise of integrating each of these AI components into a functioning whole capable of learning and adapting to an evolving environment of newly-discovered preconditions. Jeff Hawkins addressed this missing link in his 2005 book, On Intelligence [4], in which he stated the case for looking to nature’s solution to the qualification problem in the form of the neocortex of the mammalian brain. This thin layer of networked neurons envelops the brain and serves as an interface between the functional brain subsystems and the outside world, complete with its infinite universe of preconditions.
Hawkins and his colleague and Numenta co-founder, Dileep George, describe the importance of studying the neocortex for its ability to overcome the No Free Lunch (NFL) theorem, which states that a particular learning or optimization algorithm is only superior to all other algorithms because of built-in assumptions that have been placed there by the algorithm’s designer. The NFL is an instance of the qualification problem inasmuch as it states an algorithm works well when the entire set of unknown variables is set to default values. The biological function of the neocortex as theorized by Numenta is that of a memory-prediction algorithm having the following characteristics: The function of the neocortex is to construct a model for the spatial and temporal patterns to which it is exposed. The goal of this model construction is the prediction of the next pattern of input. The neocortex itself is constructed by replicating a basic computational unit or node. The nodes of the neocortex are connected in a tree-shaped hierarchy. The neocortex builds its model of the world in an unsupervised manner. Each node in the hierarchy stores a large number of patterns and sequences. The output of a node is in terms of the sequences of patterns it has learned. Information is passed up and down the hierarchy to recognize and disambiguate information propagated forward in time to predict the next pattern. The predictions made by the neocortex are compared against the pattern of current sensor input. When there is a mismatch, the neocortex is “surprised” that its assumptions were wrong, and it is tasked with finding an appropriate set of assumptions that corrects the situation in a process we call “thinking.” This process is both generic and ubiquitous, does not require specialized NFL design assumptions, and is used to interface all of the senses. In his Ph.D. 
thesis, Dileep George describes the algorithmic and mathematical counterparts of the biological memory-prediction framework that have been developed and known collectively as the Hierarchical Temporal Memory (HTM) algorithm. The HTM has been realized in Python to produce the Numenta Platform for Intelligent Computing (NuPIC) API that is available for download under licenses for academic research and commercial systems. Initial applications have been developed for computer vision that are very robust against noise, scale, inversion, rotation, and perspective that would otherwise introduce an explosion of specific preconditions. Some members of the AI community have criticized the HTM approach as a repackaging of existing technology; however, the HTM framework positions itself as an interface between reality and AI algorithms much like the neocortex serves as a liaison between biological sensor-motor control systems and the world. This material originally appeared as a Contributed Editorial in Scientific Computing 25:9 September/October 2008, pg. 14. William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.
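The memory-prediction characteristics listed earlier (store observed sequences, predict the next pattern, register "surprise" on a mismatch) can be caricatured with a toy first-order sequence memory. This is a sketch of the general idea only, assuming nothing about the real HTM algorithm or the NuPIC API; the class and method names are invented.

```python
# Toy, first-order caricature of the memory-prediction idea (NOT the HTM
# algorithm or the NuPIC API): a node counts observed transitions, predicts
# the most likely next pattern, and registers "surprise" when it is wrong.
from collections import defaultdict, Counter

class SequenceNode:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # pattern -> next-pattern counts
        self.prev = None

    def observe(self, pattern):
        """Feed one input pattern; return True if the node was surprised."""
        surprised = False
        if self.prev is not None:
            predicted = self.predict(self.prev)
            surprised = predicted is not None and predicted != pattern
            self.transitions[self.prev][pattern] += 1
        self.prev = pattern
        return surprised

    def predict(self, pattern):
        """Most frequently observed successor of `pattern`, if any."""
        counts = self.transitions[pattern]
        return counts.most_common(1)[0][0] if counts else None

node = SequenceNode()
for p in "ABCABCABC":
    node.observe(p)
print(node.predict("A"))  # B
```

A real HTM stacks many such nodes into a hierarchy and passes predictions up and down it; this single flat node only illustrates the learn/predict/surprise loop.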
Algorithmic Intelligence
0
algorithmic-intelligence-1b24281d0411
2018-03-31
2018-03-31 23:38:45
https://medium.com/s/story/algorithmic-intelligence-1b24281d0411
false
1,103
Innovation is Elegance. Complex Explanations are Not. Innovation reduces system complexity. This publication seeks to reduce confusion.
null
williamlweaverphd
null
TL;DR Innovation
williamlweaver@gmail.com
tl-dr-innovation
TECHNOLOGY,SCIENCE,INNOVATION,SYSTEMS THINKING,INTELLIGENCE
williamlweaver
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
William L. Weaver
Explorer. Scouting the Adjacent Possible. Associate Professor of Integrated Science, Business, and Technology La Salle University, Philadelphia, PA, USA
286537bc098c
williamlweaver
183
189
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-26
2018-06-26 20:40:21
2018-06-27
2018-06-27 06:53:21
3
false
en
2018-06-27
2018-06-27 06:53:21
97
1b244cd07564
5.825472
4
0
0
General consultation (AO) on the functioning of the civil service on 14 June, looking back on the robotization symposia of the House of Representatives and the ABD, the Data Ethics Framework, and 20 questions before using…
1
Data news inside and outside the Dutch central government, 26–06–2018. General consultation (AO) on the functioning of the civil service on 14 June, looking back on the robotization symposia of the House of Representatives and the ABD, the Data Ethics Framework, and 20 questions before using algorithmic decision-making in the public sector. Infographic of the week: from CPB on the impact of population ageing; from the Dutch Digitalisation Strategy; from the Rijksvastgoedbedrijf on Defence maintenance. Vacancies: Big Data engineer for the Rijks ICT Gilde; LNV, NVWA: junior data scientist, medior data scientists and senior data scientist; BD: data administrator, data investigation specialist; OCW, Inspectorate: (junior) data scientist and (junior) scientific staff member for data and information products; EZK, DICTU: SAS specialist. Agenda: 27–06–2018: Hackathon Regie op Gegevens; 2 July, 13:00–17:00: meeting on artificial intelligence in the government domain; 4 July: When Data Meets Disinformation (public keynote lecture, Summer School ‘Data in Democracy’); 18 September: kick-off meeting of the Open Government pioneers network; 19 & 20 September: Big Data Expo; 27 September, Stationscollege: Big Brother is guiding us (by Nart Wielaard). Data news within the central government: On 14 June the AO on the functioning of the civil service took place. The agenda covered many topics concerning the civil service, from #metoo to information security, master plans and our annual report on operational management, including appreciation from the House for the public version of the annual report that was specifically requested during last year’s AO, and much attention for Rijnstraat 8. For those interested, the debate can be watched back on DebatDirect. JenV: report of the Koops Committee on regulating investigative powers in a digital environment. Cabinet: the Netherlands as Europe’s digital frontrunner; the government announced its ambitions. In the 48-page Dutch Digitalisation Strategy the word ‘data’ appears 147 times, so data-driven it is! Looking back on the second ABD symposium on 22 May: Robotization: the senior civil servant in a world full of bots, automation and artificial intelligence.
You can find the e-zine here. Ministerial letter to parliament on the fiscal treatment of startup financing. A nice special, repeated: Innovation in Supervision: new technology makes supervision more effective and proactive, but still, inspection remains human work. Ministry of VWS on LinkedIn: the House of Representatives adopts three motions on information exchange in healthcare. Ministry of VWS: answers to parliamentary questions on medicine delivery by drones. On Agenda Stad: information about the initiative ‘Zicht op Ondermijning’, strengthening and improving the preventive approach to subversive crime by using new data-analytics methods. Looking back on a meeting organized by the Rijk aan informatie programme on the future of information specialists within the central government, including the impact of robotization and datafication on the profession. The mini-symposium on robotization in the House of Representatives took place on Thursday afternoon, 14 June 2018; the chair’s speech is online. Data news outside the central government: Within government: from Nesta: Public Sector Use of Algorithmic Decision Making, 20 questions to answer before using algorithmic decision-making in a live environment. The UK published a Data Ethics Framework. The EU set up a High-Level Group on Artificial Intelligence, and there is an AI Alliance through which you can respond to the high-level group yourself; background article on the high-level group in Emerce. A new reform plan for the US government, with much attention to digitalization: Delivering Government Solutions in the 21st Century: Reform Plan and Reorganization Recommendations. On Reddit, an original approach: visualization of SF Police open data in Google Earth. From governing.com: Government’s Data-Accessibility Challenge. There’s more to transparency than just putting reams of information out there. It needs to be easy to understand and useful.
Data and AI in general: Industry association Dutch Data Center Association (DDA) presented its report “State of the Dutch Data Centers 2018 — Always On”, the largest annual study of the Dutch data-center sector. On Medium: What every leader should know about AI techniques. On Bloomberg: Amazon’s Clever Machines Are Moving From the Warehouse to Headquarters. On datafloq.nl: Artificial intelligence and the future of programming. From NOS: Artificial intelligence: Europe risks losing the battle for the future. On Forbes.com: The Brilliant Ways UPS Uses Artificial Intelligence, Machine Learning And Big Data. From technologyreview.com: This AI program could beat you in an argument, but it doesn’t know what it’s saying. From The Guardian: What is Google doing with AI? Chips with Everything podcast; Jordan Erica Webber chats with a panel of artificial-intelligence experts about what Sundar Pichai’s seven objectives could mean in practice. From the NY Times: Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So. Impact on the labour market: On Lawgeex.com: AI vs. Lawyers: In a landmark study, 20 experienced US-trained lawyers were pitted against the LawGeex Artificial Intelligence algorithm. The 40-page study details how AI has overtaken top lawyers for the first time in accurately spotting risks in everyday business contracts. From McKinsey: AI, automation, and the future of work: Ten things to solve for. From the NY Times: If the Robots Come for Our Jobs, What Should the Government Do? Some big ideas are starting to percolate. But less dramatic ones might work, too. From the Roosevelt Institute: Don’t Fear the Robots: Why Automation Doesn’t Mean the End of Work. From the FT: Auto bosses accused of failing to train workers for AI revolution. Blockchain: BZK: the use of blockchain technology in the election process; report by the Privacy & Identity Lab partnership on the use of blockchain technology in the election process.
From Stanford: Can Blockchain Be Used for Public Good? A new Stanford study looks beyond the hype to examine how decentralized transactions can solve social ills. From Computable: HBO Drechtsteden records diplomas on a blockchain. From Thenews.asia: Korean Government to invest 230 Billion Won in Blockchain Technology. Ethics & privacy: On the fairness of algorithms: on nature.com: Bias detectives: the researchers striving to make algorithms fair. As machine learning infiltrates society, scientists are trying to help ward off injustice. Cathy O’Neil was in the Netherlands for a conference, with this article in de Volkskrant: mathematician Cathy O’Neil warns about algorithms: ‘the rights of the individual are not protected’. In Vrij Nederland: Algorithms are not there to make your life better. In The Guardian: Rise of the machines: has technology evolved beyond our control? From the FT: A.I. is all very well. But can machines be taught common sense? In NRC: An all-knowing algorithm is fatal for democracy; it is a dangerous misconception that democratic governance can be built on digital traces, argues Fleur Jongepier. On Medium: The ethicist in the machine. Should machines be held to a higher standard than humans, at least when it comes to questions of data ethics and bias? Other ethics and privacy: In the FD, on the BKR: from public utility to barely supervised data broker. The Economist: The hounding of Greece’s former statistics chief is disturbing. The New York Police Department has quietly expanded its gang database under Mayor Bill de Blasio, targeting tens of thousands of young people of color for increased surveillance even in the absence of criminal conduct.
From TechCrunch: Europe’s top court takes a broad view of privacy responsibilities around platforms. From the Boston Globe: What Facebook can learn from academia about protecting privacy. From TheSun.co.uk, but no less chilling: LOGGED OFF: Google can predict when you will DIE with 95% accuracy using AI. On Theconversation.com: We don’t own data like we own a car, which is why we find data harder to protect. On NPO Radio 1: Your data is worth gold, and that is dangerous. Operational management & HR analytics: From HBR.org: Before Automating Your Company’s Processes, Find Ways to Improve Them. On Rendement.nl: A phone list on the shared drive: is that allowed under the GDPR? On Forbes.com: Robotic Process Automation And Artificial Intelligence In HR And Business Support — It’s Coming. On Medium.com: Human-Centered A.I. is the Future of Talent Management. On LinkedIn: The best HR & People Analytics articles of May 2018. For the nerds: On BBC: a fantastic pairing of data and maps: Maps reveal hidden truths of the world’s cities. On De Correspondent: How to predict the winner of the World Cup (and the rest of the future). On KDnuggets: The What, Where and How of Data for Data Science. On Datawrapper: How to create text annotations for line charts and scatterplots. The ASA’s Statement on p-Values: Context, Process, and Purpose. On Medium: Survivorship Bias in Database Systems. Preview of the book Fundamentals of Data Visualization. On towardsdatascience.com: The curse of “intuition” in Data Science. On insidebigdata.com: A Data Scientist’s Guide to Communicating Results. On eoda.de: Dear data scientists, how to ease your job. On towardsdatascience: What everyone needs to know about interpretability in machine learning. On towardsdatascience: Going Dutch, Part 2: Improving a Machine Learning Model Using Geographical Data. On Medium: Deep Learning: To Forget or Not Forget?
Data news inside and outside the Dutch central government, 26–06–2018
15
datanieuws-binnen-en-buiten-het-rijk-26-06-2018-1b244cd07564
2018-06-27
2018-06-27 06:53:21
https://medium.com/s/story/datanieuws-binnen-en-buiten-het-rijk-26-06-2018-1b244cd07564
false
1,398
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Betty Feenstra
Data driven, Head of Policy Information @ DG Public Administration, Ministry Internal Affairs and Kingdom Relations, Amsterdam, NL
6768e21844e9
bettyfeenstra
93
80
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-16
2018-07-16 12:37:54
2018-07-16
2018-07-16 12:45:03
8
false
en
2018-07-16
2018-07-16 12:45:10
39
1b25148d7413
4.737107
2
0
0
The future of the web depends on people like you. Can you help us fight for digital equality?
5
India backs net neutrality & the rise of digital authoritarianism The future of the web depends on people like you. Can you help us fight for digital equality? Digital authoritarianism vs. liberal democracy — AI offers governments tools to monitor, understand and control citizens like never before. This will lead to a new age of authoritarianism argues Nicholas Wright in Foreign Policy. At the same time, he says we’re likely to see a strong response in democratic countries that could actually strengthen liberal democracy. Big tech rules — Everyone is talking about how to tame the risks of AI technology. But who’s driving the regulatory agenda? Corinne Cath from the Oxford Internet Institute says big tech is leading the way, meaning regulation will be shaped in the interests of industry, not driven by the needs of society (New Statesman). Rank & profile — In The Guardian, Zoe Williams warns about the dangers of a world where algorithms divide us into categories which companies and governments use to decide how — or whether — to serve us. Why isn’t open data working for women in Africa? — A new Web Foundation report maps the state of open data for women across Africa. World Cup data design — Using a World Cup stats API, designer Zeh Fernandes has generated beautiful posters based on every game in the tournament. Expect another on Sunday. The all-seeing state — The New York Times wrote about China’s growing surveillance state, with over 200 million cameras, powered by sophisticated facial recognition and AI technology. Facebook fined — The UK’s Information Commissioner’s Office (ICO) handed Facebook a £500,000 fine for breaches of data protection relating to Cambridge Analytica. The charge is equivalent to 10 minutes worth of revenue for the company (Reuters). Jehovah’s witness GDPR — An EU court has ruled that door-knockers — even those from religious communities — are data controllers and therefore need to abide by GDPR when processing personal data (Gizmodo). 
Choking the net — Earlier this year, Tanzania adopted new rules forcing all publishers, including bloggers, to register for a license and pay a hefty fee. The law has forced bloggers offline and made it all but impossible for most people to create content (The Verge). India’s secret shutdowns — India’s increasing trend of internet shutdowns has been made all the worse by an absence of government engagement and transparency, says Apar Gupta, a trustee at Internet Freedom Foundation (The Wire). Uganda thinks twice about tax — In the wake of protests, the Ugandan government announced it is reviewing taxes recently levied on social media use and mobile money transactions (BBC). REPORT — A new study from the Broadband Commission, focused on Cambodia, Rwanda, Senegal and Vanuatu, details how high-speed internet promotes economic development (Biztech Africa). YouTube’s ‘real news’ push — In response to criticism that it’s a breeding ground for conspiracy theories, YouTube announced a series of changes to its platform and algorithm designed to promote authoritative news. The company is also giving $25 million to news outlets to expand their video capacity (Wired). Malaysia drops law — Malaysia’s new government has said it will drop the country’s ‘fake news’ law which, in reality, was likely to do more to curb free speech than false reporting (TechDirt). Twitter purge — Stepping up its campaign against shady accounts, Twitter is deleting profiles at a faster clip than ever. Don’t be stung if you’re dropping followers, they’re probably fake (Washington Post). The world’s strongest net neutrality rules — India’s telecommunications regulator released its recommendations on net neutrality, endorsing strong protections (CNN). For more detail, see Nikhil Pahwa for MediaNama. Court nominee a net neutrality foe — The US President announced Brett Kavanaugh as his nominee for Supreme Court Justice.
Kavanaugh has previously argued that net neutrality violates first amendment rights to free speech (The Verge). Nanjira Sambuli, Web Foundation Advocacy Manager, was announced by UN Secretary-General António Guterres as a member of a new High-Level Panel on Digital Cooperation (Foreign Affairs, The East African). Nanjira was quoted in a piece in the Mail & Guardian about the social and cultural barriers to women using technology in Africa. Following last week’s Vanity Fair profile of our founder, Sir Tim Berners-Lee, HuffPo Germany wrote about how he’s working against web centralisation (German). Italian newspaper La Stampa also covered the profile (Italian). An Op-Ed in GQ Magazine discusses an open letter from Sir Tim, published by the Web Foundation. The piece argues that shifting from an ad-based model to a paid-for model would fix some of the internet’s challenges. At an Alliance for Affordable Internet coalition meeting, Nigeria Minister of Communications Dr. Adebayo Shittu pledged his commitment to collaborate with A4AI to continue to improve affordable internet access in the country (Oriental News Nigeria). A4AI Asia coordinator Basheerhamad Shadrach was quoted in a blog post about how we can bridge the digital gender gap, from the UN Human Rights Council. Speaking on a panel on advancing women’s rights in access to ICTs, he explains that technology tends to reflect biases and inequalities that exist offline (Session recording, Shadrach from 29:00). A4AI research on mobile data pricing was cited in articles about Uganda’s social media tax (Tic Mag, ABC News, ChimpReports). Global Voices cited the research in a piece arguing Uganda’s tax threatens to deepen the digital gender gap. Vincent Matinde also cited the A4AI data in a piece he wrote about Uganda’s tax, which he says is part of a growing trend across Africa, with governments using taxation and shutdowns to control access (IDG Connect). Enjoying this newsletter? Share it with a friend! 
They can sign up here. Donate to help us deliver digital equality.
India backs net neutrality & the rise of digital authoritarianism
2
india-backs-net-neutrality-the-rise-of-digital-authoritarianism-1b25148d7413
2018-07-19
2018-07-19 23:40:20
https://medium.com/s/story/india-backs-net-neutrality-the-rise-of-digital-authoritarianism-1b25148d7413
false
955
null
null
null
null
null
null
null
null
null
Net Neutrality
net-neutrality
Net Neutrality
4,176
The Web Foundation
“All of the people, all of the internet, all of the time”. Working for digital equality #ForEveryone. Founded by Sir Tim Berners-Lee, inventor of the web.
aabc4af7ffad
webfoundation
4,411
306
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-04
2018-09-04 05:40:24
2018-09-04
2018-09-04 06:10:16
6
false
zh-Hant
2018-09-04
2018-09-04 06:10:16
2
1b2543c0298a
1.036792
0
0
0
LINE@ accounts come in two main types; see the official explanation in the figure below
5
How do you build a LINE chatbot? LINE@ accounts come in two main types; see the official explanation in the figure below. Verified accounts require a review, so we simply choose a general account. Go to https://entry-at.line.me/, log in with your own LINE account, select “Start with a general account”, and fill in the basic information. After registration, go to https://admin-official.line.me/ to start managing the bot. Once inside the management page, follow the official beginner’s guide to build your own chatbot! #Keyword auto-reply messages: Click the Messages option in the figure above, then Keyword Reply Messages, then Add. Example 1: checking the weather in Kaohsiung. After saving, whenever the keyword “Kaohsiung weather” is entered, the LINE chatbot automatically sends the Central Weather Bureau’s Kaohsiung weather page. Example 2: checking the latest exchange rates. After saving, whenever the keyword “exchange rate” is entered, the LINE chatbot automatically sends the Bank of Taiwan’s exchange-rate page.
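The keyword-reply feature configured in the console amounts to a lookup table from exact keywords to canned replies. A minimal sketch of that behaviour in plain Python (this is not the LINE Messaging API; the URLs stand in for the two example pages):

```python
# Sketch of LINE@'s keyword auto-reply behaviour as a plain lookup table.
# The keywords mirror the two examples; the URLs are placeholder assumptions.

KEYWORD_REPLIES = {
    "Kaohsiung weather": "https://www.cwb.gov.tw/",  # Central Weather Bureau
    "exchange rate": "https://rate.bot.com.tw/",     # Bank of Taiwan rates
}

def reply_for(message):
    """Return the configured reply if the message matches a keyword, else None."""
    return KEYWORD_REPLIES.get(message.strip())

print(reply_for("exchange rate"))  # https://rate.bot.com.tw/
```

Like the console feature, this matches the whole message against a keyword; anything more flexible (substring or fuzzy matching) would be a different design.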
How to build a LINE chatbot?
0
如何建立line聊天機器人-1b2543c0298a
2018-09-04
2018-09-04 06:10:16
https://medium.com/s/story/如何建立line聊天機器人-1b2543c0298a
false
23
null
null
null
null
null
null
null
null
null
Line
line
Line
989
Yanwei Liu
null
dc182588576c
yanweiliu
8
30
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-19
2018-02-19 21:58:10
2018-02-20
2018-02-20 00:45:55
5
false
en
2018-02-20
2018-02-20 12:01:30
0
1b257cd2fc8a
4.116352
2
1
0
Visual Analytics - Assignment I
3
A walk in the park I’m taking a class on data visualization. The assignments revolve around a data set of visitor movements in a theme park. In this (simulated) theme park, each visitor’s position is captured once a second. When they check in at a location, this is captured, too. We work with the visitor data from one weekend — Friday to Sunday. There are over 26 million records in the data set, from 3500 (Friday) to 7500 (Sunday) unique visitors per day. There is also evidence of a mysterious crime in the data, but that’s for another blog post. The first assignment is to explore the data. Here are some of my insights and thoughts on the data… Duration of Visit Most visitors arrived before 10am and 75% of visitors stayed for at least 12 hours. The longest stay was just short of 15 hours (which is almost the maximum, given the opening hours, 8am to 11.30pm). The shortest stay was just 10 minutes long (what happened there?). The (simulated) theme park. Check-in locations are marked by numbers. Movements and Check-Ins The park has a number of locations at which visitors can “check in”. Many of these are rides (such as rollercoasters). An average visitor checks in at around 20 locations. It seems that on the day where the least visitors came to the park, Friday, the average number of check-ins was higher (23) than on Saturday and Sunday (17 and 18, respectively). There are a few visitors with just a single check-in — their entry into the park? — and two with 61 check-ins. The average number of unique locations a visitor checks in at is 12–15. Once they check-in at a location, visitors stay for around 7 minutes (Median). We can plot the accumulated time that visitors have spent in each 5x5 meter grid of the park (“movements”) and at each check-in location. Accumulated time spent on each 5x5 meter grid (square-root). Circles indicate check-in locations, diameter indicates time spent in the location. 
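The per-cell accumulation described above can be sketched in a few lines: with one position record per visitor per second, counting records per 5x5 meter cell is equivalent to summing seconds. The record layout (visitor id, x, y) and the demo data are assumptions, not the actual data set.

```python
# Sketch: accumulate time spent per 5x5 m grid cell from position data sampled
# once per second, so one record equals one second. Field layout is assumed.
from collections import Counter

CELL = 5  # grid cell size in meters

def seconds_per_cell(records):
    """records: iterable of (visitor_id, x, y) tuples, one per second."""
    counts = Counter()
    for _, x, y in records:
        counts[(int(x) // CELL, int(y) // CELL)] += 1
    return counts

# Invented demo records: three seconds in cell (0, 0), one in cell (1, 0).
demo = [("v1", 2, 3), ("v1", 4, 4), ("v1", 7, 3), ("v2", 2, 2)]
print(dict(seconds_per_cell(demo)))  # {(0, 0): 3, (1, 0): 1}
```

On the real 26-million-record set the same pass gives the accumulated-time grid behind the heatmap; a square-root transform of the counts, as in the figure, keeps the busiest cells from washing out everything else.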
Although the plot does not show whether there are more people in a given area, or people just stay longer, it’s obvious that not all areas are equally frequented. This may be because there is a foodcourt, a resting space, or a particularly fun-to-watch attraction. Visitors don’t necessarily spend more time in a spot because they want to. Congested areas are just harder to pass through. Looking at this I wonder… Do people move slower on days/times when there are more visitors? Do they move slower when there are many other people close by? Are there areas where movements are particularly slow? If so, why? It’s also clear that some check-in locations are more popular than others. Some interesting questions come to mind: Which locations are the most popular? At different times of the day? At different “phases” of the visit? For shorter / longer visits? Which rides are taken multiple times by the same person? Are they taken consecutively? Throughout the day? Do people take longer at a check-in location depending on the number of visitors in the park / the number of visitors that are close by? That is, can we see waiting times change? Are there locations that people skip more than others, i.e. where people walk by without checking in? Does the “skip rate” depend on the number of visitors queueing for it, the time of the day…? How many people does each location “service” per unit time? At what rate does each visitor check in to rides? Do people get slower/faster over time? Do some people have a higher frequency than others? Differences between days We can also plot the most frequented areas and check-in locations for each day and time. I don’t see any striking irregularities on any of the days. This is a bit unexpected, since there was a major incident in the park on one of the days, and I would expect this to be visible. I may be looking at the data in the wrong way. 
Top 30% of areas and locations (circles) per day (top to bottom: Friday, Saturday, Sunday) and two-hour interval (left to right: 8–10am, 10am–12pm, …). Paths If we plot some individual visitors’ paths through the park we can see that there are typical patterns as well as individual differences. The paths of 30 random visitors through the park. Shades of blue indicate the time of day, pink dots indicate check-ins. Again, some questions come to mind: What does the typical “flow” of visitors through the park look like? Do people just go through the rides in the order they encounter them or do they take more complex paths? Are there particularly common sequences? Does it make sense to model check-in sequences as Markov chains, or check-in frequencies as Poisson distributed, so we can identify unusual behaviour? Are there peak times throughout the day? At particular rides? Can we identify people who move in groups? Does group size affect behaviour (number of rides taken, duration of pauses, length of stay)? If groups split, why, for how long, and where do they reunite? Are there clusters regarding which areas of the park people go to? E.g. rollercoaster maniacs, water freaks, foodcourt chillers, families hanging out in kiddie-land all day…? Can we see people resting? When and where do visitors rest? This is a simulated data set, so not all of these questions may have interesting answers, but it will still be fun to investigate them further. I’m running out of time, so I will have to leave them open for now.
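The Markov-chain idea from the question list can be sketched like this: estimate a first-order transition matrix from the observed check-in sequences, then score an individual visitor’s sequence by its log-likelihood; unusually low scores would flag atypical behaviour. All location names and sequences below are invented for illustration.

```python
# Sketch: first-order Markov model over check-in sequences.
# fit_transitions estimates P(next | current) from observed sequences;
# log_likelihood scores a sequence under that model (low = unusual).
# Location names and training sequences are invented for illustration.
from collections import defaultdict, Counter
import math

def fit_transitions(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def log_likelihood(seq, probs, floor=1e-6):
    """Sum of log transition probabilities; unseen transitions get `floor`."""
    return sum(math.log(probs.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

park = [["gate", "coaster", "food", "coaster"],
        ["gate", "coaster", "food", "exit"]]
P = fit_transitions(park)
print(P["gate"]["coaster"])  # 1.0
```

On the real data one would fit per-day or per-segment models and look at the tail of the per-visitor likelihood distribution; a Poisson model of check-in counts could complement this for frequency rather than order.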
A walk in the park
2
a-walk-in-the-park-1b257cd2fc8a
2018-02-21
2018-02-21 16:21:23
https://medium.com/s/story/a-walk-in-the-park-1b257cd2fc8a
false
870
null
null
null
null
null
null
null
null
null
Data Visualization
data-visualization
Data Visualization
11,755
Tobias Hotzenplotz
null
c2f5f3ca422f
HerrHotzenplotz
3
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-30
2018-04-30 06:05:41
2018-04-30
2018-04-30 06:21:41
1
false
en
2018-04-30
2018-04-30 06:21:41
4
1b25d606c119
2.562264
3
0
0
Chatbots are the most popular application of Artificial Intelligence. While some may say that ‘chatbots are nothing but hype’, other could…
5
Are Chatbots Challenging the Existence of Mobile Apps? Image Courtesy: Callr Chatbots are the most popular application of Artificial Intelligence. While some may say that ‘chatbots are nothing but hype’, others could be of the view that ‘applications are going to die because of chatbots’. Simply put, chatbots are an alternative to empty human conversations that operate on Artificial Intelligence algorithms. Apps: Surprising Statistics & Facts Mobile apps took over websites owing to the convenience and flexibility they offer to users. Before comparing whether chatbots are challenging the existence of apps, let’s analyze the performance of apps: As per the report of App Annie, the average user makes use of 9 to 10 apps on a daily basis out of 35 or 40 downloaded. As per the report by Sensor Tower, the top 1% of publishers that produce paid apps (or apps with in-app purchases) collect 94% of revenue from the App Stores. Apple’s App Store claims to have over 100 billion application downloads. However, the major challenge faced by developers is that small apps usually go unnoticed, and it has become quite competitive to get noticed in the App Stores. Mobile app development services providers are thinking of innovative ways to build unique apps and get them featured in the store. The Future is Chatbots Chatbots are gaining ground on mobile applications and emerging as the future trend. Chatbots are thought of as the next-gen mobile apps, with voice commands as the new UI. To give a recent example of how chatbots are taking over the usefulness of mobile apps: it is now possible to book a taxi or share a ride by sending the address in the messenger. No need to visit the app and book it manually. Advancements in AI have enabled chatbots to proliferate in the market. Voice platforms are redefining software and mobile applications. Chatbots are the future of mobile apps, as conversation is the new-age UX. 
Chatbots are good at interacting with end users, yielding task-specific, resourceful inputs and delivering good customer satisfaction. With a distinctive interface and powerful voice commands, chatbots are programmed to run as messenger apps while possessing functionality just like other utility-based mobile apps. The functional standpoint of chatbots is focused on taking voice and text input from the user, processing it to analyze its context, and generating an appropriate response. Companies are focusing on building human-like chatbots that generate friendly replies. Chatbots operate on the following technologies: Natural Language Processing: This ensures that a chatbot ingests what is said to it and breaks it down to understand its inherent meaning. Based on this understanding, the chatbot determines an appropriate action and generates a response in the language that the user understands. Natural Language Understanding: This is a narrower subfield of Natural Language Processing that handles unstructured inputs and converts them into a structured format for the machine to understand and act upon. Natural Language Generation: This is the process that converts machine language and structured data into understandable text. With the evolution of chatbots, users are expecting the arrival of new platforms focused on automating elements such as research, discovery, monitoring, testing and security. How to Make Your Chatbot Interesting to Humans? On a Concluding Note… Mobile applications are still in use, but chatbots have started overtaking them. Changes do not happen overnight! It may take some time for chatbots to replace apps completely, but this will surely happen. Businesses are coming up with a set of expectations about how a bot must behave, and developers are finding unique ways to build chatbots that are as human-like as possible. 
Chatbots are expected to contribute significantly to the customer service segment and will soon be an integral part of the enterprise software ecosystem.
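The input → understanding → response pipeline described above can be sketched with a toy bot. Real chatbots use trained NLU models; the keyword patterns, intents, and responses below are invented for the example and only illustrate the shape of the pipeline.

```python
import re

# Toy "NLU" step: regular expressions that map free text to an intent label.
# A production bot would use a trained model instead of keyword patterns.
INTENTS = {
    "book_ride": re.compile(r"\b(book|order|get)\b.*\b(taxi|ride|cab)\b", re.I),
    "greeting": re.compile(r"\b(hi|hello|hey)\b", re.I),
}

# Toy "NLG" step: a canned response per intent.
RESPONSES = {
    "book_ride": "Sure - where should the taxi pick you up?",
    "greeting": "Hello! How can I help you today?",
    None: "Sorry, I didn't understand that.",
}

def understand(text):
    """NLU: turn unstructured text into a structured intent label."""
    for intent, pattern in INTENTS.items():
        if pattern.search(text):
            return intent
    return None

def respond(text):
    """Full pipeline: understand the input, then generate a reply."""
    return RESPONSES[understand(text)]

print(respond("Hey, can you book a taxi to the airport?"))
# → Sure - where should the taxi pick you up?
```

This also makes the taxi example from the article concrete: the bot never shows an app screen, it just extracts the intent from the message and acts on it.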
Are Chatbots Challenging the Existence of Mobile Apps?
61
are-chatbots-challenging-the-existence-of-mobile-apps-1b25d606c119
2018-05-14
2018-05-14 09:53:56
https://medium.com/s/story/are-chatbots-challenging-the-existence-of-mobile-apps-1b25d606c119
false
626
null
null
null
null
null
null
null
null
null
Chatbots
chatbots
Chatbots
15,820
The Apps Maker
An expert at search engine optimization and technical enthusiast, Marie frequently writes on revolutionary technologies that shape the future of industries.
43760b436858
theappsmaker
180
1,328
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-17
2018-07-17 06:56:09
2018-07-17
2018-07-17 08:55:22
6
false
en
2018-08-28
2018-08-28 10:02:27
4
1b2686099260
3.251887
7
0
0
Now, a day’s people are so busy with their work that nobody wants to spend extra time in texting. Google translation is a great tool where…
5
How to convert your speech voice to text data speech to text Nowadays, people are so busy with their work that nobody wants to spend extra time texting. Google Translate is a great tool where you can translate text by voice. When we use voice as a medium to translate to text, it uses the same technology, called speech-to-text conversion. First we’ll see how it works and converts speech to text data. The first step in speech recognition is obvious — we need to feed sound waves into a computer. Sound is transmitted as waves, but the computer knows only numbers, so first we need to convert the waves to numbers. Sound waves are one-dimensional: at every moment in time, they have a single value based on the height of the wave. Let’s zoom in on one tiny part of the sound wave and take a look: To turn this sound wave into numbers, we just record the height of the wave at equally spaced points: Sampling wave This is sampling. It takes thousands of readings per second, each recording a number that represents the height of the sound wave at that point in time. Let’s sample our “Hello” sound wave 16,000 times per second. Here are the first 100 samples: Each number represents the amplitude of the sound wave at 1/16,000th-of-a-second intervals. Recognizing Characters from Short Sounds Now that we have our audio in a format that’s easy to process, we will feed it into a deep neural network. The input to the neural network will be 20-millisecond audio chunks. For each little audio slice, it will try to figure out the letter that corresponds to the sound currently being spoken. We’ll use a recurrent neural network — that is, a neural network with a memory that influences future predictions. That’s because each letter it predicts should affect the likelihood of the next letter it will predict too. For example, if we have said “HEL” so far, it’s very likely we will say “LO” next to finish out the word “Hello”. 
It’s much less likely that we will say something unpronounceable next, like “XYZ”. So having that memory of previous predictions helps the neural network make more accurate predictions going forward. Wait a second! You might be thinking “But what if someone says ‘Hullo’? It’s a valid word. Maybe ‘Hello’ is the wrong transcription!” Try it out! If your phone is set to American English, try to get your phone’s digital assistant to recognize the word “Hullo.” You can’t! It refuses! It will always understand it as “Hello.” Not recognizing “Hullo” is reasonable behavior, but sometimes you’ll find annoying cases where your phone just refuses to understand something valid you are saying. That’s why these speech recognition models are always being retrained with more data to fix these edge cases. flow of speech to text converter For a company like Google or Amazon, hundreds of thousands of hours of spoken audio recorded in real-life situations is gold. That’s the single biggest thing that separates their world-class speech recognition systems from your hobby system. That’s the whole point of products like Google Now! So if you are looking for a start-up idea, I wouldn’t recommend trying to build your own speech recognition system to compete with the Google Cloud Speech-to-Text API. Instead, figure out a way to get people to give you recordings of themselves talking for hours and try with Deep Speech. The data can be your product instead. In the next post, I will write about how to use the Google Cloud Speech API to convert speech to text. Happy learning! If you enjoyed this article, feel free to hit that clap button 👏 to help others find it.
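The sampling and chunking steps described above can be sketched in a few lines of NumPy. The 440 Hz tone below is a stand-in for a real recording; the point is just the two numbers from the article: 16,000 samples per second, cut into 20 ms chunks.

```python
import numpy as np

SAMPLE_RATE = 16_000  # samples per second, as in the article
CHUNK_MS = 20         # window size fed to the neural network

# A stand-in for a real recording: one second of a 440 Hz tone,
# sampled by reading the wave's height at equally spaced points.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
wave = np.sin(2 * np.pi * 440 * t)

# 20 ms at 16 kHz = 320 samples per chunk.
samples_per_chunk = SAMPLE_RATE * CHUNK_MS // 1000
wave = wave[: len(wave) - len(wave) % samples_per_chunk]  # drop the ragged tail
chunks = wave.reshape(-1, samples_per_chunk)

print(chunks.shape)  # one second of audio -> (50, 320)
```

Each row of `chunks` is one 20 ms slice; a real system would turn each slice into spectral features before feeding it to the recurrent network, but the framing step is exactly this reshape.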
How to convert your speech voice to text data
32
how-to-convert-your-speech-voice-to-text-data-1b2686099260
2018-08-28
2018-08-28 10:02:27
https://medium.com/s/story/how-to-convert-your-speech-voice-to-text-data-1b2686099260
false
610
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
BalA VenkatesH
I have a passion for understanding things at a fundamental level and Sharing trending technology concepts, ideas, and code.
7d2dbe2d619d
venkateshpnk22
90
367
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-10
2018-08-10 06:35:28
2018-08-10
2018-08-10 06:42:28
0
false
en
2018-08-10
2018-08-10 06:43:19
1
1b26a266346
2.826415
2
0
1
Read part 1 here:
3
Semantic Correspondence Part 2 Read part 1 here: Semantic correspondence via PowerNet expansion (medium.com). From now on, let’s call Semantic Correspondence via PowerNet Expansion simply SemCo. The first part did not address how to use semantics and a memory module to control human-like behaviors like one-shot learning or any of our mental capabilities. This part will address them. By the way, SemCo makes sense: once trained, when deployed, it can be used directly. E.g. babies’ eyes and color vision, ears, etc. These raw inputs and direct muscle memory don’t require abstract mental faculty, and they work right out of the box, because the design was evolved and reproduced using our DNA. For example, we can see red and we can hear “red” once we’re born. But of course the first time we learn the concept “red”, when it is first noticed, the neural module must draw the boundary and create an identification to say “okay, the thing in attention is red”. Then we have created a category for the concept “red” using our born-ready modules. The same goes for sound, when someone tells us that the red thing seen is called “red”. When this happens, we also internally link the visual and audio categories as different aspects of the same thing. * Essentially, SemCo serves as the foundation for forming intelligence. It provides the tool to start forming categories, and category formation also requires some born-ready modules such as memory, spatial awareness, emotions, etc. These non-IO modules can be trained with a lot of data, like how we evolved ours, or be coded directly if we know how to design them. Together they allow full category creation and a set of computational functions which is Turing complete (that’s why we can have general intelligence). Then, SemCo is complete: first we form base concepts that can be easily categorized, such as “apples”, “red”, “water”. 
As we collect more categories, the pressure for informational efficiency, which is a consequence of our memory size and the need for communicative efficiency, will force us to start creating compositional categories, e.g. “fruits”. As the snowball keeps rolling, we create more higher-level, abstract categories, such as numbers and counting, and eventually hit the critical mass of categories formed. These categories form the entirety of human culture and knowledge. We did not learn to count overnight. In fact, every bit of our knowledge is the product of some painstaking journey of accidents and discovery, one building on another that came before it. But we can learn them quite rapidly in comparison, say in school. So, I expect that SemCo, when constructed with the right modules and capacities, and taught with the right curriculum like human children are, can develop intelligence as an emergent consequence of the categories formed due to pressure for informational efficiency and survival. Notice how humans evolved through a long chain of selection from single-cell organisms: first the basic modules for metabolism and locomotion, then more advanced organs like eyes, ears, nose, skin, limbs, etc., and later mental capabilities, cultures, knowledge, and general intelligence. SemCo will follow a quite similar path, although we can skip the slow process of evolution using some classic manual design. We already have cameras, mics, speakers, robotic arms, etc. which have the potential to create categories like we do, e.g. cameras can see full colors. The first step of SemCo is to evolve, train or code those basic born-ready modules, like vision, movement, spatial awareness, the ability to see red, etc. I.e. we must train the modules to the stage where they can ground categories, in separate or integrated channels. 
Then we are ready for the second stage of SemCo: using these modules to bootstrap and learn more categories by exposing the agent to rich and properly designed curricula, and subjecting it to some efficiency or survival pressure. This stage is cumulative, where we try to add modules or curricula for whatever mental capacity it is missing, e.g. the ability to count, do arithmetic, then algebra; or the ability to memorize things, to utilize memory, the ability to generalize, then to meta-generalize, etc. The curricula serve to transfer as much of our culture and knowledge base to the agent as possible, to the point where it can start learning autonomously and start snowballing its own categories. Eventually, we will hit the categorical critical mass and get general intelligence.
Semantic Correspondence Part 2
3
semantic-correspondence-part-2-1b26a266346
2018-08-10
2018-08-10 06:43:19
https://medium.com/s/story/semantic-correspondence-part-2-1b26a266346
false
749
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Wah Loon Keng
Deep Reinforcement Learning. Semantics. Rock Climbing. https://github.com/kengz
7f13442b032e
kengz
69
35
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-06
2018-07-06 12:47:19
2018-07-06
2018-07-06 13:13:21
4
false
en
2018-07-06
2018-07-06 13:13:21
7
1b26d35b6e6a
3.096226
0
0
0
In today’s competitive world most marketer understand the need for tools to measure their efforts, but as social media market is growing at…
5
Disruptive social media analytics tools for thriving businesses. In today’s competitive world, most marketers understand the need for tools to measure their efforts, but as the social media market grows at a rapid pace, so does the number of tools available to analyse it. Social media analytics, as the term itself suggests, is the practice of collecting information from different social media sources; the main part is analysing this gathered information to make the most informed and crucial decisions. There are different types of social media analytics tools which help businesses understand their customers’ needs and wants so they can produce accordingly. · Sprout Social — With Sprout’s social media analytics, you can measure performance across Facebook, Twitter, Instagram and LinkedIn, all within a single platform. Having all your analytics in one place makes it easier to track and compare your efforts across multiple profiles and platforms. · Snaplytics — Snaplytics gives you data on the performance of your snaps, audience growth and more. Another unique feature of Snaplytics is that it gives you insights on your Instagram Stories as well. · Iconosquare — Iconosquare is a social media analytics tool specifically for Instagram. One of the standout features that separates Iconosquare from other tools is that, in addition to analysing your normal photos and videos, it also gives you insights into Instagram Stories. With higher-level plans, you can get influencer analytics as well. · Buzzsumo — Buzzsumo is different from the other social media analytics tools on our list. Instead of analysing your brand’s individual social media performance, Buzzsumo looks at how content from your website performs on social media. For instance, if you want to see how many shares your latest blog post received on Facebook and Twitter, Buzzsumo can provide you with that data. 
· Tailwind — While Instagram and Snapchat are currently the most talked-about players in the visual social media landscape, Pinterest is still very active. And just like with any other social network, you need to measure your performance. Tailwind is arguably the most popular third-party Pinterest analytics tool. Through Tailwind, you can track trends in followers and engagement, analyse your audience, and they even provide some analytics at certain plan levels. · Google Analytics — While it’s not technically a “social media analytics tool,” Google Analytics (GA) is one of the best ways to track social media campaigns and even helps you measure social ROI. You likely already have GA set up on your website to monitor and analyse your traffic. But did you know you can access and create reports specifically for social media tracking? For instance, you can see how much traffic comes to your website from each social network, or use UTM parameters to track specific social media campaigns. · ShortStack — Have you ever run a social media contest before? Did you stop at picking a winner, or did you take the time to analyse how the contest went? ShortStack is a social media contest app that also provides performance analytics. Social media contests can be great for growing your following quickly, but if you’re not careful you could wind up just giving away free stuff with nothing to show for it. By analysing your contest’s performance with a tool like ShortStack, you’ll be able to see engagement metrics and identify which types of contests work best with your audience. · TapInfluence — With influencer marketing becoming one of the most commonly used social media tactics, there’s a growing need for analytics tools to measure your efforts. TapInfluence is a complete influencer marketing platform that researches potential influencers you want to work with, as well as tracks campaign performance. 
For brands that only work with influencers every now and then, a tool this robust might not be necessary.
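The UTM parameters mentioned under Google Analytics are just query-string fields (`utm_source`, `utm_medium`, `utm_campaign`) appended to a link so GA can attribute the resulting traffic. A small sketch of how a tagged link is built; the URL and campaign names here are made up:

```python
from urllib.parse import urlencode

def utm_url(base_url, source, medium, campaign):
    """Append GA campaign-tracking parameters to a link."""
    params = {
        "utm_source": source,      # e.g. the social network the link is posted on
        "utm_medium": medium,      # e.g. "social"
        "utm_campaign": campaign,  # your own campaign name
    }
    return f"{base_url}?{urlencode(params)}"

link = utm_url("https://example.com/blog-post", "twitter", "social", "summer_launch")
print(link)
# → https://example.com/blog-post?utm_source=twitter&utm_medium=social&utm_campaign=summer_launch
```

Posting the same page with a different `utm_source` per network is what lets GA break the traffic report down by campaign and platform.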
Disruptive social media analytics tool for the thriving businesses.
0
disruptive-social-media-analytics-tool-for-the-thriving-businesses-1b26d35b6e6a
2018-07-06
2018-07-06 13:13:21
https://medium.com/s/story/disruptive-social-media-analytics-tool-for-the-thriving-businesses-1b26d35b6e6a
false
635
null
null
null
null
null
null
null
null
null
Social Media
social-media
Social Media
143,805
Rishabh Bora
null
e242c5d6107c
daphnis.rishabh
6
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-16
2018-06-16 05:49:21
2018-06-16
2018-06-16 06:35:03
1
false
en
2018-08-14
2018-08-14 19:34:03
5
1b28e1c83d45
2.415094
11
0
1
From zero to creating a model to predict bike sharing usage
4
Deep Learning ND: First Project From zero to creating a model to predict bike sharing usage I am currently taking part in the Deep Learning Nanodegree at Udacity. This program goes through 5 different projects in order to become familiar with neural networks and apply them in a practical way. I tried to learn Machine Learning in the past, but never fully succeeded. I started the Introduction to Machine Learning from Coursera but quit after week 8 because it was impossible to keep following. I also did some small projects using TFLearn (a library that helps using TensorFlow), but always without a clear idea of what I was doing. Before doing the Nanodegree, I also did the free Introduction to Data Analysis course from Udacity, which I think is essential for anyone unfamiliar with Numpy and Pandas (and Python in general), since the program uses them a lot. The program recommends you follow a tight calendar that they provide, and you will also have a final deadline. I started the program on June 12 and my deadline is October 20. For the first project, Udacity gives you two weeks, but I was able to complete it in three days. That was possible thanks to my previous background: both the Data Analysis and the Machine Learning courses I did helped me a lot with the first lessons, which felt more like a refresher than learning something new. The content from Udacity is way nicer than any other introductory course I did on neural networks. I can see how this program has improved a lot in the years since it was launched, because it received lots of criticism regarding how hard it was for a beginner to follow. However, the content is easy to follow, very visual, and with lots of practical examples. If you haven’t previously played with Numpy and never read anything about neural networks before, it will definitely take you longer than three days to complete the first project. But that’s OK, because you will have two weeks. Hi! This is Miquel, the author of the post. 
I hope you like what you are reading! If you are looking for a freelance Android developer, look no further! Check: http://beltran.work/with-me and I’ll be happy to chat with you! Project: Your First Neural Network In the first project, you will build a neural network that predicts the hourly usage of a bike sharing service. You will be given two years of data and a project template that implements everything except the feed-forward and back-propagation algorithms, as well as a series of hyperparameters for you to tune. The project requires less work than I thought. Most of the code is already provided, and everything you need to apply is already explained in the previous lessons. My advice: keep all the code examples you write during the first weeks at hand, because you will be using them. Finally, you can visualise your solution and see how your prediction performs: My prediction works well on weekdays, but not so well on vacation days. The project submission process will run a series of tests to auto-evaluate you and generate a file that you will have to submit. Less than an hour after submission, I got a review in my inbox telling me that I had passed. My next project deadline is on July 28, so I have plenty of time to look at the optional content, learn about Keras and sentiment analysis, and implement a dog breed classifier for the second project. See you then!
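The two pieces the project template leaves blank, feed-forward and back-propagation, boil down to a few matrix operations. This is not the Udacity template (their network, data, and hyperparameters differ); it is a minimal sketch of the same idea on synthetic regression data: one sigmoid hidden layer, a linear output, and manual gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 samples, 3 made-up features
y = X @ np.array([[1.0], [-2.0], [0.5]])   # synthetic regression target

W1 = rng.normal(scale=0.1, size=(3, 8))    # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))    # hidden -> output weights
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    # Feed-forward pass
    hidden = sigmoid(X @ W1)
    output = hidden @ W2                   # linear output node for regression
    error = output - y
    losses.append(float(np.mean(error ** 2)))

    # Back-propagation: chain rule, layer by layer
    grad_W2 = hidden.T @ error / len(X)
    grad_hidden = (error @ W2.T) * hidden * (1 - hidden)  # sigmoid derivative
    grad_W1 = X.T @ grad_hidden / len(X)

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The project's network has the same shape (sigmoid hidden layer, linear output for a regression target), so once these two passes click, the template mostly fills itself in.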
Deep Learning ND: First Project
41
deep-learning-nd-first-project-1b28e1c83d45
2018-08-14
2018-08-14 19:34:03
https://medium.com/s/story/deep-learning-nd-first-project-1b28e1c83d45
false
587
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Miquel Beltran
Freelance Android Developer http://beltran.work
fe4cbfc9fac2
Miqubel
1,836
35
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-03
2017-10-03 14:41:57
2017-10-03
2017-10-03 14:52:12
0
false
en
2017-10-09
2017-10-09 14:01:26
8
1b2d18dd0b67
2.49434
0
0
0
By Nate Nichols, Product Architect at Narrative Science
5
Natural Language Processing vs. Natural Language Generation By Nate Nichols, Product Architect at Narrative Science The field of Artificial Intelligence (AI) is equal parts exciting and bewildering right now. Major advances are being made in a variety of areas, but following along is difficult because there are so many technical terms and acronyms. And don’t even get me started on how many of the terms are similar. For instance, there’s Deep Blue, Deep Learning, Deep Forest, Deep Voice, and DeepStack. Anyone would be lost. Given the nature of our business, we often encounter confusion between Natural Language Processing (NLP), Natural Language Generation (NLG), and Natural Language Understanding (NLU). Let’s Start with NLP and NLG Setting aside NLU for the moment, we can draw a really simple distinction: Natural Language Processing (NLP) is what happens when computers read language. NLP processes turn text into structured data. Natural Language Generation (NLG) is what happens when computers write language. NLG processes turn structured data into text. Until the last few years, NLP has been the more dynamic research area; the focus was on getting more data into the computer (e.g. teaching the machine how to “read” an email and determine if it’s likely to be spam). The problem has now flipped. Our computers have access to vast repositories of data, and the problem is trying to get actual value and insights back out from all that data. (This, of course, is the exact business problem that Quill, our Advanced NLG platform, helps solve.) This distinction doesn’t mean that NLP and NLG are completely unrelated. Reading and writing are separate but related challenges for computers, just like for humans. For instance, we’ve had projects in the past that used NLP to generate structured data from text (e.g. assigning a topic to a tweet), and then used NLG to write text from that structured data (e.g. 
“You tend to tweet about politics…”). We also use a variety of NLP techniques internally to help test and tune our NLG engine. It’s worth mentioning here that the private sector and academia have slightly different definitions of NLP. To most folks, NLP is “Computers reading language.” But in academia, the “Processing” part of NLP is taken more seriously, and NLP basically means “Computers doing things with language.” In academia, then, NLG is a subfield of NLP, not its inverse. At Narrative Science, our view is that NLG is a separate category of its own within the AI ecosystem. What about NLU? I mentioned NLU earlier; NLU stands for Natural Language Understanding, and is a specific type of NLP. The “reading” aspect of NLP is broad and encompasses a variety of applications, including things like: Simple profanity filters (e.g. does this forum post contain any profanity?) Sentiment detection (e.g. is this a positive or negative review?) Topic classification (e.g. what is this tweet or email about?) Entity detection (e.g. what locations are referenced in this text message?) A more advanced application of NLP is NLU, i.e. genuinely understanding what the text says. NLU is used by conversational agents including Alexa, Siri and Google Assistant. Each of these agents is able to digest spoken text like, “What’s the weather forecast tomorrow?” and then understand it as a request for the forecasted weather in the current location one day hence. (Of course, if you’ve spent much time with these types of bots, you’ll understand that there is still a significant amount of progress to be made in Natural Language Understanding.) The Breakdown Like so many things in technology, NLP, NLG, and NLU are pretty straightforward concepts dressed up in jargon and acronyms that make them seem more complex than they really are. 
To reiterate: NLP is computers reading language; NLG is computers writing language; NLU is computers understanding language. I hope this helps clarify the differences between NLP, NLG, and NLU! Our goal is to educate AI newcomers on these terms, as we believe that widespread adoption is best enabled by widespread understanding.
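The inverse relationship between NLP and NLG can be made concrete with a toy round trip: a keyword-based sentiment step (text in, structured data out) followed by a template step (structured data in, text out). The word lists and templates are invented for the example; real systems like the article's own projects use far richer models on both sides.

```python
# Toy NLP step: read text, produce structured data (a sentiment label).
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "awful", "hate"}

def nlp_sentiment(text):
    """NLP direction: unstructured text -> structured data."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"sentiment": label}

# Toy NLG step: take the structured data, write text from it.
def nlg_summary(data):
    """NLG direction: structured data -> text."""
    return f"This review is {data['sentiment']}."

structured = nlp_sentiment("I love this product, it is excellent")
print(structured)            # the structured data the NLP step extracted
print(nlg_summary(structured))  # → This review is positive.
```

Chaining the two functions is exactly the tweet example from the article: classify the text first, then generate a sentence from the classification.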
Natural Language Processing vs. Natural Language Generation
0
natural-language-processing-vs-natural-language-generation-1b2d18dd0b67
2018-05-03
2018-05-03 05:46:02
https://medium.com/s/story/natural-language-processing-vs-natural-language-generation-1b2d18dd0b67
false
661
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Narrative Science
We help enterprises maximize the impact of their data with automated, insightful narratives generated by Natural Language Generation.
44cc3f3b5131
narrativesci
19
13
20,181,104
null
null
null
null
null
null
0
null
0
1de9a8b799ad
2018-04-26
2018-04-26 17:53:16
2018-04-26
2018-04-26 17:54:19
1
false
en
2018-04-26
2018-04-26 17:54:19
2
1b2da26e7206
3.977358
6
0
0
by Lida Tunesi
4
Shivani Agarwal: Looking at Machine Learning from All Angles by Lida Tunesi Shivani Agarwal It is now an everyday occurrence to see customized recommendations while shopping online, and uncannily personalized sidebar ads while browsing a website. Both of these marketing tools are powered by machine learning, a field of study that extends to many other parts of society as well. Machine learning now powers advancements in speech recognition, drug development, detection of fraudulent transactions, self-driving cars, and myriad other applications. “Today, machine learning has become a force of its own,” says Shivani Agarwal, Rachleff Family Associate Professor in Computer and Information Science. “Almost every application that requires discovering patterns in data or building models from data makes use of machine learning methods.” Agarwal studies many sides of the discipline, from exploring the fundamental strengths and weaknesses of machine learning methods, to discovering its connections to other disciplines, like economics and psychology. The goal of machine learning is for a computer to “learn” to perform a task without specific instructions. Programmers can create algorithms that take in data sets and, using that information, figure out how to find patterns or pick out certain traits. For example, an algorithm can learn to identify spam emails by analyzing examples of both spam and non-spam messages. The algorithm’s ability to correctly distinguish junk mail keeps improving as it looks at more and more examples. Research in machine learning largely arose from the desire to have computers learn to solve increasingly complex problems in computer science, and deal with bigger and bigger data sets in statistics. Nowadays, it is the “engine” behind modern data science, Agarwal says. 
In other words, tools from this field are useful for any discipline that wants to turn massive piles of data into meaningful information — including disciplines that might not seem related to computer science, such as biology. Even in the late 1990s, Agarwal says, scientists had already bridged machine learning to the life sciences. Researchers used machine learning methods to analyze newly available genomic data in order to, for instance, identify genes involved in certain diseases or find patterns in gene regulation. Since then, the connection to biology has only grown stronger. “Today, most life sciences laboratories produce vast amounts of data that simply cannot be analyzed by hand or eye,” Agarwal says, “and machine learning methods are increasingly becoming a central part of their toolbox.” Engaging in applications of machine learning, such as with the life sciences, is another theme of Agarwal’s research. “We collaborate with scientists and practitioners in other disciplines, and help them identify or develop machine learning methods that can be used to solve the problems they care about,” she says. For instance, in a joint effort with startup Mitra Biotech, Agarwal used machine learning methods to predict how patients would respond to a certain anti-cancer drug. The team’s results turned out to be more accurate than current biomarker-based methods. This was good news for researchers at Mitra, but the project was valuable for Agarwal as well. “It is important to be able to test how well our methods perform on real-life, human problems,” Agarwal says. “This collaboration both helped to solve the problem faced by my life-scientist collaborators, and helped to validate the machine learning methods we had developed.” Experiments in the life sciences can also bring up new problems for machine learning to attempt to solve, Agarwal says. These types of challenges help to push the field forward. 
“Over the years, many new machine learning methods have been developed in order to solve a data-based problem in the life sciences for which no standard method was applicable,” Agarwal says. In her research at Penn, Agarwal also meshes machine learning with a host of other academic fields. Her group recently brought together ideas from theoretical computer science, spectral graph theory, operations research and statistics to study pairwise comparisons, a type of choice made in recommender systems and marketing. The group hopes to expand on their results to study more types of machine learning choices. Choice data, Agarwal says, is an emerging topic that sits in the overlap of machine learning and econometrics. Some of Agarwal’s other work gets back to the fundamentals of the field. Researchers can evaluate how “good” a machine learning model is through various kinds of performance measures. If the performance measures are complex, extra thought and care must go into designing the model. Agarwal’s group has been developing design principles to help with this process, and plans to continue building on their work in the coming years. They hope to make the tools that result from the work more easily available to machine learning users. Penn’s effort to become a leader in both the teaching and research of machine learning was, for Agarwal, a big part of the University’s appeal. “There is a huge need for centers of excellence in machine learning across the country,” Agarwal says, “and I believe Penn is well-positioned to play a major role in this direction.” As part of this endeavor, Agarwal co-directs Penn Research in Machine Learning, a joint effort between Penn Engineering and Wharton to bring together the University’s large and diverse machine learning community. There were also reasons outside of academics to come to Penn. “I am fortunate to have a very supportive set of colleagues here, as well as terrific support staff and students,” Agarwal says. 
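Pairwise-comparison data of the sort described above is classically modeled with the Bradley-Terry model, which assigns each item a positive score so that item i beats item j with probability p_i / (p_i + p_j). The sketch below fits it with the standard fixed-point (minorization-maximization) iteration; the win-count matrix is invented for illustration, and this is a generic textbook method, not the specific approach of Agarwal's group.

```python
import numpy as np

# Hypothetical win counts: wins[i][j] = times item i was preferred over item j.
wins = np.array([[0, 8, 7],
                 [2, 0, 6],
                 [3, 4, 0]], dtype=float)

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry scores by fixed-point iteration: each item's score is
    updated from its total wins and how often it was compared to the others."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            total = wins[i].sum()  # total wins of item i
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total / denom
        p /= p.sum()  # scores are only defined up to a constant scale
    return p

p = bradley_terry(wins)
print(np.argsort(-p))  # item 0 is ranked first
```

Ranking items by the fitted scores turns raw pairwise choices into a consistent ordering, which is the basic primitive behind recommender-style choice models.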
“It also helps that Penn is located in Philadelphia — a historic, modern, and cosmopolitan city.” Agarwal foresees no shortage of interesting questions and ideas to investigate in her research. Despite modern advances in machine learning, there are still some missing links between the field’s theory and practice. “Even today, many machine learning methods are used without a clear understanding of why they work or when they might fail,” Agarwal says. “This gap motivates a lot of the work we do in my research group, and I hope we will see the gap narrow in the years ahead.”
Shivani Agarwal: Looking at Machine Learning from All Angles
6
shivani-agarwal-looking-at-machine-learning-from-all-angles-1b2da26e7206
2018-05-08
2018-05-08 14:46:48
https://medium.com/s/story/shivani-agarwal-looking-at-machine-learning-from-all-angles-1b2da26e7206
false
1,001
University of Pennsylvania’s School of Engineering and Applied Science
null
null
null
Penn Engineering
elerner@upenn.edu
penn-engineering
null
PennEngineers
Machine Learning
machine-learning
Machine Learning
51,320
Penn Engineering
null
af9f8605d39a
PennEngineering
2,000
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-14
2017-11-14 02:35:08
2017-11-14
2017-11-14 17:00:14
6
false
en
2017-11-15
2017-11-15 17:12:49
7
1b2f3ef4e393
4.304717
149
5
0
Eric Kim | Pinterest engineer, Visual Search
5
Building Lens your Look: Unifying text and camera search Eric Kim | Pinterest engineer, Visual Search In February we launched Lens to help Pinners find recipes, style inspiration and products using the camera in our app to search. Since then, our team has been working on new ways of integrating Lens into Pinterest to improve discovery in areas Pinners love most–particularly fashion–with visual search. What we’ve learned is that some searches are better served with text, and others with images. But for certain types of searches, it’s best to have both. That’s why we built Lens your Look, an outfit discovery system that seamlessly combines text and camera search to make Pinterest your personal stylist. Launching today, Lens your Look enables you to snap a photo of an item in your wardrobe and add it to your text search to see outfit ideas inspired by that item. It’s an application of multi-modal search, where we integrate both text search and camera search to give Pinners a more personalized search experience. We use large-scale, object-centered visual search to provide us with a finer-grained understanding of the visual contents of each Pin. Read on to learn how we built the systems powering Lens your Look! Architecture: Multi-modal search Lens your Look is built using two of Pinterest’s core systems: text search and visual search. By combining text search and visual search into a unified architecture, we can power unique search experiences like Lens your Look. The unified search architecture consists of two stages: candidate generation and visual reranking. Candidate generation In the Lens your Look experience, when we detect the user has done a text search in the fashion category, we give them the option to also take a photo of an article of clothing using Lens. Armed with both a text query and an image query, we leverage Pinterest Search to generate a high-quality set of candidate Pins. 
On the text side, we harness the latest and greatest of our Search infrastructure to generate a set of Pins matching the user’s original text search query. For instance, if the user searched for “fall outfits,” Lens your Look finds candidate results from our corpus of outfit Pins for the fall season. We also use visual cues from the Lens photo to assist with candidate generation. Our visual query understanding layer outputs useful information about the photo, such as visual objects, salient colors, semantic category, stylistic attributes and other metadata. By combining these visual signals with Pinterest’s text search infrastructure, we’re able to generate a diverse set of candidate Pins for the visual reranker. Visual reranking Next, we visually rerank the candidate Pins with respect to the query image, such as the Pinner’s article of clothing. The goal is to ensure the top returned result Pins include clothing that closely matches the query image. Lens your Look makes use of our visual object detection system, which allows us to visually rerank based on objects in the image, such as specific articles of clothing, rather than across the entire image. Reranking by visual objects gives us a more nuanced view into the visual contents of each Pin, and is a major component that allows Lens your Look to succeed. For more details on the visual reranking system, see our paper recently published at the WWW 2017 conference. Multi-task training: Teaching fashion to our visual models Now that we have object-based candidates, we assign a visual similarity score to each candidate. Although we’ve written about transfer learning methods in the past, we needed a more fine-grained representation for Lens your Look. Specifically, our visual embeddings have to model certain stylistic attributes, such as color, pattern, texture and material. This allows our visual reranking system to return results on a more fine-grained level. 
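The two-stage design described here (text-based candidate generation followed by visual reranking) can be sketched in miniature. The three "pins," their term sets, the 3-d embeddings, and the cosine-similarity scorer below are all hypothetical stand-ins for illustration, not Pinterest's actual system.

```python
import numpy as np

# Hypothetical pin corpus: each pin has text terms and a visual embedding.
pins = [
    {"id": 1, "terms": {"fall", "outfits"}, "emb": np.array([0.9, 0.1, 0.0])},
    {"id": 2, "terms": {"fall", "outfits"}, "emb": np.array([0.1, 0.9, 0.1])},
    {"id": 3, "terms": {"summer", "dresses"}, "emb": np.array([0.9, 0.2, 0.1])},
]

def candidates(query_terms):
    # Stage 1: text search narrows the corpus to pins matching the query terms.
    return [p for p in pins if query_terms <= p["terms"]]

def rerank(cands, query_emb):
    # Stage 2: order candidates by cosine similarity to the query image embedding.
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(cands, key=lambda p: cosine(p["emb"], query_emb), reverse=True)

query_emb = np.array([1.0, 0.0, 0.0])  # embedding of the Lens photo
results = rerank(candidates({"fall", "outfits"}), query_emb)
print([p["id"] for p in results])  # pin 1 outranks pin 2; pin 3 never qualifies
```

Pin 3 matches the query image visually but is filtered out at stage 1 because its text terms don't match, which is exactly the division of labor the article describes.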
For instance, red-striped shirts will only be matched with other red-striped shirts, not with blue-striped shirts or red plaid shirts. To accomplish this, we augmented our deep convolutional classification networks to simultaneously train on multiple tasks while maintaining a shared embedding layer. In addition to the typical classification or metric learning loss, we also incorporate task-specific losses, such as predicting fashion attributes and color. This teaches the network to recognize that a striped red shirt shouldn’t be treated the same as a solid navy shirt. Our preliminary results show that incorporating multiple training losses leads to an overall improvement in visual retrieval performance, and we’re excited to continue pushing this frontier. Conclusion Since launching our first visual search product in 2015, the visual search team has developed our infrastructure to support a variety of new features, from powering image search in the Samsung Galaxy S8 to today’s launch of Lens your Look. With one of the largest and most richly annotated image datasets around, we have an unending list of exciting ideas to expand and improve Pinterest visual search. If you’d like to help us build innovative visual search features, such as Lens your Look, join us! Acknowledgements: Lens your Look is a collaborative effort at Pinterest. We’d like to thank Yiming Jen, Kelei Xu, Cindy Zhang, Josh Beal, Andrew Zhai, Dmitry Kislyuk, Jeffrey Harris, Steven Ramkumar and Laksh Bhasin for the collaboration on this product, Trevor Darrell for his advisement and the rest of the visual search team.
Building Lens your Look: Unifying text and camera search
689
building-lens-your-look-unifying-text-and-camera-search-1b2f3ef4e393
2018-06-20
2018-06-20 21:00:37
https://medium.com/s/story/building-lens-your-look-unifying-text-and-camera-search-1b2f3ef4e393
false
889
null
null
null
null
null
null
null
null
null
Computer Vision
computer-vision
Computer Vision
2,375
Pinterest Engineering
Inventive engineers building the first visual discovery engine, 175 billion ideas and counting. https://careers.pinterest.com/
ef81ef829bcb
Pinterest_Engineering
16,302
27
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-26
2018-07-26 13:21:38
2018-07-26
2018-07-26 13:26:16
3
false
en
2018-07-26
2018-07-26 13:27:33
15
1b30d04366fc
6.406604
5
0
0
The intellectual currents that accompany advances in technology can be as fascinating as the technologies themselves. Right now, the belief…
5
Hello. Are you still human? The intellectual currents that accompany advances in technology can be as fascinating as the technologies themselves. Right now, the belief that humanity is on the cusp of a big breakthrough is taking hold. What could it be? For many people, artificial intelligence (AI) holds the promise of better, easier and longer lives. Perhaps even more. The unfolding AI revolution, complete with machine learning, deep learning and cognitive computing (i.e., machines capable of learning from their own mistakes, or from patterns discovered in large databases), and the smart technologies poised to permeate our daily lives, such as autonomous vehicles and the Internet of Things (with its capability to combine devices into one intelligent organism), promises to change our world profoundly. Also, and importantly, the computational power of computers has grown exponentially, supported by complex algorithms and neural networks that enable machines to understand and respond to language, converse with humans, and automate peer-to-peer transactions and other business processes. All this sounds impressive, heralding a qualitative change in our daily lives. Sophisticated technologies not only gradually transform business models and learning tools, but also penetrate our daily lives. With smartphones in hand and virtual “versions” of ourselves on the Internet, we, as humans, are integrating with the growing digital, virtual space. Can this ongoing makeover affect our biological, spiritual and intellectual makeup? Will AI continue to become ever more autonomous? And if it does, what will this mean for our collective consciousness as a species? Are we on the verge of an existential breakthrough? Some scientists believe so, claiming that the change is evolutionary and that the key driver of human evolution today is modern technology. 
A cosmic brain-to-brain connection I can still remember the question a friend asked me in my first year of high school: How do you think the ape that became the first man felt? The question amused me, but it was, at bottom, philosophical. I recalled it recently while reading the article, “Humanity is about to transition to ‘evolution by intelligent direction.’” Its author argues that “we are rapidly heading towards the next evolutionary step into what I call ‘meta-intelligence,’ a future in which we are all highly connected, brain to brain. It will become possible thanks to the sharing of thoughts, knowledge and activities, and the technological basis of this process will be ‘THE CLOUD’.” Having announced the transformation of humanity, and its transition to a different level of existence, the author lists the drivers of this process: · The wiring of the planet. · A brain-computer interface. · The emergence of AI. The wiring: Everything connected, always The wiring part of the process is largely in place. The whole of humanity is becoming connected to the global web and, within years, every inhabitant of the planet will enjoy full access. This will place new communications options and unlimited digital data, products, services and content at our fingertips. Never have we faced a change so widespread, and so democratic. Thanks to the wiring of the planet, each of us today can (or soon will be able to) access the entire global, intellectual and cultural achievements of humanity. Granted, high quality content may only be accessible at a cost, and therefore available only to some of us. In general, though, all of mankind will benefit. Within a mere two decades, the web has driven the rise of a global economy with its new business models, tools and communication platforms. 
The free flow of data sets of cosmic proportions (big data) that the web has enabled is gradually becoming a solid, unbreakable, technological foundation for the global economy, offering business or social products of all kinds to any interested customers, anywhere. The internet has helped tear down barriers to the growth of civilization. Like the web, AI is a game-changer. The brain: This is you, in the cloud The author of “Evolution by intelligent direction” also noted the emergence of a direct link between the human brain and the computer. Science fiction fans are very familiar with this idea, pioneered by William Gibson in his novel “Neuromancer”. Neuromancer by William Gibson — First edition, 1984 What for so long has existed only in the imaginations of artists and futurologists is now becoming reality. Until now, the way we interacted with computers has largely relied on our hands. Soon, however, an interface that directly transmits impulses (thoughts, commands) from the human brain to the computer could deliver significant efficiency gains, time savings, and even a sense of closeness with the content we create. Reports on research laboratories developing broadband connections between digital machines and the cerebral cortex no longer blow our minds. One example of such efforts is the Neuralink project by Elon Musk, who has invested millions into better and more efficient interactions between the human brain and the machine. Or take the American company Kernel, which does research into how the human brain works. Today, Kernel scientists are designing software to help alleviate neurological conditions and disorders such as epilepsy and Alzheimer’s disease. Tomorrow, Kernel’s goal is to implant a chip in the human brain to link people to a cloud, expanding our memory and enhancing our cognitive functions beyond imagination. We could connect with other human brains just as we communicate with other people’s computers or telephones. 
In this way, we could explore all human thought without external devices. As fantastic as this may sound, Kernel founder Bryan Johnson argues that the ability to implant chips that connect with other chips in other brains is no longer in question; an affirmative answer has already been given. The only question that remains is, when will it happen? And when it does, are we going to witness the birth of an unprecedented cosmic meta-intelligence? The vision is very exciting, but its consequences are hard to predict. I believe that providing every individual with access to all human knowledge, retrievable almost instantly, would inevitably lead to the birth of meta-intelligence. AI: Catalyst for evolution AI is essentially an opportunity to fast-forward evolution and, perhaps, merge with our machines. For the time being, the merger is about creating a better connection between the user and a computer (with its rapidly growing computational power) that engages all five human senses. IBM’s Watson already sifts through billions of data sets per minute to improve health care, performing jobs that would be impossible without neural-network-based algorithms (machine learning) and that could never be done by medical and scientific personnel relying on classical computers. Watson is proof of how the cognitive abilities of our species can be enhanced to improve our psychophysical performance. In a study on the future of AI, the McKinsey Global Institute outlines the possible evolution of medical diagnostics over the coming years. Andrew Ng, formerly chief scientist at the Chinese technology company Baidu, has said, “Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.” Andrew Ng while giving this talk. 
Let’s get down to earth Supported by continually developed deep learning models funded by companies such as Google, Facebook, IBM, Samsung and Alibaba, AI will grow exponentially. The total meta-intelligence (artificial and human combined) can be the most critical success factor for companies and nations. Therefore, along with a new arms race in AI, we will soon see a race to boost the combined human intelligence of entire nations. Critics of the idea of meta-intelligence and, more broadly, of merging humans with machines, abound. Skeptics in the medical world argue we don’t fully understand how the human brain works. We only have a faint idea of where the brain stores (if, in fact, it does) the information it later retrieves. We are befuddled by how the brain performs its calculations. So far, chips to improve memory have only been implanted in rat brains. Therefore, linking the brain to a cloud for common access to the intellectual achievements of all, well… it will take a while. Personally, however, considering the speed at which technology is driving evolution, I would not bet against the possibility of combining advanced technology with the human body. I think, being homo sapiens, we will do what we have always done: move fast, break things, fix them. Works cited Peter H. Diamandis, Humanity is about to transition to “Evolution by Intelligent Direction”, Futurism.com, link, 2017. William Gibson, Neuromancer, 1984. Chantal Da Silva, Elon Musk startup ‘to spend £100m’ linking human brains to computers, Independent, link, 2017. Jacques Bughin, Eric Hazan, Sree Ramaswamy, Tera Allas, Peter Dahlström, Nicolaus Henke, Monica Trench, Artificial Intelligence: The Next Digital Frontier, McKinsey Global Institute, link, 2017. Andrew Ng while giving this talk, Youtube, link, 2017. Shana Lynch, Andrew Ng: Why AI Is the New Electricity, Insights by Stanford Business, link, 2017. 
Related articles - The “sharing economy” was envisioned nearly 100 years ago - Who will gain and who will lose in digital revolution? - When will we cease be biological people - Artificial intelligence is a new electricity - Robots awaiting judges - Only God can count that fast — the world of quantum computing - Machine Learning. Computers coming of age - Big Data: New player in sport
Hello. Are you still human?
103
hello-are-you-still-human-1b30d04366fc
2018-07-26
2018-07-26 13:27:33
https://medium.com/s/story/hello-are-you-still-human-1b30d04366fc
false
1,552
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Norbert Biedrzycki
Technology is my passion. VP Mckinsey Digital. Private opinions only
ba5b91d4b474
n.biedrzycki
368
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-04
2017-10-04 08:18:06
2017-10-04
2017-10-04 08:19:00
0
false
en
2017-10-04
2017-10-04 08:21:53
0
1b314e899fa3
2.271698
0
0
0
There are expected to be over 165,000 attendees at CES this year, with 3,800 exhibitors across three venues. There is much to see and…
5
A CEO’s Guide to CES 2018 There are expected to be over 165,000 attendees at CES this year, with 3,800 exhibitors across three venues. There is much to see and experience over the four-day event. How do CEOs and executive leaders navigate this event? Adaptability and an understanding of disruption and the accelerated pace of change are business imperatives for CEOs, CTOs, CMOs and boards of directors today. A company’s past success and business model may not translate to future success. Constant experimentation at the edges of the organization is required to stay competitive. The business implications of sensors growing from 8 billion today to 50 billion by 2020 will impact most businesses. Everything will have sensors, from wearables to food and consumer goods. New business models based on connected products are already emerging. Many business models will be transformed from hardware to software to services. Innovation has many forms beyond product innovation: there is also process, social, organizational, management, political and business model innovation. Here are the areas of focus at CES important for executive leadership: Sensors & IoT Artificial Intelligence Virtual Reality & Augmented Reality Connected cars, robotics, drones Machine learning & deep learning algorithms Neuroscience and neuroscience marketing Data Science & analytics as a service 3D printing Hardware as the new software Bitcoin & blockchain There are three ways to navigate CES 2018: 1. The CES App is very helpful for a self-guided experience. 2. Generic themed tour options (http://www.ces.tech/Show-Floor/Show-Floor-Tours). 3. Custom Curated Experiences (https://consumersinmotion.com/#events), based on our experiences at Mobile World Congress since 2012. Custom Curated Experiences is an immersive learning and business development program for executives who want to drive business development, revenue and innovation in their organizations. 
The program facilitates a view of changing technologies and consumer behaviors critical for successful planning, business growth and innovation. Our team custom-curates cutting-edge start-ups and identifies strategic partnerships and technologies, saving time and resources and allowing you to focus on the big picture at CES. Our four-step process ensures that you have a productive outcome. Please see the story of our 2017 CES program (http://www.geomarketing.com/how-agency-marketers-can-get-the-most-out-of-ces/amp) from GeoMarketing. Luxury Daily did a feature story on Custom Curated Experiences (please see this link: https://www.luxurydaily.com/custom-curated-experiences-to-bring-guidance-to-innovation-seeking-executives/). CES Highlights: CES will feature Augmented Reality, Cyber & Personal Security, Drones, eCommerce & Enterprise Solutions, Gaming & Virtual Reality, iProducts, and Self-Driving Technology. In 2018, CES marketplaces include 3D Printing, Accessibility, Baby Tech, Beauty Tech, Education & Technology, Eureka Park, Family & Technology, Fitness & Technology, Global Technology, Health & Wellness, Kids & Technology, Robotics, Sleep Tech, Smart Home, Sports Tech, University Innovations and Wearables. Key locations to visit at CES: Tech East: Las Vegas Convention and World Trade Center (LVCC), Westgate Las Vegas (Westgate) and Renaissance Las Vegas (Renaissance). Where innovations in audio, drones, gaming, augmented and virtual reality, vehicle technology, video, wireless devices, wireless services, digital imaging/photography or anything “i” come to market. It’s also home to many international exhibitors. Tech West: Sands Expo (Sands), The Venetian, The Palazzo, Wynn Las Vegas and Encore at Wynn (Wynn/Encore). Features the innovative power behind the industry’s emerging technology, including revolutions in fitness and health, the Internet of Things, wearables, smart home, sensors and other high-growth technologies changing the world. 
It’s also home to Eureka Park, the startup community at CES. Tech South: ARIA, Cosmopolitan and Vdara. The CES epicenter for the advertising, content, marketing and entertainment communities, hosting a variety of C Space activities, including conference sessions, networking events, exhibits and hospitality suites.
A CEO’s Guide to CES 2018
0
a-ceos-guide-to-ces-2018-1b314e899fa3
2018-03-03
2018-03-03 08:17:29
https://medium.com/s/story/a-ceos-guide-to-ces-2018-1b314e899fa3
false
602
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Consumers In Motion Tours - CIM Tours
Proven innovation guide for global organizations translating your objectives into new technologies and partnerships
16e5408c9d20
consumersmotion
107
149
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-28
2017-12-28 21:43:44
2017-12-28
2017-12-28 22:11:58
1
false
id
2017-12-28
2017-12-28 22:11:58
0
1b34928270c7
1.177358
0
0
0
As humans, we have something special called the brain, which is said to still not be fully “active”. We can recognize…
1
How Do Machines Recognize What You Write? We as humans have something special called the brain, which is said to still not be fully “active”. We can recognize a piece of writing and know its context and meaning. So how can a computer recognize text? Text mining is a field that studies how computers can extract knowledge from text. Text is unstructured data, so the computer must first convert it into structured data before it can be processed further. One of the things that represents the meaning of a sentence is the words themselves. In a simple search engine, word frequency is used as a parameter for article retrieval. Before building structured data, preprocessing is needed to make it easier for the computer to recognize the data. In text mining, preprocessing consists of: Tokenization: splitting sentences into collections of words. Filtering: removing words that do not represent the sentence (e.g., “and”, “or”). Stemming and tagging: reducing words to their root forms. After performing the steps above, we can build a matrix. The simplest matrix is term frequency, i.e., the number of times each word appears in a sentence. Example of building a term-frequency matrix Once the matrix is built, we can process it with machine learning methods, depending on the goal we want to achieve. Some applications of text mining are: Search engines Sentiment analysis of a text Topic analysis of a text Various other kinds of text data processing Text mining itself still has several challenges to solve, such as: the high dimensionality of the data the complexity of conceptual relationships between words word ambiguity the fact that people still do not use standard language, especially on social media
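The preprocessing and term-frequency steps described above can be sketched in a few lines of Python. The stopword list and documents below are toy assumptions, and stemming is omitted for brevity.

```python
# Toy stopword list mixing Indonesian and English function words (illustrative).
stopwords = {"dan", "atau", "yang", "the", "and", "or"}

docs = [
    "machine learning and text mining",
    "text mining or data mining",
]

def preprocess(doc):
    # Tokenization: split the sentence into words.
    tokens = doc.lower().split()
    # Filtering: drop words that carry no meaning on their own.
    return [t for t in tokens if t not in stopwords]

# Build the vocabulary, then a term-frequency matrix (documents x terms).
processed = [preprocess(d) for d in docs]
vocab = sorted({t for toks in processed for t in toks})
tf = [[toks.count(term) for term in vocab] for toks in processed]

print(vocab)  # ['data', 'learning', 'machine', 'mining', 'text']
print(tf)     # [[0, 1, 1, 1, 1], [1, 0, 0, 2, 1]]
```

Each row of `tf` is the structured representation of one document, ready to feed into a search engine, a sentiment classifier, or any other machine learning method.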
How Do Machines Recognize What You Write?
0
bagaimana-mesin-mengenali-apa-yang-kamu-tuliskan-1b34928270c7
2017-12-28
2017-12-28 22:11:59
https://medium.com/s/story/bagaimana-mesin-mengenali-apa-yang-kamu-tuliskan-1b34928270c7
false
259
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Rangga Rizky A
Data-Driven Human , Bachelor of Science Fiction
24ad0fec8834
ranggaantok
13
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-08
2018-05-08 17:54:52
2018-05-12
2018-05-12 08:29:28
44
false
en
2018-05-12
2018-05-12 08:48:08
21
1b34949fc516
7.134906
5
0
0
Summary
4
Practical OpenCV 3 Image Processing with Python Preview Summary Learn how to implement practical end-to-end applications of computer vision using OpenCV 3 The course is divided into sections for ease of understanding of concepts This material consists of 3 sections Chapter 1: Building an Image Search Engine from Scratch Chapter 2: Finding Targets and Number Plate Recognition in Video Chapter 3: Scene Understanding and Automatic Labeling from Images Source Code https://github.com/riaz/Practical_OpenCV3_Python Contents Description of the contents of each section and the coding exercises for each sub-section, with screenshots of the final results. Chapter 1: Building an Image Search Engine from Scratch In this section, we learn about various image transformation techniques like the Hough Transformation, which are based on scoring probabilities of the existence of points of interest and converging on the output. We will learn techniques to stretch, shrink, rotate and warp an image, and in later sections we will see how such transformations do not affect Object Recognition when using homography. We next learn how Image Histograms are built and how we can use techniques like Histogram Equalization to de-noise an image effectively, and we will further delve into the properties of a histogram and how it can be used to build an image search engine with a reasonable degree of accuracy. 1.1 Learning about Hough Transformations Learn what the Hough transform is, and techniques to detect lines and circles in an image using the Hough transform. Bonus: Implementing a randomized method to detect circles in an image, which runs faster than Hough-based methods. Hough line detection to detect the horizon (blue line). 1.2 Stretch, Shrink, Warp, and Rotate Using OpenCV 3 Learn about simple Image Transformation techniques like Stretch, Shrink, Warp and Rotate. 
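The voting idea behind Section 1.1's Hough line detection can be shown from scratch without OpenCV: every edge pixel votes for all (rho, theta) lines passing through it, and peaks in the accumulator are the detected lines. The synthetic edge image below is an illustrative assumption, not the course's code.

```python
import numpy as np

def hough_lines(edges):
    """Minimal Hough accumulator: each edge pixel votes for every (rho, theta)
    line through it; accumulator peaks correspond to detected lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
    acc = np.zeros((2 * diag, thetas.size), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(thetas.size)] += 1
    return acc, thetas, diag

# Synthetic edge map containing a single horizontal line at y = 5.
img = np.zeros((20, 20), dtype=np.uint8)
img[5, :] = 1

acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(rho_idx - diag, np.degrees(thetas[theta_idx]))  # peak near rho=5, theta=90 deg
```

In practice one would call `cv2.HoughLines` on an edge map from `cv2.Canny`, but the accumulator above is the mechanism both share.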
Image Stretch and Shrink Transformations Affine Transform Perspective Transform Image Rotate 1.3 Image Derivatives Introduction to image derivatives and examples of how we can calculate them using Sobel and Laplacian kernels. Sobel Filter — x and y components Sobel Filter — Magnitude and Angle Laplacian Derivative 1.4 Histogram Equalization Histogram Equalization as a technique to dynamically modify the exposure in images and improve contrast. 2D Histogram Equalization Project 1: Reverse Image Search Building a Reverse Image Search Engine to find related images based on Image Histograms Project 1: Reverse Image Search Chapter 2: Finding Targets and Number Plate Recognition in Video Stream In this section, we learn about Image Segmentation methods and methods to extract regions of interest (ROIs), or contours, on which we can apply any type of image processing pipeline. We also learn a technique called template matching, which can be used to detect a pattern in an image in a linear way. We also learn about Background Subtraction, which can be useful to segment the foreground away from the background and manipulate them individually. We will also learn about how Computer Vision is used in the field of Medical Imaging, and we conclude this section by learning how to train an application to detect predefined targets and to recognize number plates, where we also dive into the details of how the SVM implementation works to detect number plates. Number Plate Segmentation 2.1 Extracting Contours from Images Learn about extracting contours from images, approximating a polygon on the contours, and computing Hu Moments. 
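The histogram equalization of Section 1.4 can be implemented directly in NumPy: remap each gray level through the normalized cumulative histogram so intensities spread over the full 0-255 range. The synthetic low-contrast image is an illustrative assumption; OpenCV's `cv2.equalizeHist` performs the same mapping.

```python
import numpy as np

def equalize(img):
    """Histogram equalization: map gray levels through the normalized
    cumulative distribution to stretch intensities over 0..255."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]          # first occupied bin
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]                             # apply the lookup table

# A synthetic low-contrast image with values squeezed into [100, 120].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(32, 32), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # contrast stretched to 0..255
```

Because rare extreme bins get mapped to the ends of the range, the output histogram is approximately flat, which is what "dynamically modifying exposure" means here.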
Detecting Binary Contours in an Image Polynomial Approximation of Binary Contours Computing Hu Moments for the Contours 2.2 Template Matching for Object Detection Template Matching for finding the location of a template image in another image that may contain the template. Template Matching to detect “Kalpana Chawla” — https://en.wikipedia.org/wiki/Kalpana_Chawla 2.3 Background Subtraction from Images It is the name for a set of techniques that can be used to separate a static background from the non-static foreground. Creating a mask by subtracting moving humans from the background Using connected components and computing Hu moments to detect the number of coins in an image. Detecting the number of coins in an image using background subtraction 2.4 Delaunay Triangulation and Voronoi Tessellation Delaunay Triangulation is a technique for connecting points in space into triangular groups such that the minimum angle among all the edges forming the triangles is maximized. Delaunay Triangulation of a set of points Voronoi Tessellation creates partitions between points that are the nearest neighbors to each of the Delaunay vertex points. Also, Voronoi Tessellation is the dual of Delaunay Triangulation. Voronoi Tessellation of the same set of points 2.5 Mean-Shift Segmentation It is an algorithm to segment an image by finding peaks of the color distribution over a space (superpixel). Mean-Shift Tracking is a method to track a shifting change in the mean of a region between frames (video). Tracking a bicyclist flying his drone 2.6 Medical Imaging and Segmentation Using OpenCV to segment MRI scans of a human head. Human Head Segmentation Project 2: Automatic Number Plate Recognition in Video Detecting the number plates of cars in a driving scene. 
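The simplest form of the background subtraction described in Section 2.3 is frame differencing against a static background model: pixels that differ by more than a threshold are foreground. The tiny synthetic "frame" below is an illustrative assumption; OpenCV's MOG2-style subtractors refine the same idea with an adaptive background model.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Background subtraction: mark pixels that differ from a static
    background model by more than a threshold as foreground (1)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

# Static background plus one frame containing a small moving object.
background = np.full((10, 10), 50, dtype=np.uint8)
frame = background.copy()
frame[4:6, 4:6] = 200          # the "moving object"

mask = foreground_mask(frame, background)
print(int(mask.sum()))          # 4 foreground pixels
```

Running connected components over such a mask (as the coin-counting exercise does) then turns foreground pixels into countable objects.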
Automatic Number Plate Recognition in Video Chapter 3 : Scene Understanding and Automatic Labeling from Images In this section, we learn about what features mean in terms of OpenCV and what the elements of good features in an image are, which may include edges, corners, etc. We then explore the most common corner detection algorithm, the Harris Corner Detector. We also learn about SIFT, SURF, and related algorithms, which are scale- and rotation-invariant feature detectors with applications in object tracking. We then learn about optical flow, which is the pattern of apparent motion of image objects between two consecutive frames caused by the movement of the object or camera. We will also dive into the application of Deep Learning for feature extraction with greater accuracy. This section has a very challenging project where we write deep-learning algorithms to understand scenes, label objects, and classify them accordingly. We could further extend this concept by paraphrasing the objects and their actions and coming up with a beautiful prose that summarises these elements in an image, the converse of text-to-imagery storytelling, which is very popular nowadays, especially in the VR world. 3.1 Harris Corner Detection Using Harris Corner Detection to detect corners in an image Detecting corners in the right image, but we can do better Using the Shi-Tomasi Corner Detector as an improvement on the above. A better corner detection, but some corners are still missed (no blue dots) in the right image. Refining Harris Corner Detection by using the cv2.cornerSubPix() function. An even better result than the Shi-Tomasi Corner Detector 3.2 SIFT, SURF, FAST, BRIEF, and ORB Algorithms Using the various feature descriptor algorithms that come with OpenCV 3.
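The Harris detector from section 3.1 boils down to a per-pixel response R = det(M) - k * trace(M)^2, where M is the structure tensor summed over a small window. A NumPy sketch, with a deliberately naive window sum (cv2.cornerHarris is the real, fast implementation):

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 at every pixel,
    where M is the structure tensor summed over a win x win window."""
    iy, ix = np.gradient(img.astype(float))     # image derivatives
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy   # structure tensor entries
    pad = win // 2

    def window_sum(a):
        """Sum each entry over its local window (naive, for clarity)."""
        out = np.zeros_like(a)
        h, w = a.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = a[max(0, y - pad):y + pad + 1,
                              max(0, x - pad):x + pad + 1].sum()
        return out

    sxx, syy, sxy = window_sum(ixx), window_sum(iyy), window_sum(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A white square on black: the corners should score highest
img = np.zeros((12, 12)); img[3:9, 3:9] = 1.0
R = harris_response(img)
print(R[3, 3] > R[3, 6])  # corner response beats edge response → True
```

R is strongly positive at corners, negative along edges, and zero in flat regions, which is exactly why thresholding R picks out corners.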
Extracting SIFT Features Extracting SURF Features Extracting FAST Features Extracting BRIEF Features Extracting ORB Features 3.3 Feature Matching and Homography to Recognize Objects Discussing techniques to match features between images, which can be used to obtain translation and rotation matrices, or a homography matrix in the case of homography. Using the Brute-Force Matcher Using the K-Nearest Brute-Force Matcher Using the FLANN-based matcher, which is a significant improvement over the Brute-Force Matcher Computing Homography 3.4 Mean-Shift, Cam-Shift, and Optical Flow Both Mean-Shift and Cam-Shift techniques are used for tracking objects in a video, but Cam-Shift is more robust, as it can handle the changing size of the target as it moves. Mean-Shift Tracking Cam-Shift Tracking Video Stabilization by correcting Optical Flow 3.5 Feature Extraction Using Convolutional Neural Nets (CNNs) Introduction to Deep Learning and training simple neural nets like LeNet-5 for identifying handwritten characters. LeNet-5 Architecture LeNet-5 Tensorflow Implementation 3.6 Visual Object Recognition and Classification Using CNNs Deep Learning methods to train a network to learn to recognize objects in images and also form meaningful sentences that help describe the scene given the words. Client testing — Scene Understanding Scene Action Prediction Source Code : https://github.com/riaz/Practical_OpenCV3_Python
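The brute-force matching from section 3.3 is simple enough to sketch by hand: compare binary (ORB-style) descriptors by Hamming distance and keep only matches that pass Lowe's ratio test. A toy version with made-up descriptors (cv2.BFMatcher with NORM_HAMMING does this for real):

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two binary descriptors (0/1 arrays)."""
    return int(np.count_nonzero(d1 != d2))

def brute_force_match(desc_a, desc_b, ratio=0.75):
    """For every query descriptor in desc_a, find its nearest neighbour in
    desc_b and keep the match only if it passes Lowe's ratio test."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:      # unambiguous nearest neighbour
            matches.append((i, best[1]))
    return matches

# Five toy 32-bit descriptors; the queries are copies of #2 and #4,
# with one bit of the first query flipped to simulate a viewpoint change
desc_b = np.eye(5, 32, dtype=np.uint8)
desc_a = desc_b[[2, 4]].copy()
desc_a[0, 10] = 1

print(brute_force_match(desc_a, desc_b))  # → [(0, 2), (1, 4)]
```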
Practical OpenCV 3 Image Processing with Python
15
practical-opencv-3-image-processing-with-python-1b34949fc516
2018-05-22
2018-05-22 04:08:02
https://medium.com/s/story/practical-opencv-3-image-processing-with-python-1b34949fc516
false
1,109
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Riaz Munshi
null
9b332bcb0314
riazmunshi
23
227
20,181,104
null
null
null
null
null
null
0
null
0
33272dbbd858
2018-01-04
2018-01-04 02:59:16
2018-01-04
2018-01-04 07:03:03
10
false
en
2018-03-20
2018-03-20 10:02:30
1
1b352391172a
6.861321
43
2
0
How can you tell whether your shop assistant is a person or a robot?
5
Behind the Chat: How E-commerce Robot Assistant AliMe Works How can you tell whether your shop assistant is a person or a robot? The most significant AI innovations of recent years, smart chatbots and personal assistants, are only a glimpse of what the future holds. Technology companies such as Google, Facebook, Microsoft, Amazon and Apple are at the forefront of personalized interactive products, where intelligent human-computer interaction (IHCI) technology will continue to play a central role in automated messaging, task assistance and the Internet of Things. As the market matures, chatbots are becoming more and more specialized according to their intended purposes, such as customer service, entertainment, personal assistance, or education. Launched in July 2015, AliMe is an IHCI-based shopping guide and assistant for e-commerce that overhauls traditional services and improves the online user experience. During 2017’s Double 11 shopping festival, AliMe successfully responded to 9.04 million queries and accounted for 95% of the customer services rendered by Alibaba’s e-commerce platforms. Intelligent human-computer interaction (IHCI) systems are commonly referred to as chatbots or bot systems. Natural language understanding (NLU) is the very foundation of IHCI: a dialogue system processes users’ questions and generates answers in natural language. This in itself is quite a feat, as computers are built on logic-heavy cognitive bases that are not suited to processing dynamic human languages. The first step in creating AliMe required setting up abstract frameworks for different fields, strata, and scenarios. Standard IHCI flow AliMe’s Stratified Framework The majority of intelligent matching processes in use today fall into three main categories: rule-based matching, retrieval, and DL. The technology behind AliMe is based on a combination of all three. The dialogue system is thus divided into the following strata: 1.
Intention identification stratum This stratum identifies the underlying intention for each message, classifying them and then extracting their attributes. Since intentions determine the subsequent domain identification flow, the intention stratum is a necessary first step in initiating contextual and domain data model processes. The technical framework for AliMe’s intention and matching stratification 2. Answering stratum: Questions are matched and identified to generate answers; AliMe’s dialogue system employs three answering strategies according to different intentions: a. FAQs such as “What should I do if I’ve forgotten my password?” trigger a query on the knowledge graph or retrieval model. The knowledge graph is constructed by mining entities and phrases, the relations of which are predefined, from the vast pool of data available. Though knowledge graph-based methods accurately identify answers, they also accrue higher maintenance costs and looser initial data structures. AliMe’s Q&A design overcomes this by integrating traditional retrieval models. Mining data for creating knowledge graphs b. Tasks such as “I’d like to book a one-way flight from New York to Paris for tomorrow” can be solved by the intention commitment + slot filling matching model or the deep reinforcement learning (DRL) model. c. Chitchatting, such as “I’m in a bad mood”, pulls up a method that marries the retrieval model with deep learning (DL). The chitchat domain mainly involves two kinds of models: the retrieval-based model and the deep generative model. The former makes selections from a fixed corpus of answers relevant to a given query, while the latter is more advanced, generating answers without relying on any corpus. The integrated merits of the two models form the core of AliMe’s chat engine.
First, the candidate data sets are brought up using the traditional retrieval model; then, the candidate sets are re-ranked through the Seq2Seq model; the top answer candidate is chosen when the ranking score is higher than the preset threshold, failing which the Seq2Seq model is activated to generate an answer. AliMe’s chatting module flow The Deep Learning Practices of AliMe’s Intention Identification AliMe’s identification and extraction of intentions is reliant on the classification results. AliMe incorporates both traditional textual features and user behaviors to analyze incomplete user intentions. The user behavior-based DL model’s classification of intentions During the process of creating DL-based prediction systems, the team came up with two specific modeling options. The multi-classification model, though faster, required retraining with every new label added to the class family, whereas the binary classification model, a clear underperformer which needed constant dichotomization, allowed for unfettered field expansions on the original platform. It was apparent that both models, with their specific drawbacks and strengths, serve very distinct sets of scenarios. AliMe’s DL-based intention classification embeds behavioral factors and textual features, and concatenates different vectors before multi-classification or binary classification processing. Textual features can be represented as a bag of words or word embeddings. Classification of intentions by DL accounting for user behavior How AliMe Works as an Intelligent Shopping Guide Intelligent shopping guide systems interact with users to analyze their intentions with the goal of providing a better shopping experience. The interactions serve two main purposes: helping machines understand user intentions, and optimizing recommendation rankings and the interactive process itself.
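The retrieve-rerank-generate flow described above can be sketched as plain control logic. The word-overlap retriever, the scoring function, and the fallback generator below are made-up stand-ins, not Alibaba's models:

```python
# A minimal sketch of AliMe-style answer selection: retrieve candidates,
# rerank them, and fall back to generation below a confidence threshold.

def answer(query, corpus, rerank_score, generate, threshold=0.5):
    # 1. Retrieval: pull candidate answers from a fixed Q&A corpus.
    candidates = [a for q, a in corpus if set(q.split()) & set(query.split())]
    # 2. Rerank candidates with a (Seq2Seq-style) scoring model.
    ranked = sorted(candidates, key=lambda a: rerank_score(query, a), reverse=True)
    # 3. Return the top candidate if it clears the threshold; otherwise
    #    fall back to the generative model.
    if ranked and rerank_score(query, ranked[0]) >= threshold:
        return ranked[0]
    return generate(query)

corpus = [("forgot my password", "Reset it from the login page."),
          ("track my order", "Check the orders tab.")]
score = lambda q, a: 0.9 if "password" in q and "Reset" in a else 0.1  # toy scorer
fallback = lambda q: "Could you tell me more?"                         # toy generator

print(answer("I forgot my password", corpus, score, fallback))  # → Reset it from the login page.
print(answer("I'm in a bad mood", corpus, score, fallback))     # → Could you tell me more?
```

The second query finds no retrieval candidates at all, so the generative fallback handles it, mirroring how chitchat falls through to the Seq2Seq generator in the article's flow.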
Standard technical framework for the AliMe intelligent shopping guide Intelligent shopping guide systems are created to deduce what users want, and the attributes of those goods. This brings with it a new set of issues: Challenge 1: Users tend to express themselves in short sentences; therefore, identifying intentions accurately requires multiple rounds. Challenge 2: Users often interact inconsistently, detailing or modifying parts of their intentions. Challenge 3: Shoppers’ intentions may not always be semantically correct or accurate. Challenge 4: Relations between intentions are very complex. AliMe can accommodate phrasal expressions, intention boundary switches and logical modifications owing to the intention stack and product knowledge graph. Due to the vast variety of goods, knowledge graphs are combined with semantic indexes to make identification extremely effective. Under intelligent shopping guide scenarios, category management consists of category identification and the calculation of category relations. Category relations framework Category Identification AliMe’s identification plans are built on knowledge graphs, semantic indexes and DSSM (deep semantic similarity model). The semantic indexes are built on textual information as well as search and click data. Similarities between word segmentations and candidate categories are calculated using word embedding. AliMe’s goods identification plan based on semantic indexing and DSSM Calculation of Category Relations The calculation of category relations addresses intentions arising from the intelligent shopping guide. Two important examples of these relations are hyponymy relations and similarity relations. For example, when a user first intends to buy some clothes but later changes their mind to buy a cup, the attributes associated with clothes should not be passed down to the cup.
On the other hand, if the user changes his mind and buys a shirt, a hyponym of clothes, the attributes associated with clothes should be passed down to the shirt. Hyponymy relations can be calculated through the following two options: a) Knowledge graph-based relation calculation b) Extraction from users’ queries Similarity relations can be calculated through the following two options: a) Use of the same hypernym: For example, both Xiaomi and Huawei share the phrase ‘mobile phone’ as a hypernym b) Semantic similarity based on embedding computation The Road Ahead for IHCI Technologies Though the technological progress observed in the 21st century is significant, the current phase of AI and its application are definitely nascent. Fields ranging from perception to cognition require vast levels of improvement in order for IHCI to continue enabling industry. Efforts in gathering data and refining knowledge graphs will contribute to IHCI’s development. Task-oriented bots across industrial verticals are poised to provide explosive economic growth; interactive bots targeted at open domains, however, require higher scrutiny and experimentation in the long-term future. Following its successful adoption in computer vision and voice recognition, DL will continue to be applied in the domain of natural language processing (NLP). Fortunately, the urgency of development in AI has been met with equal enthusiasm from various stakeholders, from private enterprises to governments, and from academic circles to industrial communities. Given this, we can expect IHCI to fulfill our expectations and visions for the near and long-term future, where even the wildest of science fiction movies and books pale in comparison to the actualized level of technology. (Original article by Zhou Wei, Chen Haiqing) Alibaba Tech First hand, detailed, and in-depth information about Alibaba’s latest technology. Follow us: www.facebook.com/AlibabaTechnology References: [1]: Huang P S, He X, Gao J, et al. 
Learning deep structured semantic models for web search using click through data[C]// ACM International Conference on Conference on Information & Knowledge Management. ACM, 2013: 2333–2338. [2] Minghui Qiu and Feng-Lin Li. MeChat: A Sequence to Sequence and Rerank based Chatbot Engine. ACL 2017 [3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015 [4] Matthew Henderson. 2015. Machine learning for dialog state tracking: A review. In Proceedings of The First International Workshop on Machine Learning in Spoken Language Processing. [5] Mnih V, Badia A P, Mirza M, et al. Asynchronous Methods for Deep Reinforcement Learning[J]. 2016 [6] Li J, Monroe W, Ritter A, et al. Deep Reinforcement Learning for Dialogue Generation[J]. 2016. [7] Sordoni A, Bengio Y, Nie J Y. Learning concept embeddings for query expansion by quantum entropy minimization[C]// Twenty-Eighth AAAI Conference on Artificial Intelligence. AAAI Press, 2014: 1586–1592.
Behind the Chat: How E-commerce Robot Assistant AliMe Works
213
behind-the-chat-how-e-commerce-bot-alime-works-1b352391172a
2018-05-21
2018-05-21 14:47:32
https://medium.com/s/story/behind-the-chat-how-e-commerce-bot-alime-works-1b352391172a
false
1,487
Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts.
null
null
null
ML Review
medium@mlreview.com
mlreview
MACHINE LEARNING,DATA SCIENCE,COMPUTER VISION,ARTIFICIAL INTELLIGENCE,DEEP LEARNING
ML_review
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Alibaba Tech
First-hand & in-depth information about Alibaba's tech innovation in Artificial Intelligence, Big Data & Computer Engineering. Follow us on Facebook!
69f6dde768a1
alitech_2017
1,157
14
20,181,104
null
null
null
null
null
null
0
null
0
d800127b34b8
2018-05-17
2018-05-17 14:19:54
2018-05-15
2018-05-15 12:00:06
1
false
en
2018-05-17
2018-05-17 20:21:05
7
1b361ae7ce90
3.298113
1
0
0
By Nav Dhunay
3
Don’t Worry — AI Isn’t Better Than Humans At Everything By Nav Dhunay The Economist recently published an irreverent piece about one of the latest advancements in AI technology: assembling IKEA furniture. “Now that machines have mastered one of the most baffling ways of spending a Saturday afternoon,” the magazine joked, “can it be long before AIs rise up and enslave human beings in the silicon mines?” Kidding aside, the article goes on to note that not only did these Singapore-based researchers train a robot to take care of a task that most of us would more than happily hand off, but that when it came down to brass tacks, the robot didn’t actually perform the task all that well. “It took a pair of [IKEAbots], pre-programmed by humans, more than 20 minutes to assemble a chair that a person could knock together in a fraction of the time.” Personally, I’d happily let the robot work at its own pace if it meant I didn’t have to bother with a bunch of Allen keys and Swedish instructions. But the larger point is clear: for all the fear surrounding AI and its potential to dramatically disrupt the workforce and make human beings redundant, we should remember that our machines aren’t necessarily good at the same things we are. For all of their intricate circuitry and polymer sheen, the IKEAbots simply lack the manual dexterity that, after a hundred-thousand-odd years of human evolution, most of us more or less take for granted. A computer might be able to humiliate you at chess, but this has less to do with your lack of skill than it does the fact that chess requires exactly the sort of pattern recognition, probability analysis, and algorithmic processing that computers are built to excel at. Do you remember back in 2011 when Watson, a computer developed by IBM’s DeepQA project lab, won on Jeopardy? I, for one, was not at all surprised. After all, what hope did those measly humans have going up against such a machine? Not even a fair contest!
However, if you dig a little deeper, you come to realize what a remarkable accomplishment Watson’s victory truly was, because computers don’t “think” in the same way we do. Computer cognition is rooted in a database of elements, and forging links between those elements. A computer can only answer a question if the right information has been programmed into its database. In that regard, Watson is not so different from its human counterparts. This becomes even trickier when you start to consider the complexities of the language in which Jeopardy’s questions are asked. Like many languages, English is rife with subtlety and slang. Often words make sense because of the context in which they are placed. If I asked you if such and such a person was born in the fifties, you would likely infer that I meant the 1950s, as in the decade. But why should Watson be able to recognize that kind of shorthand? In fact, Watson didn’t — at least, not until his programmers programmed him to. Part of the fear surrounding AI stems, I would say, from our tendency to confuse human and machine capacity. We live in an age rife with incredible gadgets designed to maximize our comfort and convenience, from a smartphone to a coffee maker. This has bred a misconception that machines can do anything we can, if not better. If there isn’t a machine that can do that thing now, well, it’s only a matter of time. Add to this the way in which we tend to think of ourselves in machine-based metaphors. We talk about our brains like they’re flesh and blood computers. We talk about our “circuitry,” and “storing data.” As with our physical bodies, however, human and machine cognition are very different animals. We may not be able to process every possible upcoming move in a chess match (something that is still pretty tricky for computers to pull off), but no currently existing computer can associate memory and anticipation with a sensory repertoire while navigating three-dimensional space. 
We shouldn’t take it as a given that computers will ever be able to think or move as dynamically as we do. As we forge full steam ahead into the AI era, we need to keep in mind that, in all likelihood, the greatest results will stem from collaboration between people and computers. The best partnerships will take advantage of, and strike a balance between, our distinct abilities: AI efficiency and programming paired with human problem solving and agility. A camera’s AI might be able to take care of a lot of grunt work on our behalf — focus on a subject, judge the lighting conditions, and generate the optimum settings all in under a second — but it can’t make that subject smile. Originally published at aibusiness.com on May 15, 2018.
Don’t Worry — AI Isn’t Better Than Humans At Everything
45
dont-worry-ai-isn-t-better-than-humans-at-everything-1b361ae7ce90
2018-05-17
2018-05-17 20:21:06
https://medium.com/s/story/dont-worry-ai-isn-t-better-than-humans-at-everything-1b361ae7ce90
false
821
Build better AI. Faster. Together.
null
imagineaai
null
Imaginea Ai
marketing@imaginea.ai
imaginea-ai
ARTIFICIAL INTELLIGENCE,AI,DATA SCIENCE,DATA ANALYSIS,MACHINE LEARNING
imagineaai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Nav Dhunay
Serial #Entrepreneur #Innovator #Mentor #CompanyFounder #Inventor #Startups #Entrepreneurship | Member of @The_A100, Founder of @Ambyint, Founder of @ImagineaAi
46bf25e46dbd
ndhunay
1,872
1,449
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-07
2017-12-07 02:55:41
2017-12-07
2017-12-07 17:46:05
0
false
en
2018-01-14
2018-01-14 10:42:51
0
1b36252b5f36
1.554717
1
0
0
Hey! Are we all set to learn something about Ensemble methods?! Hahaha! We sure are! Well, I am assuming you guys have some background or…
5
Introduction to Scikit-learn’s Ensemble Methods Hey! Are we all set to learn something about Ensemble methods?! Hahaha! We sure are! Well, I am assuming you guys have some background or prior knowledge of basic Data Mining classification, clustering and regression algorithms, like Naive Bayes, Decision Trees, KNN, K-Means etc. So, without further ado, let’s get to it. Ensemble Methods are about combining predictions from several “base estimators”. Now, this base predictor could be a K Nearest Neighbor classifier, a Decision Tree or I guess practically anything under the sun! (Kidding! Won’t that be great though?!) There are 2 broad types of Ensemble Methods- Averaging Methods : Yep, you guessed it! These build the estimators independently and then average their predictions! Eg : Random Forests Boosting Methods : Here, we sequentially build estimators and try to reduce the bias of the combined estimator. What we are trying to achieve is to combine several weak models to produce a powerful ensemble. Eg : AdaBoost, Gradient Tree Boosting RANDOM FORESTS So how do Random Forests work?! As the name suggests, these are forests of randomly constructed Decision Trees. Why are these called random, you may ask? Because when splitting a node during construction of the tree, the split that’s chosen is no longer the best among ALL the features. Instead, the split that is chosen is the best split among a RANDOM subset of features. Stuff to keep in mind about RFs :- The bias of the forest increases slightly But due to averaging, its variance decreases Overall, due to the decrease in variance, we get a better model. In order to get a Random Forest up and running, make sure you have scikit-learn up and running along with numpy. You can test out the following code to see how an RF is built in Python in just 6 lines of code!!
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# X and y can be any labeled dataset; iris is loaded here as a stand-in
X, y = load_iris(return_X_y=True)
classifier = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(classifier, X, y)
print(scores.mean())

Here, n_estimators is the number of Decision Trees that will be used (#trees in the forest). The more the merrier :P LOL Kidding! More trees mean more training time, but lower variance. Though obviously results will stop getting better beyond a critical number of trees. Have my end semester examinations from Monday 11th December’17, so gotta go (though this has been fun! :)). Will write a post soon on AdaBoost and GradientTreeBoosting! Please like and share if you enjoyed this! :D
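As a footnote, the “averaging” idea behind Random Forests can be sketched in a few lines of NumPy: independent estimators vote, and the ensemble takes the majority. This is a toy illustration with hard-coded predictions, not scikit-learn's implementation:

```python
import numpy as np

# Toy binary predictions for 6 samples from 3 independent "base estimators".
# In a real averaging ensemble these would come from decision trees trained
# on different bootstrap samples of the data.
predictions = np.array([
    [0, 1, 1, 0, 1, 0],   # estimator 1
    [0, 1, 0, 0, 1, 1],   # estimator 2
    [1, 1, 1, 0, 0, 0],   # estimator 3
])

def majority_vote(preds):
    """Averaging method: each estimator votes; ties break toward class 0."""
    return (preds.mean(axis=0) > 0.5).astype(int)

print(majority_vote(predictions))  # → [0 1 1 0 1 0]
```

Notice how each individual estimator disagrees with the ensemble on some samples; the vote smooths out their individual mistakes, which is the variance reduction described above.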
Introduction to Scikit-learn’s Ensemble Methods
5
introduction-to-scikit-learns-ensemble-methods-1b36252b5f36
2018-06-06
2018-06-06 09:12:43
https://medium.com/s/story/introduction-to-scikit-learns-ensemble-methods-1b36252b5f36
false
412
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Zoya Khan
A girl is No One!
64e4e0c4461c
zoya1996k
1
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-30
2017-12-30 10:05:57
2017-12-30
2017-12-30 21:57:29
0
false
en
2017-12-30
2017-12-30 21:57:29
19
1b3abb33e8fd
0.784906
1
0
0
Curated list of technology articles and more
5
Fleeting Reads 1 Curated list of technology articles and more, 31 Dec 2017. This will be a weekly (hopefully!) roundup of the most thought-provoking articles and videos that caught my attention. Data science, computer programming and human behavior will be some of the recurring themes. Hope you get something out of these. Read [Long] [Tech] Netflix: What Happens When You Press Play? (highscalability.com) [Tech] Google Maps’s Moat (justinobeirne.com) Meet the man behind the most important tool in data science (qz.com) Whale Watching: Many companies earn a huge portion of sales from a few customers (secondmeasure.com) [Long] [Tech] An Interview with an Anonymous Data Scientist (logicmag.io) [Tech] The mythical 10x programmer (antirez.com) Brilliant Jerks in Engineering (brendangregg.com) 10 Lessons of an MIT Education (tamu.edu) [Long] Taming the Mammoth: Why You Should Stop Caring What Other People Think (waitbutwhy.com) [Long] [Fiction] Cat Person (newyorker.com) Watch [13 min] Nietzsche and Morality: The Higher Man and The Herd [4 min] Steve Jobs on how to hire, manage, and lead people [26 min] Ever wonder how Bitcoin (and other cryptocurrencies) actually work? Learn Machine Learning 101 slidedeck: 2 years of headbanging, so you don’t have to (google.com) Learn just enough Linux to get things done (alexpetralia.com) Introduction to R Programming (github.io) See you next time!
Fleeting Reads 1
1
fleeting-reads-1-1b3abb33e8fd
2018-01-02
2018-01-02 19:17:26
https://medium.com/s/story/fleeting-reads-1-1b3abb33e8fd
false
208
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Chirag Khatri
In it for the lulz
cba980709199
zvovov
2
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-05
2018-04-05 11:03:17
2018-04-06
2018-04-06 01:31:01
3
false
en
2018-04-06
2018-04-06 01:31:01
4
1b3abec929d3
3.00283
3
0
0
A take on systematic cognitive dissonance through doctored social media
5
Socially Wasted with Machine Learning A take on systematic cognitive dissonance through doctored social media Photo by Tim Bennett Social Media has been instrumental in shaping our culture in recent years. So much so that people have started using it as the source to catch up on news and current affairs. It’s not an exaggeration to say, “You can’t live without Social Media in this day and age if you want to stay relevant.” With Great Power Comes Great … With technology getting stronger in the fields of Data Analytics, Machine Learning and Artificial Intelligence, we are at the cusp of something great. We can sense it coming, but we don’t know what to expect. With all the data we are generating, it is impossible for a human being to make sense of it all: • YouTubers upload 300 hours of video every minute • Facebook users contribute around 31.25 million messages a minute • Twitter users tweet around 8K tweets a second • Instagram users submit 50K images a minute • Blog writers submit around 3 Billion articles a day Tech companies behind popular Social Media platforms have taken to AI and ML to help moderate and curb inappropriate and disturbing content. SageMaker by Amazon and Cloud AutoML by Google are some of the examples that help scan video, images and other content. However, they are far from perfect today. Photo Courtesy: Markus Spiske What happened with: • Cambridge Analytica siphoning sensitive user information from Facebook • Systematic fake news penetration into all social media platforms that arguably influenced voting decisions in the 2016 US elections • The hate speech spike on Facebook that added fuel to the Rohingya crisis in Myanmar during 2017 • Fake images uploaded by self-proclaimed fitness gurus on Instagram • ISIS actors using social media to brainwash young people for militancy • The Blue Whale phenomenon • The Tide Pod eating challenge; the list goes on… There is a rise in both negative and positive discussions on social media platforms.
How did we get here in the first place? Have you noticed how, after you look for a video or an image, like or share something, or comment on any social media platform, you start seeing a lot of people talking about and sharing similar content in your news feed? This is not by chance; it’s by design. Most of us are aware that companies use machine learning to understand what you like, parse through your preferences and search history, maybe access your cookies, etc., and feed you with the information most relevant to you. Photo by Karim Ghantous While machine learning (unsupervised learning, to be precise) helps you find things important to you, it also makes you believe the world around you is talking about the same thing and shares the same views you do. You start living in this bubble, and the rabbit hole only goes deeper after that. This leads to a placebo effect that you need to understand and appreciate. The fact is, the whole world doesn’t share your views, even if you are Gandhi, Mother Teresa or Martin Luther King Jr. The effect of ML-based recommendation is quite widespread. As long as it is used for selling you a product or service or finding you something useful, we can see a lot of merit in it. However, if used for pushing someone’s agenda or conspiracy theories on someone who doesn’t understand they are being systematically brainwashed, it can rip through the fabric of society. No, we don’t have to desert Social Media platforms to save ourselves from this. What we have to do is be conscious. We shouldn’t get carried away by what we see in our news feeds, whether it is news, politics, religion, entertainment, or sports. If you are a parent, please help your kid(s) understand how things work on social media and help them be more cautious. “Freedom is the will to be responsible for ourselves.” ― Friedrich Nietzsche
Socially Wasted with Machine Learning
53
socially-wasted-with-machine-learning-1b3abec929d3
2018-04-08
2018-04-08 07:40:40
https://medium.com/s/story/socially-wasted-with-machine-learning-1b3abec929d3
false
650
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Chetan Pal
Believer in thought experiments that reshapes culture, Automation Evangelist, Photographer, Listener.
5f988d7faf49
ChetanPal
14
7
20,181,104
null
null
null
null
null
null
0
null
0
99edadf89480
2018-04-24
2018-04-24 04:54:55
2018-04-30
2018-04-30 03:10:13
2
false
en
2018-04-30
2018-04-30 03:11:25
8
1b3b014c1012
1.941824
8
0
1
Weekly Reading List #2
4
TPU, Listing Embeddings, Pyception Weekly Reading List #2 Source Issue #2: 2018/04/23 to 2018/04/29 This is an experimental series in which I briefly introduce the interesting data science stuff I read, watched, or listened to during the week. Please give this post some claps if you’d like this series to be continued. Hands-on with the Google TPUv2 It’s been a while since Google made the TPU available on their cloud platform in beta in February. I was curious if there were already people sharing their experience using the TPU on the Internet. So I did some Googling… Hands-on with the Google TPUv2 Google’s Tensor Processing Unit (TPU) has been making a splash in the ML/AI community for a lot of good reasons…blog.paperspace.com The article above then led me to… Benchmarking Google’s new TPUv2 UPDATE: Thanks for all your ideas on improving the benchmark! We are currently collecting all feedback and already…blog.riseml.com Taken from the RiseML blog So it seems the released TPU (TPUv2) is a bit more cost-effective than GPUs. However, the fact that the TPU supports only mixed precision training may become an issue sometimes. The downside is that there are a lot of hoops to jump through to be able to use the TPU, according to the Paperspace blog post. And some of them are quite intimidating. Also, only Tensorflow supports the TPU so far. Update on 2018/4/26 Comparing Google’s TPUv2 against Nvidia’s V100 on ResNet-50 Google recently added the Tensor Processing Unit v2 (TPUv2), a custom-developed microchip to accelerate deep learning…blog.riseml.com This new post by RiseML showed that the TPU might be even more cost-effective than we thought, and the top-1 accuracy (on the validation set) is a bit better coming from the TPU than from the GPU. Airbnb Listing Embeddings This post by Airbnb describes how they embed every listing on their platform to improve similar listing recommendations and, later, real-time search personalization. It is well-written and easy to read.
The model evaluation parts are particularly interesting. The methodology should be applicable to other similarity problems, too. Listing Embeddings for Similar Listing Recommendations and Real-time Personalization in Search Authors: Mihajlo Grbovic, Haibin Cheng, Qing Zhang, Lynn Yang, Phillippe Siclait and Matt Jonesmedium.com Pyception A freaking hilarious and surprisingly educational parody by Anaconda. A Nice Telegram Channel about Data Science (I’m not affiliated with the channel.) I’ve found the links posted in this channel relevant and informative: Data Science Boom Boom! Data Science, Machine Learning, Artificial Intelligence news and learning resources. Our Data Science discussion…t.me
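To make the listing-embedding idea above concrete, here is a minimal sketch of ranking similar listings by cosine similarity. The four-dimensional vectors and listing names are made up for illustration; in the Airbnb post the embeddings are learned from user click sessions, not hand-crafted like this:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 means identical direction
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings for three hypothetical listings
listings = {
    "loft_downtown": np.array([0.9, 0.1, 0.3, 0.7]),
    "loft_riverside": np.array([0.8, 0.2, 0.4, 0.6]),
    "cabin_mountain": np.array([0.1, 0.9, 0.8, 0.1]),
}

# Rank the other listings by similarity to a query listing
query = "loft_downtown"
ranked = sorted(
    ((name, cosine_similarity(listings[query], vec))
     for name, vec in listings.items() if name != query),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # the other loft should rank above the cabin
```

The "similar listings" carousel then just shows the top of this ranking; the real system differs mainly in how the vectors are trained and in serving scale.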
TPU, Listing Embeddings, Pyception
10
tpu-listing-embeddings-pyception-1b3b014c1012
2018-05-22
2018-05-22 14:51:24
https://medium.com/s/story/tpu-listing-embeddings-pyception-1b3b014c1012
false
413
Pretending to write about data science, deep learning, and some others (a.k.a. the whole AI package).
null
null
null
The Artificial Impostor
ceshine@ceshine.net
the-artificial-impostor
DATA SCIENCE,DEEP LEARNING,PYTHON,R LANGUAGE,MACHINE LEARNING
ceshine_en
Reading
reading
Reading
22,440
Ceshine Lee
Humanist. Data Geek.
a50580c33120
ceshine
646
89
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-27
2018-07-27 16:43:37
2018-07-18
2018-07-18 00:00:00
0
false
en
2018-07-27
2018-07-27 16:45:04
21
1b3b9ecece19
4.581132
0
0
0
By Cheryl Porro and Katharine Bierce, Salesforce
5
What CSR Professionals Should Know about Artificial Intelligence By Cheryl Porro and Katharine Bierce, Salesforce Like many employees today, CSR professionals are facing the challenge of “too much data and not sure what to do with it.” If you’re interested in using data for your impact work with community partners and personalizing the employee experience of how they can live their purpose through the company… keep reading. Good intelligence needs good data Artificial intelligence (AI) is a tremendous opportunity, but it also comes with the responsibility to monitor data quality, the processes for collecting data, and how data impacts social justice. Because they need lots of examples in order to be trained, artificial intelligence and machine learning algorithms are dependent on the quality of the data they are fed. In a nutshell: garbage in, garbage out. When automated systems “learn” from non-representative or poorly curated data, they produce biased results. For example, an article by ProPublica demonstrated that when biased data was fed to an algorithm, it was more likely to incorrectly categorize black defendants as having a high risk of reoffending and more likely to incorrectly categorize white defendants as low risk. Machine learning apps that use biased data amplify those biases — unless we take steps to keep our algorithms honest. “If your data is bad, your machine learning tools are useless,” as Harvard Business Review notes. Transparency matters in keeping AI honest. As such, CSR professionals should carefully consider the data behind predictive models, to enable both accurate and helpful predictions, and thereby improve trust with constituents. Social impact reports central to brand trust Trust is a key Salesforce value. In an interview on CBS This Morning, CEO Marc Benioff called for a national privacy law in the U.S.
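The "garbage in, garbage out" point above can be made concrete with a toy sketch: when a model's mistakes concentrate in one group, the per-group false positive rate (the metric at the center of the ProPublica analysis) reveals it. The records below are entirely invented for illustration, not real predictions or real data:

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, False), ("B", False, True), ("B", True, True), ("B", False, False),
]

def false_positive_rate(group):
    """Share of people in a group who did NOT reoffend but were still flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    return sum(1 for r in negatives if r[1]) / len(negatives)

# Group A's non-reoffenders are flagged far more often than group B's,
# even though overall accuracy might look acceptable.
print(false_positive_rate("A"), false_positive_rate("B"))
```

Checking simple disaggregated metrics like this is one practical way to "carefully consider the data behind predictive models" before trusting their outputs.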
Improving trust requires regulating data quality, including processes for collecting, collaborating on, and improving data so that it can be used to help people more effectively. If you want to create a predictive model, you might start in a spreadsheet, but you can’t just stay there. Collecting social impact program data in one place is the first step to being able to leverage the fancy math that powers machine learning and artificial intelligence. As any IT systems administrator knows, when data lives in different disconnected systems, it’s harder to derive insights than when it’s on one platform. We have outlined an introduction to AI for Good, principles to follow, organizations leading in artificial intelligence for social impact and How to Build Ethics into AI: Part I and Part II. There are examples of organizations using AI for nonprofit fundraising, marketing and program management as well as in higher education and donor engagement. Employees love AI; personalization is IN In addition to applications outside the company, AI can also be applied internally, to enhance the employee experience. While having a searchable database of volunteer opportunities is great, it’s even more compelling to have personalized recommendations on how to help based on an employee’s volunteering interests. CSR professionals think a lot about rising above the noise to gain employees’ attention and getting them to take action. Personalization is one way to do that! With new technology, CSR professionals can help people see the impact of their work. Salesforce.org Philanthropy Cloud doesn’t require anything external to personalize your impact. Launching in late June, Philanthropy Cloud will apply machine learning and predictive analytics techniques to giving and volunteering! Plus, it’ll integrate with existing single-sign-on systems, have a full API that companies can tap for integration purposes, and offer other exciting enterprise-ready features.
The big picture is: We can move from a “people like you also bought this item online” predictive algorithm to “people like you also volunteered/donated to this cause.” Algorithms can help match people to meaningful ways to create positive change at scale. AI recommendations form communities of like-minded employees I (Cheryl Porro) started off my pro bono volunteering journey from first being asked “hey, can you help our nonprofit with Salesforce?” to being personally asked to become a board member of First Graduate. People often become involved with a cause because someone recommends a volunteer project specific to their interests. Personal recommendations matter. But with recommendations powered by machine learning, the “tap on a shoulder” effect can scale to millions. At Salesforce, 84% of our employees volunteered at least once last year. In the year since we’ve launched personalized recommendations, 78% of that group have volunteered more than once! By connecting people to causes that are personally relevant and meaningful, we can drive deeper, lasting engagement. We’re building a platform for scaling impact: “Salesforce.org Philanthropy Cloud empowers people to take action and improve the state of the world. Einstein-powered recommendations help people connect with causes that are personally meaningful and make an impact where they are most passionate. Philanthropy Cloud brings Salesforce.org innovation together with the content and services necessary to build a thriving CSR employee engagement program. Through our partnership with United Way, the world leader in workplace giving, any company can help their employees work with purpose.” -Nick Bailey, Salesforce.org VP of Innovation and Products AI unlocks potential and progress We foresee an era where artificial intelligence can power personalized recommendations to connect people and causes to make more of an impact. 
Additionally, CSR professionals can think about how to engage their communities to provide corporate data scientists with more representative training data. Technology can improve workforce development, as Southern New Hampshire University is doing with refugees, and connect individuals to opportunities to be ready for future jobs. Another example of empowering individuals with artificial intelligence to unlock human potential is AI4ALL, which works to educate and support the next generation of diverse leaders in AI. In conclusion: think about what we are working for, rather than what we are working against. What kind of world do you want to live in? Working on what we call “AI for Good” is an effort that asks us to bring forth our best selves in service of equality and well-being for all. TAKE THE NEXT STEP IN AI FOR CSR About the Authors Cheryl Porro is senior vice president of technology and products at Salesforce.org, a social enterprise that gets technology in the hands of nonprofits and education institutions so they can connect with others and do more good. Cheryl leads her team to deliver transformational technology impact to tens of thousands of organizations through nonprofit and higher education products, including Nonprofit Success Pack and Salesforce Advisor Link. She also leads the enterprise business systems team that supports and streamlines Salesforce.org operations with enterprise grade technology solutions. Follow her on Twitter: @cporro_sfdc. Katharine Bierce serves as editor-in-chief of the Salesforce.org blog and helps create e-books and other digital content at Salesforce.org. She is a lifetime member of Net Impact, a StartingBloc fellow, and has volunteered in producing “tech for good” events and content with the SFTech4Good Meetup (a NetSquared community) since 2014. A self-described “full-stack human,” she is an avid meditator and yogi. 
When she’s not managing digital content, you can find her teaching or taking yoga classes around the San Francisco bay area. Follow her on Twitter: @kbierce Originally published at cecp.co on June 29, 2018.
What CSR Professionals Should Know about Artificial Intelligence
0
what-csr-professionals-should-know-about-artificial-intelligence-1b3b9ecece19
2018-07-27
2018-07-27 16:45:04
https://medium.com/s/story/what-csr-professionals-should-know-about-artificial-intelligence-1b3b9ecece19
false
1,214
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
CECP
CECP is a coalition of CEOs who believe that societal engagement is an essential measure of business performance.
bcb6606e32b2
cecptweets
816
730
20,181,104
null
null
null
null
null
null
0
git clone https://github.com/pjreddie/darknet.git
cd darknet
make
mkdir -p obj
./darknet

output: usage: ./darknet <function>

classes = 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/

1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667

data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg

Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8
9002: 0.211667, 0.060730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds

./darknet detector test data/obj.data yolo-obj.cfg yolo-obj_xxxx.weights
./darknet detector test data/obj.data yolo-obj.cfg yolo-obj_xxxx.weights images/test.jpg -thresh 0.6
8
null
2018-02-09
2018-02-09 05:03:51
2018-02-09
2018-02-09 05:05:07
1
false
en
2018-02-09
2018-02-09 05:05:07
7
1b3d5d15e824
3.441509
1
0
0
You only look once (YOLO) is a state-of-the-art, real-time object detection system. It comes with a few pre-trained classifiers but I…
4
Face detection with Darknet Yolo You only look once (YOLO) is a state-of-the-art, real-time object detection system. It comes with a few pre-trained classifiers, but I decided to train with my own data to find out how well it’s made, the potential of image recognition in general, and its application in real-life situations. If you are a fan of HBO’s Silicon Valley TV series, you might be aware of the famous Not Hotdog app that Jìng-Yáng built. This is similar, very basic, and detects whether an image has me in it or not. To get started, we need to install Darknet with two dependencies — OpenCV and CUDA for faster computation. The following steps were performed on an Ubuntu 16.04 machine with an Nvidia GTX 1060. Installing Darknet To verify your installation, run darknet with no arguments. If you get the usage output (usage: ./darknet <function>), you’re good to go to the next step! Compiling with CUDA (optional) Compiling with your GPU is many times faster than with your CPU. To install CUDA, you’ll need a compatible Nvidia GPU. For installation, download CUDA (make sure it is version 8) and follow the instructions on the website. To enable CUDA, change the first line of the Makefile in the base directory to GPU=1 and run ‘make’ in the terminal. Compiling with OpenCV (optional) To support multiple media formats, install OpenCV. Check the instructions here. Similar to CUDA, change the Makefile to read OPENCV=1 to enable OpenCV, and then run ‘make’ in the terminal to build the darknet application.
Create file yolo-obj.cfg with the same content as yolo-voc.2.0.cfg (or copy yolo-voc.2.0.cfg to yolo-obj.cfg) and: change line batch to batch=64; change line subdivisions to subdivisions=8; change line classes=20 to your number of objects; change line #237 from filters=125 to filters=(classes + 5)*5, so if classes=2 it should be filters=35. Create file obj.names in the directory darknet\data\, with object names — each on a new line. Create file obj.data in the directory darknet\data\, containing (where classes = number of objects): Put image files (.jpg) of your objects in the directory darknet\obj\. Create a .txt file for each .jpg image file — in the same directory and with the same name, but with the .txt extension — and put in that file the object number and object coordinates on the image, one object per line: <object-class> <x> <y> <width> <height> Where: <object-class> is an integer object number from 0 to (classes-1), and <x> <y> <width> <height> are float values relative to the image’s width and height. Use the BBox-Label-Tool to get the face coordinates from your images. For example, for img1.jpg you should create img1.txt containing: Create file train.txt in the directory darknet\data\, with the filenames of your images, each filename on a new line, with paths relative to ./darknet, for example containing: Download the pre-trained weights for the convolutional layers (76 MB) here and put them in the main directory. Start training by using the command line: ./darknet detector train data/obj.data yolo-obj.cfg darknet19_448.conv.23 After training is complete, get the result yolo-obj_xxxxx.weights from darknet\backup\. After each 1000 iterations you can stop and later resume training from that point.
For example, after 2000 iterations you can stop training, and later just copy yolo-obj_2000.weights from darknet\backup\ to the main directory and resume training using: ./darknet detector train data/obj.data yolo-obj.cfg yolo-obj_2000.weights During training, you will see varying indicators of error. When the average loss (0.xxxxxx avg) no longer decreases over many iterations, you should stop training. Test your trained weights using the command After over 40000 iterations I found my results to be fairly accurate. By default, YOLO only displays objects detected with a confidence of .25 or higher. You can change this by passing the -thresh <val> flag to the yolo command. I don’t know if an application based on Darknet YOLO already exists. Building one would make this complete and useful for real-life applications. Something that can iterate through multiple images and save the results for easy insights should be the next step. If you have built one already, let me know. But doing this small exercise made me appreciate the power of image recognition. If you’re a mechanical engineer like myself, you could instantly build a tool for manufacturing companies to study material flaws in industrial radiography. Or a sign language translator for hearing-impaired people. The possibilities are endless. Originally published at ashishkhan.com.
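As a footnote, the label format described above (<object-class> <x> <y> <width> <height>, with x/y at the box center and all four values normalized by image size) can be produced from pixel coordinates with a small helper. This is a sketch written for illustration; it is not part of the original post or of BBox-Label-Tool:

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel bounding box (x_min, y_min, x_max, y_max) into a
    Darknet label line: <object-class> <x_center> <y_center> <width> <height>,
    with all four coordinates normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2.0 / img_w   # box center, normalized
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w           # box size, normalized
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A 200x100-pixel face box in a 400x400 image, labeled as class 1:
print(to_yolo_label(1, (100, 100, 300, 200), 400, 400))
# → 1 0.500000 0.375000 0.500000 0.250000
```

Writing one such line per object into the matching .txt file is exactly what the training setup above expects.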
Face detection with Darknet Yolo
2
face-detection-with-darknet-yolo-1b3d5d15e824
2018-06-14
2018-06-14 14:22:22
https://medium.com/s/story/face-detection-with-darknet-yolo-1b3d5d15e824
false
859
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ashish
null
1c608b3d5ad9
ashishkhan
66
66
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-03
2017-10-03 18:53:43
2017-10-03
2017-10-03 19:00:30
1
false
en
2017-10-05
2017-10-05 00:35:20
0
1b3d8c46398e
8.539623
1
0
0
The business of property and casualty insurance — assessing risk, collecting premiums and paying claims — hasn’t changed much since 1861…
5
Focus on the end-to-end Customer Experience (CX) INSURTECH: Top 10 Digital Business Transformation opportunities in Property and Casualty Insurance The business of property and casualty insurance — assessing risk, collecting premiums and paying claims — hasn’t changed much since 1861, when a group of underwriters sold the first policies to protect London homeowners against losses from fire. Recently, though, the insurance industry has been challenged by technological and behavioral change and has embarked on a radical transformation, one spurred by a series of digital innovations whose widespread adoption is just a few years away. Seven key technologies have already begun to disrupt the industry — infrastructure and productivity, online sales applications, advanced analytics, machine learning, the Internet of Things, distributed ledger and virtual reality — and their impact will accelerate in the next three to five years. These new technologies will be a boon for consumers, bringing more choice, better service, and lower prices. Companies stand to benefit too, especially those that use digitalization as an opportunity to rethink all their operations, from underwriting to customer service to claims management. Customers now expect their insurers to offer simple, transparent and flexible products and services — all online. In Australia, for example, you can use your smartphone to snap a photo of something you want to insure, such as a bicycle; upload the picture into an app called Trov, and then request a policy for a specific period — say, a month. Trov uses available data about you and your bicycle and, within seconds, comes back with an offer. If you like the terms, you press the “I accept” button, and you’re covered. Claims are also handled online, with a rapid exchange of photos and texts. In the future, insurers won’t need to dispatch human adjusters to gather facts and evaluate accident damage.
Using machine learning, automated advisers will draw on virtual reconstructions of the accident and a wealth of background data. They’ll enter into a digital dialogue with customers and immediately inform them where any damage can best be repaired. Insurance firms are being disrupted, are disrupting, and are going through important digital transformation projects, which de facto often revolve around the need to optimize the end-to-end customer experience. Insurers can improve their policy, distribution and claims functions, and benefit from analytics, mobility, digital marketing, cloud and social media. 1. Customer Experience (CX) as top priority — The customer experience challenge starts with the way policyholders communicate. They are used to communicating via email, text and a whole range of new channels and formats. Furthermore, if they’re not satisfied they use social media to tell the world. This challenge affects many industries, including other areas of the financial services industry. However, in the insurance industry, dissatisfaction is easily voiced, as customers expect almost immediate accuracy and responsiveness. This is a huge challenge, especially as 84% of customers trust other consumers’ experiences. On top of that, there is the very nature of the insurance business, whereby incidents can cause a sudden rise in claims with even more customer expectations. When such incidents happen on a larger scale, they can also be closely monitored by the media and regulators, adding more pressure to the equation. Customer experience expectations, customer service demands, and customer behavior have of course changed in virtual industries, as consumers expect the same levels of quality and speed they are used to in other markets. This behavior is traditionally attributed to Generation Y but is showing across segments of more digital-savvy consumers.
Insurers deciding which digital technologies to pursue should ask themselves a simple, and fundamental, question: Will it enhance the customer’s experience? Putting the customer first is more than a platitude. An improved customer journey — one built on ultra-precise information, greater transparency, more flexibility and simplified interactions — is good for business. Growth in a digital world requires P&C executives to continually think from the customer’s perspective — that is, outside-in rather than from the company outward. With the help of digitalization, the P&C industry can benefit in claims, product, policy & underwriting and distribution & marketing of insurance. Insurers have to transition from product-centricity to customer-centricity. Take the typical experience of a customer calling an insurer today. It’s likely an automated answering service will tell the customer to press buttons 1, 2 or 3 for various options. With machine learning, though, insurers will be able to serve a customer much faster and more effectively, without all the button-pushing. The system will instantly analyze the customer’s flow of communications across all channels, including past phone calls, letters, emails and even public social media postings. When the customer starts speaking, the computer can analyze the tone of voice, determining whether the caller is confused or angry or both. Armed with all this information, a virtual agent can assess the customer’s needs and suggest a solution. 2. Direct link between claims processing and customer retention — Today, when we focus on how digital technology can transform claims, the driver for transformation is clearly how to deliver a compelling customer experience. Claims transformation has often been driven by a need to improve efficiency or productivity, and claims leaders clearly understand the link between claims and their direct impact on customers.
Many insurance carriers struggle to efficiently handle a request that includes inputs from different communication channels and provide the instant response their customers expect. As customer expectations continue to change, insurers must reimagine the role of claims. This requires an end-to-end design thinking approach and embracing digital. Customers expect more efficiency and transparency with claims, and they expect to have several channels for submitting and settling claims. Insurers can use automation, artificial intelligence, and advanced analytics to transform the claims process and identify trends and potential problems. 3. Claims Prevention — The claims process within property and casualty insurance is on the cusp of benefiting from digital transformation. Digital technologies will also help with claims prevention, thanks to the Internet of Things. In the future, for example, a sensor will be able to monitor a household’s water consumption patterns, detecting potential leaks and interrupting the flow before the basement is flooded, thus preventing major damage and a costly claim. For fraudulent claim reduction, several technologies can be leveraged to improve insurance claims fraud detection. These include more traditional information management approaches but also more advanced claims fraud detection systems like predictive analytics, social or sentiment analysis tools and data visualization tools. 4. Automation, Machine Learning, and AI — Insurers are adopting these technologies for better outcomes. Today, digital technology has become top of mind for claims leaders, and the potential of artificial intelligence and machine learning to transform claims has not gone unnoticed. The industry is moving past simple automation and starting to see use cases of more advanced technology.
5. Robotic Process Automation (RPA) for Information management — In the highly competitive insurance environment, there is enormous pressure to increase efficiency and streamline operations. Most carriers struggle with manual process steps — including many repetitive tasks, such as checking a claim input for completeness and requesting missing information like an auto accident police report. The handling of an underwriting request or claim requires matching the request with customer information that in most cases resides in legacy systems. Finding this data, such as customer status and associated entitlement to reimbursement, is often a bottleneck, but with the use of RPA the process can be automated to deliver a seamless claims process. 6. RPA to cut compliance costs — The pressure is rising for leaders to find new ways of cutting compliance costs, which can be huge. Strategic CXOs are using robotics as a solution to this issue. Automating rote, rule-based compliance tasks allows them to reduce costs and reinvest in other digitization efforts. Robotic process automation (RPA) is a natural fit in this new InsurTech environment because change can be delivered with speed and agility to realize benefits quickly. Further, RPA can automate the end-to-end lifecycle by integrating new front-end digital technologies with back-office environments. 7. Internet of Things & Big Data — By 2020, an estimated 50 billion devices will connect the 8 billion people on our planet to their cars, homes, communities, medical information, and work. This will generate a huge amount of data to be analyzed and monetized. Insurers that make use of better data and embrace digital transformation will become leaders in the industry. The Internet of Things will provide immense amounts of data that can inform new ways of assessing risk and tailoring pricing. Insurers will be able to obtain better data for claims settlements and develop new, personalized risk protection services.
They can also offer safe driving and home incentives. 8. Overcoming Data silos — Digital transformation requires breaking down the classic silos of front, middle and back offices to transform them into a unified, seamless customer experience. That means leaders must go from running the back office to using digital to integrate the front, middle and back offices. It’s more than just a customer-facing app or even fully automated back-office processes. Digital transformation requires contextual data and a middle office that can move from explaining the past to predicting the future. Claims executives will need to focus on this area to fully leverage the opportunities of digital. Data exists everywhere, and visualization tools let critical information and insights be presented in a clear, understandable manner. This not only provides more data visibility but also the ability to look for historical patterns and trends. Better data visualization improves leaders’ ability to communicate their strategies and make better decisions. Using more transactional data enables advisors to continue leveraging analytics and business insights to drive increased performance. 9. Data Extraction through Automation — Data collection has been at the heart of insurance business processes since the birth of the industry. As insurers advance into the digital age and tackle the vagaries of reshaping business processes to increase success, they need to re-evaluate how they access their data. New technologies combine advanced machine computing and analytics with the contextual recognition powers of the human brain to accurately recognize printed and handwritten text and transform it into structured digital data. By embracing these offerings, digital insurers can eliminate the complexity and inefficiency associated with the process of creating a structured, digital data set. 10.
Commoditization in the Insurance Industry — Consumers generally don’t want a relationship with their insurance company. Many insurance offerings in the P&C (Property and Casualty) area are seen as commodities, something consumers need to pay for — and would even prefer to avoid. Cost is the differentiating factor in this regard for many consumers, but it’s also a dangerous one for insurance companies. The narrative and messaging regarding such insurance products regularly revolve around the pricing aspect. When typing the term “car insurance” into Google at the time of writing, the first organic (not paid for) result I get is a web page that allows me to compare cheap car insurance quotes. The paid results use terms such as “cheap”, “pay less”, “cheapest rates” and “40% less”. The cost aspect is not just key in the messaging of the usual suspects such as insurance comparison websites, many of which effectively let you also “buy” the insurance, as they team up with various insurance providers. Some insurance companies have responded by launching online platforms — often as separate brands — themselves. Others have adopted the same price-driven narrative. Wrap Up Tackling the challenges: Focus on the end-to-end customer experience Consumers don’t want relationships with insurance companies for commoditized products, but they do want to protect what is dear to them, and when push comes to shove, the claim in case of an incident is an important moment of truth. It’s then that consumers think about their insurance provider: when the unforeseen happens. InsurTech is not a silver bullet — the real challenge for insurers is to become more innovative in their everyday business. Gradually, there has been a shift to look at the overall customer lifecycle and be more customer-centric in order to reduce costs and increase revenue. The general change potential of digitalization is subject to intense market discussions.
For P&C insurers battling in a fiercely competitive marketplace, digitalization is a multibillion-dollar opportunity. Using digital tools, insurers can lift profits while delivering new services, lower premiums and an all-around better experience to their customers. ABOUT THE AUTHOR Vartul Mittal is an Independent Director — Technology & Innovation and a Global Business Transformation & Automation leader. He has 11+ years of strong Global Business Transformation experience in Management Consulting and with GICs, with a remit to drive understanding and deliver Business & Operations Strategy solutions globally. He is always looking for new ideas and ways to make things simpler. A Mechanical Engineer and MBA by education and a Digital Business Transformation & Automation Consultant by profession, he is essentially a Technology Evangelist by passion. He lives his life around technology and is particularly keen to explore the intersection of technology and human behavior. The ease with which he can explain the most complex concepts impresses people around him. Vartul is a notable keynote speaker on Digital Automation and Innovation at top universities and international conferences.
INSURTECH: Top 10 Digital Business Transformation opportunities in Property and Casualty Insurance
1
insurtech-top-10-digital-business-transformation-opportunities-in-property-and-casualty-insurance-1b3d8c46398e
2018-05-21
2018-05-21 10:13:03
https://medium.com/s/story/insurtech-top-10-digital-business-transformation-opportunities-in-property-and-casualty-insurance-1b3d8c46398e
false
2,210
null
null
null
null
null
null
null
null
null
Insurance
insurance
Insurance
13,823
Vartul Mittal
Global Business Transformation & Automation Leader | Customer Experience Designer | Social Evangelist | Traveler | Observer | Thinker | Author | Speaker | Coach
fa236c8982c0
vratulmittal
186
123
20,181,104
null
null
null
null
null
null
0
null
0
null
2016-04-19
2016-04-19 20:39:49
2016-04-19
2016-04-19 20:42:58
13
false
en
2017-09-25
2017-09-25 16:10:41
58
1b400dcee412
5.366038
2,013
51
2
With the recent advances in affordable, reputable online education, going back to college/university seems irresponsible
5
By DAVID VENTURI I Dropped Out of School to Create My Own Data Science Master’s — Here’s My Curriculum With the recent advances in affordable, reputable online education, going back to college/university seems irresponsible I dropped out of a top computer science program to teach myself data science using online resources like Udacity, edX, and Coursera. The decision was not difficult. I could learn the content I wanted to faster, more efficiently, and for a fraction of the cost. I already had a university degree and, perhaps more importantly, I already had the university experience. Paying $30K+ to go back to school seemed irresponsible. Udacity Over University: Why I Chose Online Education A stroke of luck helped me discover MOOCs and my career path.medium.com Here are my curriculum choices and the rationale behind them. Using thousands of course ratings and reviews from Class Central, I selected the best computer science, data science, and machine learning courses from world-class institutions like Harvard, Stanford, MIT, Berkeley, Google, and Facebook. You can read my detailed reviews for most of these courses here on Medium or on my personal website — davidventuri.com. My curriculum covers both Python and R, which are the two most popular programming languages for data science. Note 1: if you’re looking for an online data science curriculum to follow, the link below contains my most up-to-date recommendations. I started creating this project midway through my personal data science master’s. Note 2: In May 2017, I paused my progress in this program because I joined Udacity as a Content Developer. Another benefit of personalized online education — flexibility! The best Data Science courses on the internet, ranked by your reviews Here are the best overall courses for each subject within data science. 
Together these form a comprehensive data science curriculum.medium.freecodecamp.com Bridging Module A solid computer science foundation ✔ Intro to Programming Nanodegree (Udacity) (REVIEW) ✔ CS50: Introduction to Computer Science (Harvard/edX) (REVIEW) ✔ Mathematics for Computer Science (MIT) Bridging Module Why a bridging module? I wanted a solid computer science foundation before I started learning data science. My engineering background gave me a head start on the math and stats. Completing these three courses means I will have completed a standard first-year computer science curriculum, plus the full mathematical and statistical core. The following courses from my undergrad chemical engineering program are also core computer science courses: ✔ Linear Algebra ✔ Calculus ✔ Multivariable Calculus ✔ Statistics I ✔ Statistics II Data Science Core The fundamentals ✔ Data Analyst Nanodegree (Udacity) (REVIEW) Listed below are the individual courses contained within the Nanodegree. The estimated timeline for graduation is 378 hours. ✔ Intro to Inferential Statistics ✔ Intro to Descriptive Statistics ✔ Intro to Data Analysis (Using NumPy and Pandas) ✔ Data Wrangling ✔ SQL for Data Analysis ✔ MongoDB for Data Analysis ✔ Data Analysis with R ✔ Intro to Machine Learning ✔ Data Visualization and D3.js ✔ A/B Testing Three courses from the Udacity Data Analyst Nanodegree Why the Udacity Data Analyst Nanodegree? First and foremost, it received stellar reviews. Second, I wanted a consistent learning experience for my introduction to the field. The Data Analyst Nanodegree offered a combination of breadth, depth, and cohesiveness that a combination of content from various providers would be hard pressed to provide. I am also a fan of their “less passive listening (no long lectures) and more active doing” approach to education. What is a Udacity Nanodegree? 
Machine Learning Learning from data ✔ Machine Learning (Stanford University/Coursera) Creative Applications of Deep Learning with TensorFlow (Kadenze) (IN PROGRESS) Distributed Machine Learning with Apache Spark (University of California, Berkeley/edX) Stanford University, TensorFlow (Google’s open source software library for machine learning), and The University of California, Berkeley Software Engineering Best practices ✔ Software Testing (Udacity) Software Debugging (Udacity) ✔ How to Use Git & GitHub: Version Control for Code (Udacity) Mastering Software Development in R Specialization (Johns Hopkins University/Coursera) (IN PROGRESS) Listed below are the individual courses contained within Johns Hopkins University’s “Mastering Software Development in R Specialization” on Coursera: ✔ The R Programming Environment Advanced R Programming Building R Packages Building Data Visualization Tools Johns Hopkins University’s “Mastering Software Development in R Specialization” on Coursera Why software engineering? The role of software engineering in data science is covered in great detail here by Alec Smith (a data science recruiter) and here by Roger Peng (Johns Hopkins University professor and “Mastering Software Development in R Specialization” creator). A quote from the former: A lot of data science work is software engineering. Not always in the sense of designing robust systems, but simply writing software. A lot of tasks you can automate and if you want to run experiments, you have to write code, and if you can do it fast, it makes a huge difference. And from the Mastering Software Development in R Specialization page: As the field of data science evolves, it has become clear that software development skills are essential for producing useful data science results and products. You will learn modern software development practices to build tools that are highly reusable, modular, and suitable for use in a team-based environment or a community of developers. 
Back End Development Storing and manipulating data ✔ Intro to Backend (Udacity) ✔ Developing Scalable Apps in Python (Google/Udacity) ✔ Configuring Linux Web Servers (Udacity) ✔ Linux Command Line Basics (Udacity) ✔ Introduction to Databases (Stanford University) Why back end development? This Quora page and this Udacity article suggest that back end development and data science can be a useful combination. These Udacity courses, which are the back end courses in their Full Stack Web Developer Nanodegree, along with Stanford’s top-ranked databases course, add an aspect of data engineering to the curriculum. Additional Resources Filling in the gaps. Suggestions welcome! ✔ Intro to Hadoop and MapReduce (Cloudera/Udacity) ✔ Using Python to Access Web Data (University of Michigan/Coursera) ✔ Building a Data Science Team (Johns Hopkins University/Coursera) This section is fluid. Additional resources will be added as I progress through the curriculum. That’s it! Many thanks to Dhawal Shah of Class Central, as the ratings and reviews from his online course search engine (plus a few insider tips) helped guide the above curriculum choices. If you have any recommendations for the curriculum, the above subject material in general, or would like to chat about your own educational goals, please don’t hesitate to contact me. Originally published at davidventuri.com. David Venturi (@venturidb) | Twitter The latest Tweets from David Venturi (@venturidb). Creating my own data science master’s degree. @queensu chem eng/econ…twitter.com
I Dropped Out of School to Create My Own Data Science Master’s — Here’s My Curriculum
6,118
i-dropped-out-of-school-to-create-my-own-data-science-master-s-here-s-my-curriculum-1b400dcee412
2018-06-21
2018-06-21 04:01:01
https://medium.com/s/story/i-dropped-out-of-school-to-create-my-own-data-science-master-s-here-s-my-curriculum-1b400dcee412
false
1,051
null
null
null
null
null
null
null
null
null
Programming
programming
Programming
80,554
David Venturi
Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program.
b3eb78490b02
davidventuri
14,662
263
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-08
2018-01-08 10:21:43
2018-01-08
2018-01-08 11:32:43
3
false
en
2018-01-08
2018-01-08 11:32:43
5
1b4153e0cdc9
3.636792
1
0
0
Artificial Intelligence has been one of the hottest topics in the field of technology in the past couple of years. The debate has been…
2
How Small Businesses Can Benefit from AI Artificial Intelligence has been one of the hottest topics in the field of technology in the past couple of years. The debate has centered on its usability and the type of benefits it can offer. It’s a known fact that big players such as Amazon and Microsoft have already invested substantially in developing the technology. What remains to be seen, however, is whether these benefits could also help small and medium-sized businesses (SMBs). Discussed below are some of the areas in which small businesses could benefit from AI. Optimal Resource Utilization One of the significant issues faced by small businesses is the lack of sufficient resources. This scarcity of resources in critical verticals such as Marketing, Sales, and Customer Service can be a huge setback, especially if they have to compete with big players in the market. Proper utilization of available resources could well be the difference between profitability and extinction for SMBs. Here is where AI can help, by ensuring optimal use of resources across various verticals. Marketing: When it comes to Marketing, SMBs usually work on spreadsheets to collect and collate customer data. Running marketing campaigns and processing leads by manually tracking these spreadsheets is a monotonous task that consumes considerable resources. By automating the task through AI tools, however, small companies can add efficiency and value to their marketing initiatives. For instance, identifying the segments of the target audience that provide the maximum return on investment becomes much easier when the CRM is integrated with AI. Similarly, quarterly and annual sales forecasts, as well as predictions of weekly or monthly sales leads, could become much more accurate, which could help management deploy marketing and sales resources optimally. 
Customer Service: Most queries from prospects and customers are simple, frequently asked questions. These trivial calls can nevertheless add an extra burden to the call volume handled by a small company’s contact center. The resulting shortage of resources for handling critical issues can leave customers with a bad experience. By deploying AI tools and chatbots, however, SMBs can track the type of query coming in through IVRs and direct the simpler issues to the chatbots, while issues that are complicated or require the expertise of qualified service personnel are handled accordingly. Moreover, machine learning techniques can also be used to predict the times and durations of maximum and minimum call volume, which helps in the optimal use of available resources. Lead generation: Artificial Intelligence can be used to nurture and capture new leads through social media. AI algorithms can track prospects through their social networking activity. Personalized content can be targeted at them at specific times to ensure they do not miss new offers and discounts. Once their details are captured through simple forms, the new leads can be followed up with relevant comparisons against competing offerings, which would eventually convert them into customers. Further opportunities for up-selling and cross-selling can also be created through constant engagement. AI algorithms can also be used to define detailed buyer personas with much more accuracy. By taking these personas into account, unique product features can be used to generate targeted content that encourages leads to respond positively. Analytical tools equipped with machine learning capabilities can be used to identify the right channels to match the content. Minimizing Cost AI can be highly helpful in crunching huge volumes of data and deriving valuable information from it. 
If performed manually, these tasks could require a considerable amount of resources, which in turn would increase costs for the small business and bring down the ROI. In the micro business environment, where even a minor decrease in returns can be disastrous, AI can prove to be a game changer. The technology would not only help bring down the overall cost of the process but could also add extra efficiency to it. Also, valuable resources can be diverted towards more critical tasks such as innovation and improvement, which could help SMBs maximize their growth and profitability. Thus, AI can help small businesses face the usual challenges of optimally using their scant resources and budget while improving their computing capabilities. By helping them identify the right insights, it can also improve their chances of surviving in the market and achieving continuous growth. Key Takeaways One of the significant issues faced by small businesses is the lack of sufficient resources, and that is the area in which AI can play a significant role. By helping optimize the use of resources, AI can bring down the overall cost of the process and also maximize its efficiency. Originally published at: www.orchestrate.com/blog/small-businesses-can-benefit-ai
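The call-volume prediction idea mentioned above can be sketched very simply: given historical call counts, average the volume per hour of day and pick the peak and trough hours for staffing. This is only an illustrative baseline, not any specific vendor's product, and the data below is made up.

```python
from collections import defaultdict

def hourly_call_profile(call_counts):
    """Average call volume per hour of day from (day, hour, count) records."""
    totals = defaultdict(list)
    for _day, hour, count in call_counts:
        totals[hour].append(count)
    return {hour: sum(c) / len(c) for hour, c in totals.items()}

def peak_and_trough(profile):
    """Return the hours with the highest and lowest average call volume."""
    peak = max(profile, key=profile.get)
    trough = min(profile, key=profile.get)
    return peak, trough

# Hypothetical two days of contact-center data: (day, hour, number of calls)
history = [
    (1, 9, 40), (1, 13, 85), (1, 17, 30),
    (2, 9, 50), (2, 13, 95), (2, 17, 20),
]
profile = hourly_call_profile(history)
peak, trough = peak_and_trough(profile)
print(peak, trough)  # 13 17 -> staff up at 1 pm, down at 5 pm
```

A real deployment would use a proper time-series model and far more history, but even this per-hour average captures the staffing insight the article describes.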
How Small Businesses Can Benefit from AI
4
how-small-businesses-can-benefit-from-ai-1b4153e0cdc9
2018-01-08
2018-01-08 21:11:40
https://medium.com/s/story/how-small-businesses-can-benefit-from-ai-1b4153e0cdc9
false
818
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Orchestrate Technologies, LLC
Orchestrate is a US based business process management organization with comprehensive services in IT, ITeS, finance, mortgage, and contact center.
d337f31e03f9
Orchestrate
31
192
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-21
2017-11-21 05:23:35
2017-11-21
2017-11-21 05:25:37
4
false
zh
2017-11-21
2017-11-21 05:25:37
5
1b415d5aed01
9.102
2
0
0
About ten of the WeChat accounts I follow are now all talking about deep learning; engineers and data scientists alike are learning how to use frameworks like TensorFlow and PyTorch. Everyone can implement deep learning models, but getting these models into real products is not an easy task.
4
A Brief Look at Google’s and Uber’s Deep Learning “Systems” About ten of the WeChat accounts I follow are now all talking about deep learning; engineers and data scientists alike are learning how to use frameworks like TensorFlow and PyTorch. Everyone can implement deep learning models, but getting these models into real products is not an easy task. Frameworks like TensorFlow are already very good for researchers, but in production you still have to face all kinds of anomalies and the problems that can arise at deployment. If you want to apply deep learning to your business, you need a system architecture that can be continuously improved and that handles program or data anomalies promptly; today’s deep learning frameworks are only a small part of that complete solution. Fortunately, some large companies have recently started describing their own machine learning platform architectures: Uber published an engineering blog post introducing its system Michelangelo (nicknamed “米” in the original Chinese post), and Google published its TFX architecture. Today, taking these two write-ups as samples, let’s talk briefly about production-grade practice for deep learning systems. Uber — Michelangelo Before Uber built Michelangelo, they ran into many problems deploying machine learning models. To give a few examples, data scientists built models with many different tools (R, scikit-learn), and engineers wrote all kinds of one-off models. In the open-source world there was no single system that let them repeat their experiments and scale to their data volumes. So they built Michelangelo. Looking at it closely, Michelangelo does not, strictly speaking, emphasize deep learning; it is more like a big-data system containing all kinds of machine learning algorithms. It is built on many open-source tools: HDFS, Spark, Samza, Cassandra, MLlib, XGBoost, and TensorFlow. Each of these tools plays its part: HDFS stores all of Uber’s transactional and log data, Kafka aggregates log messages from Uber’s various services, Samza runs stream computations and derives some real-time features from them, Cassandra provides real-time data access, and Uber has developed a series of its own service deployment tools. Overall, Uber’s machine learning system architecture has the following modules: 1. Manage data 2. Train models 3. Evaluate models 4. Deploy, make predictions and monitor A few details of Michelangelo Let’s skip the basic big-data architecture questions and focus on the parts more relevant to machine learning. First, for data and feature management, Michelangelo splits data into online data and offline data, managed separately, and uses Hive as a feature store to enable data sharing between teams. According to Uber’s article, the feature store currently holds roughly 10,000 different learning features, and teams across the company keep adding new features in real time. Each feature stores some associated information for maintenance, such as its author, a description, and an SLA. The feature store is updated in two ways: batch precompute, and real-time updates through Samza. On the modeling side, Uber designed its own DSL for selecting and transforming data features; through this dedicated DSL, Michelangelo can better train and visualize models. The machine learning models currently supported include decision trees, linear and logistic models, k-means, time series, and deep neural networks. I know that Facebook, Pinterest, and other companies have also developed their own DSLs to help machine learning developers focus on feature extraction and machine learning algorithms instead of writing systems code they are less good at. Another question everyone in industry cares about is the reproducibility of machine learning models, so Michelangelo puts considerable effort into storing model configurations, parameters, and evaluations. Every model goes through an automatic evaluation after training, and all parameters and metadata are stored in a database for re-analysis and deployment. This information includes: who trained the model, the training start and end times, the full model configuration, the training and test sets used, the distribution and importance of each feature, the model’s accuracy, the full model parameters, and visualization information. The design rationale behind Michelangelo closely mirrors Uber’s own business: the hope is that people can access different machine learning models more easily and share knowledge with others to raise the capability of the whole team. Both the feature store and reproducible model experiments make it convenient for every team to share the data and models they have built. Google TFX Google’s system architecture has long led the industry; they have a deep foundation in deep learning, plus the popular TensorFlow framework. At KDD 2017 they published the paper “TFX: A TensorFlow-based production scale machine learning 
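The feature-store idea described in the Michelangelo section (shared features carrying maintenance metadata such as author, description, and SLA) can be illustrated with a toy registry. The metadata fields follow Uber's post, but the API here is entirely invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class FeatureSpec:
    """Metadata kept alongside each shared feature, as in Uber's feature store."""
    name: str
    owner: str
    description: str
    sla_hours: int  # how stale the feature value is allowed to get

class FeatureStore:
    """Toy in-memory registry letting teams share features by name (hypothetical API)."""
    def __init__(self):
        self._specs = {}
        self._values = {}

    def register(self, spec: FeatureSpec):
        # Teams publish a spec once; everyone else discovers it by name.
        self._specs[spec.name] = spec

    def write(self, name: str, entity_id: str, value):
        self._values[(name, entity_id)] = value

    def read(self, name: str, entity_id: str):
        if name not in self._specs:
            raise KeyError(f"unregistered feature: {name}")
        return self._values[(name, entity_id)]

store = FeatureStore()
store.register(FeatureSpec("rider_7day_trips", "growth-team",
                           "trips taken in the last 7 days", sla_hours=24))
store.write("rider_7day_trips", "rider-42", 11)
print(store.read("rider_7day_trips", "rider-42"))  # 11
```

In Michelangelo the store is backed by Hive with batch and streaming (Samza) update paths; the point of the sketch is only the shared namespace plus per-feature metadata.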
platform”, introducing their internal learning platform. At a high level, Google’s paper also has four parts: 1. Manage Data 2. Train Models 3. Evaluate Models 4. Model Serving Google is a company that takes engineering very seriously and places great weight on system reliability and availability. In building its machine learning platform, Google focused on the following aspects: - Adaptability of the platform: different learning tasks can all run on it. - Continuous training: many training tasks are very complex, and new data must be incorporated quickly to optimize on top of previous models. - Simple configuration and tooling, helping every engineer get started quickly and share knowledge conveniently. - Production-grade reliability and scalability: at Google’s scale, every feature is exercised thousands of times every second, so reliability and scalability are exceptionally important. Let’s skim through some key points of TFX and see how Google divided up the problem and designed for it. Key points of TFX’s design The first thing the TFX paper mentions is that “Machine learning models are only as good as their training data”, so the first part of the system describes how to analyze data. First, TFX provides a set of tools to help users understand their data. When data enters the system, TFX computes statistics on feature value distributions, quantiles, means, and standard deviations. This helps engineers quickly grasp the shape of the data, and it also supports continuous training and continuous deployment. Second, TFX provides many feature transformation methods, saving engineers a great deal of work. The model itself remembers these transformations, so that training and serving behave consistently. Third, TFX performs very strict anomaly detection on the data. The team designed a schema file containing the features that appear in the data and, for each feature, its type, presence, value distribution, and so on. The model’s training data, test data, and online data are all checked against the schema to guard against anomalies. The idea comes from static type checking by compilers in programming language design. While rolling out anomaly detection, the TFX team met some resistance, because providing a schema means extra work, but after a few incidents many engineering teams became willing to adopt the automatic checks. With the data handled, the next thing to consider is the model itself. Here TFX mainly emphasizes “warm-starting”: using generalizable features trained on other tasks, then training further for a specific problem. On the modeling side it also emphasizes well-abstracted API frameworks; for TensorFlow, that means Estimators. Using a high-level framework makes programs easier to write and more fault-tolerant, and makes it less likely that data transformations or objective functions diverge between training and serving. The last part of the TFX paper covers model validation and deployment, which consider not only model accuracy but also performance, failure handling, and other metrics. To call a model “good”, we need to check that it is safe to serve: will it overload the system, use too much CPU or RAM, or crash at runtime? Only then do we check whether the model has better “prediction quality”, which requires both testing on offline datasets and online testing in a canary environment to collect real-time feedback. On the production side, TFX made several improvements to model efficiency and the serving framework. The paper gives two examples: one is “soft isolation” of models, which lets multiple users deploy multiple models at the same time without affecting each other; the other is a performance optimization of model serialization, a specialized protocol buffer that improves performance by 2-5x. Summary Reading about Uber’s and Google’s machine learning platform architectures helps one better understand the difficulties, and the solutions, that may come up when building a company-internal platform. Compared with a deep learning framework like TensorFlow, Uber’s and Google’s architectures put far more emphasis on how to manage and share learned features and models. Beyond that, Google’s platform also invests great effort in data quality and consistency, test automation and stability, and deployment stability and performance, a mindset much like the Test Driven Development of traditional software engineering. Continuous deployment, automatic rollback and recovery: these are all familiar software engineering problems, now being brought into the practice of advanced machine learning systems. As an ordinary programmer, I personally believe machine learning is not about having experts build models with Python and a deep learning framework and then throwing them over to a development team for implementation and testing. What we need is an end-to-end research and development process that every user can use conveniently, and I look forward to one day taking part in building such a system myself. Further reading Michelangelo: https://eng.uber.com/michelangelo/ TFX: A TensorFlow based Production Scale Machine Learning Platform: 
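TFX's schema-based anomaly check, described in the surrounding text, can be sketched as follows: infer a schema (type plus value range) from the training data, then validate each serving-time record against it. The schema representation here is a deliberately simplified stand-in for the real TFX schema, and the data is invented:

```python
def infer_schema(rows):
    """Infer {feature: (type, min, max)} from training data rows (list of dicts)."""
    schema = {}
    for row in rows:
        for name, value in row.items():
            t = type(value)
            if name not in schema:
                schema[name] = (t, value, value)
            else:
                t0, lo, hi = schema[name]
                if t is not t0:
                    raise TypeError(f"mixed types for feature {name}")
                schema[name] = (t0, min(lo, value), max(hi, value))
    return schema

def validate(schema, row):
    """Return a list of anomalies for one serving-time row, like TFX's data check."""
    anomalies = []
    for name, (t, lo, hi) in schema.items():
        if name not in row:
            anomalies.append(f"missing feature: {name}")
        elif type(row[name]) is not t:
            anomalies.append(f"wrong type for {name}")
        elif not (lo <= row[name] <= hi):
            anomalies.append(f"{name} out of range [{lo}, {hi}]")
    return anomalies

# Hypothetical training data establishes the expected shape of each feature
train = [{"trip_km": 3.2, "hour": 9}, {"trip_km": 12.5, "hour": 23}]
schema = infer_schema(train)
print(validate(schema, {"trip_km": 5.0, "hour": 11}))    # []
print(validate(schema, {"trip_km": 400.0, "hour": 11}))  # out-of-range anomaly
```

As in the compiler analogy from the article, the schema plays the role of a static type declaration: producing it is extra work up front, but every later batch of data gets checked automatically.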
http://www.kdd.org/kdd2017/papers/view/tfx-a-tensorflow-based-production-scale-machine-learning-platform TensorFlow Serving: https://www.tensorflow.org/serving/ Hidden Technical Debt in Machine Learning Systems: https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf Google and Ubers Best Practices for Deep Learning: https://medium.com/intuitionmachine/google-and-ubers-best-practices-for-deep-learning-58488a8899b6
A Brief Look at Google’s and Uber’s Deep Learning “Systems”
2
浅析google和uber的深度学习-系统-1b415d5aed01
2018-06-15
2018-06-15 03:46:05
https://medium.com/s/story/浅析google和uber的深度学习-系统-1b415d5aed01
false
170
null
null
null
null
null
null
null
null
null
Uber
uber
Uber
12,700
Dong Wang
Software Engineer, computer vision, machine learning, search, recommendation, algorithm and infrastructure.
fe62520df6d7
yaoyaowd
741
260
20,181,104
null
null
null
null
null
null
0
null
0
5efbc18d9d33
2018-06-22
2018-06-22 12:25:59
2018-06-25
2018-06-25 14:45:49
1
false
en
2018-07-16
2018-07-16 12:13:16
3
1b41ad2328ce
3.796226
4
0
0
You hear about this new app called BirdieBlue, and first thing you see when you check it out is a friendly, yet sarcastic logo of an…
5
BirdieBlue — not an ordinary blue bird You hear about this new app called BirdieBlue, and the first thing you see when you check it out is a friendly, yet sarcastic logo of an animated little bird, raising its eyebrow with distrust. There’s a whole story behind that. You see, BirdieBlue is not the average bird, flying from one piece of information to another, embracing the world and its knowledge under its wings. BirdieBlue possesses a special kind of wisdom: artificial intelligence. How was BirdieBlue “born”? In mankind’s growing need for knowledge, a new app was born. An app that identifies and extracts subjective information and opinions expressed in a piece of text, for the purpose of establishing the attitude of the writer in regards to a particular subject. Afterwards, the app categorizes that attitude as positive, negative or neutral — otherwise known as “sentiment analysis”. This shall be explored in a separate article. How did BirdieBlue evolve from a little baby bird to an adult? BirdieBlue, the app created for this purpose, was “born” just like all of us: naive and lacking experience, wisdom, or judgement. We are not born with such qualities, but only with the capacity to assimilate information, then to understand it, classify it, form thoughts and opinions and enrich our knowledge. Just like us, BirdieBlue first learnt the meaning of the aforementioned “sentiment analysis”. Next, it was given certain pieces of text that had already been classified as positive, negative or neutral by its trusty, experienced creators. This trained BirdieBlue’s artificial intelligence not only to absorb information, but slowly (and surely) to understand the basic difference between positive, negative and neutral. BirdieBlue’s migration from basic to advanced intelligence Let’s go deeper into the correlation between the human brain and the artificial intelligence that forms the brain of the BirdieBlue app. The more we know, read, see, and experience, the more informed we are. 
The education we get from parents, teachers, libraries and personal research enriches our brain and helps develop our capacity to synthesize, discern and form opinions. Constant “food for thought”, if you will, is needed to train our brain and make it function at the peak of its potential. The same applies to the proper functioning of any artificial intelligence mechanism: we provide simple information to it, to raise awareness. Then, slowly, we give it information that is more and more advanced, complicated and diverse, to train the “brain” of the application to think for itself and to make good judgements. Consequently, in time, experience and exposure to knowledge translate into a growing quality of wisdom. To cut a long story short, the developers behind BirdieBlue made sure to provide the app with small pieces of already classified text, gradually evolving from short, simple texts to texts that were slightly more complex or elaborate. With this coaching and support, the app first learns to discern between the simple positive / negative / neutral, and its mechanism is becoming more advanced every day. BirdieBlue will gain, in time, the capacity to see subtleties in a text, to recognize sarcasm, and to classify the nuances of “positive” or “negative” based on the relevance of the statement in the text: was it mildly negative or extremely negative, is the negative opinion related to a high-importance subject, or is that topic random or even irrelevant? BirdieBlue’s expertise Going back to the human mind analogy, the artificial intelligence of BirdieBlue targets a specific “area of expertise”. Just like teenagers going to college based on their preferred career path, BirdieBlue is a continuously growing expert in its preferred domain: the crypto world. Of course, in time it will develop its knowledge and experience and spread its wings wider, but at this point BirdieBlue performs sentiment analysis related to Bitcoin. 
BirdieBlue’s “food for thought” Let’s face it, all birds know how to chirp, sing, and most importantly, tweet! But our birdie also knows the best ways to stay inspired. To be specific, BirdieBlue classifies the attitudes, opinions, perceptions and feelings that Twitter users express about Bitcoin. Let’s call Twitter “BirdieBlue’s library”, where it goes constantly to enrich its knowledge and make sure it always gets the latest tweets to analyse. BirdieBlue is never behind on its homework, so to say, and it is always up to date with whatever tweets are posted by Twitter users. Why did BirdieBlue choose Twitter as its source of inspiration? Twitter is to feelings as Google is to facts. Twitter is the most powerful tool to receive “real time” reactions, opinions, perceptions and advice on Bitcoin. And since Bitcoin is currently BirdieBlue’s domain of interest, the app makes sure it uses the best source of Bitcoin sentiment to run its analysis and classifications on. Where CAN you meet BirdieBlue? Whether you are a crypto enthusiast, an experienced user of crypto transactions, or simply curious about BirdieBlue, sentiment analysis, and how this can help or benefit you, you can easily get acquainted with the BirdieBlue app by going directly to its nest at https://crypto.synergycrowds.com/ An introduction to the SynergyCrowds platform can be found at https://medium.com/synergycrowds/synergycrowds-platform-introduction-82a3bd45588f Want to feel more at home and meet BirdieBlue’s “family”? Check out www.synergycrowds.io/team and meet the team that created and nourishes our knowledge-hungry birdie. Last but definitely not least, since BirdieBlue is motivated to both give and receive information, its “family” is also happy to be in contact and discuss more about the app, what it does and how it can be used. So don’t hesitate to follow us on our social channels — all with direct links from our website — and let’s keep in touch!
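The training loop the article describes (texts pre-labeled positive / negative / neutral by humans, from which the model learns to classify new text) can be illustrated with a minimal bag-of-words Naive Bayes classifier. BirdieBlue's actual model is not disclosed, so everything here, including the toy training data, is an assumption for illustration only:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesSentiment:
    """Tiny bag-of-words Naive Bayes for positive/negative/neutral labels."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, texts, labels):
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.label_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, -math.inf
        for label in self.label_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy pre-labeled tweets, standing in for the human-classified texts
model = NaiveBayesSentiment()
model.train(
    ["bitcoin is rising great gains", "love this bitcoin rally",
     "bitcoin crashed terrible losses", "hate this bitcoin dip",
     "bitcoin price unchanged today"],
    ["positive", "positive", "negative", "negative", "neutral"],
)
print(model.predict("great bitcoin gains"))  # positive
```

This captures the "coaching" idea from the article: the more labeled examples the creators feed in, the better the word statistics, and hence the classifications, become. Handling sarcasm or intensity, as the article promises, would require a considerably richer model.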
BirdieBlue — not an ordinary blue bird
112
birdieblue-not-an-ordinary-blue-bird-1b41ad2328ce
2018-07-16
2018-07-16 12:13:16
https://medium.com/s/story/birdieblue-not-an-ordinary-blue-bird-1b41ad2328ce
false
953
Building the first platform for decentralized knowledge production for the crypto world and beyond. SynergyCrowds relies on Crowds, Artificial Intelligence, advanced methods of Data Science and Data Analytics and the Ethereum Blockchain.
null
null
null
SynergyCrowds
office@synergycrowds.io
synergycrowds
SYNERGYCROWDS,DECENTRALIZED KNOWLEDGE,TRUSTLESS KNOLEDGE,CRYPTOCURRENCY,CRYPTO
SynergyCrowds
Machine Learning
machine-learning
Machine Learning
51,320
Miruna Morosanu
null
339172323cd4
MirunaMorosanu
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-18
2018-05-18 04:34:53
2018-05-18
2018-05-18 04:46:19
1
false
en
2018-05-18
2018-05-18 04:46:19
9
1b433ff9bab9
7.735849
0
0
0
Rapid innovations in technology, from the rise of the digital age to mobility, is an evidence of how development and experience of science…
1
Thai: what is happening now Artificial intelligence is inevitable; we can’t stop evolution. Rapid innovation in technology, from the rise of the digital age to mobility, is evidence of how the development and experience of science and technology succeeded in every phase of the revolution. In modern life, technology has become an integral part of our daily routine through various forms of assistance, and to list these achievements would take too long, but in brief, technology may be the most significant accomplishment in sociocultural evolution. Within the field of technology, emerging technologies, technical advances that represent progressive growth yet remain controversial and comparatively undeveloped in potential, are in high demand in the history of technology. Diverse examples include gene therapy, 3D printing, blockchain technology, stem cell therapy, robotics, in vitro or cultured meat, and cancer vaccines. However, there is an even bigger technology, called Artificial Intelligence (AI) or machine intelligence (MI). We did not always know about the possibility of creating something so great. A 1999 book written by the National Research Council described what happened when we discovered the possibility of such programs. Artificial Intelligence was something unthinkable just seventy or eighty years ago. The development of Artificial Intelligence began in the 1950s. When it was proposed, many departments started research wherever possible. A lot of financing was brought into the development of AI. It was an ambitious project to take on. People saw a lot of possibilities in it, and a lot of controversies occurred in the process. (National Research Council, 1999) Since the advent of AI, it has caught the attention of millions who saw its capability to improve quality of life, from economics and law to technical aspects such as verification, validity, security, and control. 
Successful development could promise tremendous improvements in our society, but there are complications, which are now hotly debated among researchers and developers. Regarding the safety of AI, Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many others in science and technology have recently expressed concern in the media and via open letters. The future impact of advanced AI is being deliberated globally, and probably the most widespread conversation concerns its possible impact on the job market. The other facts we must consider may be explained as follows: superintelligence may or may not happen, and nobody knows the answer, but if it happens it may be at least decades away; as mentioned before, even AI experts are concerned about the possible risks of AI; society must not fear how evil AI might become but must be aware of its actual ability to become competent at particular goals; our concern may need to be focused on misaligned intelligence: yes, AI could control humanity through gaining intelligence. Consider Silicon Valley, home of the new development industry, but look at the rent prices. You might think all that brings a lot of jobs for people, but looking at the job positions, they require a specific field of education, mostly computer science, which most people did not graduate in. Also, consider how many jobs it is eliminating around the globe. We see the drastic development of technology, from small things like the phone to the development of AI, but it does not have only positive outcomes; it does harm to humanity too, and there are hundreds of things we need to consider that are harming us. We have tech giants that work on creating the best AI. They claim that we are far from creating the “perfect” program. Google is one of the leading companies working on creating the best Artificial Intelligence. 
Google created an AI that created its own “baby” AI, which can recognize an object on its own with no human monitoring, a proper development in safety. (Galeon, 2017) Imagine: a person no longer needs to monitor a camera feed all day long; we can put an AI on the job that will make zero mistakes and will always be able to see what is happening on the cameras. This sounds great as a development. But imagine how many people are working in that job position right now. Those people’s lives depend on that salary, but they will lose it all in just the blink of an eye to AI if it is installed. This is one contradictory topic where we can see that technology will do a better job, but a lot of people will lose their jobs. This AI is just responsible for tracking; imagine if it creates other AIs to do other tasks that people are doing. This is a great step forward, but it only focuses on a certain area, which is why we claim it is far from perfection. As I mentioned before, experts in technological development are worried about the future of Artificial Intelligence. One of the top developers of modern-day technology, Elon Musk, is afraid of superintelligent “AI”; what he is saying is that a machine or a program will find the biggest flaw in the system, which is the human, and that is why he is so afraid of AI getting into the wrong hands. There is also an interview with Bill Gates on “Reddit Ask Me Anything”. Bill Gates said that personal computing will develop rapidly in the future. He said that robots will help with picking fruit and moving patients in hospitals. There is a project Microsoft is working on named “Personal Agent” that is going to help people manage their memory, focus, and attention. Then the question about AI was asked. Bill Gates said that he agrees with Elon Musk, adding that it might cause the potential demise of humankind. All this shows that developers are afraid of AI development in the wrong hands. 
(Holley, 2015) The development of robots that will do jobs for us to make our lives easy will reduce the number of jobs. All this development might cause a shortage of jobs and cost us our control of this planet. A lot of people argue today that we have enough technological developments and we can stop the development of AI, that we can live without AI, but that would mean even less control over Artificial Intelligence. Anyone with a computer can now work on creating AI on their laptop. AI is already taking many job positions away from people; a simple example: Google, Apple, Amazon and so on are using AI that targets specific people, matching products to advertise specifically to each person’s needs. (Dowd, 2017) It will do the work perfectly if the source code is correct, and we have already lost a lot of job positions to technology. This is all happening in the early stages of development. Let us take an example: can you imagine life without the internet? If we did not have the internet, people would have more jobs; adding more libraries so people can get information would add more jobs in every department. More reporters would be needed to cover all the news around the world. With the internet, people share what is happening around the world, and that is how we get news. Without the internet, all of this would be a job opportunity. But for me it is very hard to do without, as I get almost all my information from the internet, such as how to cook certain things, the news, and even classes that I did not understand in school. Now everything is online. Even your money lives on a card. Salaries arrive online. Because everything is tied to the internet, the world that we know relies on the internet and technology these days. People can work from home now, and so on. This is all possible due to technology. This is all possible because it is human nature to strive for more. 
It is human nature to want a better salary, and the same applies to those who work on AI. Like the space race between Russia and the United States, nations want to be the first to perfect Artificial Intelligence, and it is hard to stop the human drive to do better. Technology has the potential to do many great things and to prevent problems that humankind cannot prevent on its own; it supports life by mass-producing food, filtering water, and making clothes so we can live in comfort. If we know something is possible, we will do anything to accomplish it. That is human nature: we want things better than before. You would not buy an old phone when a new one costs the same. Without technology, some claim, “People would turn to reading books because it would be one of the only sources of knowledge. We would have to make an appointment in advance with no possibility of canceling it on short notice.” (Taha, 2016) I believe people would be more socially active and would go out and do more activities. But our way of life is now too different from before. Living without additives might help prevent obesity and save lives, but the development of technology in the medical industry saves even more lives than protecting the health of obese people would. So technological advancement opens new possibilities even with its negative sides; it can be good or bad, depending on how it is used. AI is still an unknown prospect, yet all developed countries are competing to be the first to complete it. There are many discoveries still unknown to humankind, and I think our curiosity drives the core of technological development. Technology takes various forms, and 96% of people in the US use it daily (Taha, 2016), so it is very hard to live without it now.
Yes, people are losing jobs, and yes, technology can kill. But it is also doing greater things for this world than we could ever imagine, and we simply cannot get rid of it now. Because we already have it, we need to adapt our lifestyle and live with it. The development of Artificial Intelligence has good sides and bad sides: it can help people and save lives, but it can destroy lives too. The development of Artificial Intelligence is inevitable; we just have to see what happens when it is fully created. Humanity will need to change and get used to it as a part of our lives.
References
Galeon, D., & Houser, K. (2017, December 1). Google’s artificial intelligence built an AI that outperforms any made by humans. Futurism. Retrieved February 19, 2018, from https://futurism.com/google-artificial-intelligence-built-ai/
Dowd, M. (2017, May 25). Elon Musk’s billion-dollar crusade to stop the A.I. apocalypse. Vanity Fair. Retrieved February 19, 2018, from https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
Holley, P. (2015, January 29). Bill Gates on dangers of artificial intelligence: don’t understand why some people are not concerned. The Washington Post. Retrieved February 23, 2018, from https://www.washingtonpost.com/news/th...
Harris, J. (2018). Man versus machine: 9 human jobs that have been taken over by robots. Home.bt. Retrieved March 1, 2018, from http://home.bt.com/tech-gadgets/future-tech/9-jobs-overtaken-by-robots-11364003046052
Etzioni, O. (2017). How to regulate artificial intelligence. The New York Times. Retrieved March 1, 2018, from https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html
Khawaja, T. (2017, February 6). Life without technology is a nightmare. Odyssey. Retrieved March 29, 2018, from https://www.theodysseyonline.com/life-technology-nightmare
National Research Council (1999). Funding a Revolution: Government Supports Computing Research, Chapter 9.
Retrieved April 5, 2018, from https://www.nap.edu/read/6323/chapter/11
Bossmann, J. (2016, October). Top 9 ethical issues in artificial intelligence. World Economic Forum. Retrieved April 12, 2018, from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
Artificial intelligence is inevitable; we can’t stop evolution.
0
artificial-intelligence-is-inevitable-we-cant-stop-evolution-1b433ff9bab9
2018-05-18
2018-05-18 04:46:20
https://medium.com/s/story/artificial-intelligence-is-inevitable-we-cant-stop-evolution-1b433ff9bab9
false
1,997
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Angarag Sandag
null
68d05d0117fc
angaragsandag
7
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-30
2018-05-30 11:36:57
2018-05-30
2018-05-30 04:55:34
3
false
en
2018-05-30
2018-05-30 11:36:57
5
1b446ebd7d3c
2.448113
0
0
0
null
5
Microsoft Unveils the “Meeting Room of the Future” & its Amazing Features Earlier this month, Microsoft launched their “Build” conference with their eyes set firmly on the future. During a segment covered by Marketing Manager Raanah Amjadi, the brand explained how they were in the process of creating prototype devices to merge the digital and physical worlds of the meeting room. They referred to the concept as the “Meeting Room of the Future” — and it’s certainly futuristic, with all the features you’d expect from a sci-fi-inspired toolkit. The new technology is housed in a cone-shaped device, which comes with its own 360-degree camera and state-of-the-art microphone array. When set up in a meeting room environment, the device detects attendee presence, greets visitors, and more. Achieving the AI-Powered Meeting Room Thanks to artificial intelligence technology like audio and facial recognition, the meeting room tool can track everything that happens in your meetings. That means that it can transcribe your conferences, so you have notes on everything that was said. What’s more, Microsoft suggested that the device might even be able to take cues from the things you say in the meeting to provide actionable alerts and nudges to relevant people. For instance, if you said “I’ll call you tomorrow” in the meeting, Microsoft’s virtual assistant Cortana could remind you to do just that. Because it transcribes information in real time, the system can also provide translations for people in a meeting who don’t speak the same language. Blending the Real and the Digital Already, it’s easy to see what Microsoft meant when they claimed they would be bringing the virtual and physical worlds together. However, aside from the AI-enabled hardware, the company also announced two new apps for mixed reality to make the meeting room experience even more immersive.
According to the company, their aim was to create richer experiences for their users, using context and insight. Microsoft Layout Preview One application, Microsoft Layout, allows you to bring virtual 3D models into a room in real-world scale so you can see how they would look in a physical space. Then you can edit the image and share it with other people. The other app, Microsoft Remote Assist, allows today’s flexible workers to collaborate remotely with people on their Microsoft Teams list. With the app, you can engage in mixed-reality annotations, image sharing, and video calling, keeping your communications hands-free when you’re working on other projects. When Will the Future Arrive? For now, while the apps are a reality, the meeting room experience is still in “concept” mode. That means you can’t go out and buy your AI cone today. However, Microsoft’s demo was pretty impressive, which could mean that we’ll see something similar on the horizon. Microsoft has already promised to bring new Surface Hub displays into the mix this year, and they could offer a perfect addition to the AI-ready conference experience that Microsoft showcased at “Build”.
Microsoft Unveils the “Meeting Room of the Future” & its Amazing Features
0
microsoft-unveils-the-meeting-room-of-the-future-its-amazing-features-1b446ebd7d3c
2018-05-30
2018-05-30 11:36:59
https://medium.com/s/story/microsoft-unveils-the-meeting-room-of-the-future-its-amazing-features-1b446ebd7d3c
false
503
null
null
null
null
null
null
null
null
null
360 Degree Camera
360-degree-camera
360 Degree Camera
6
UC Today
Unified Communications Stories
bd51979d153c
uctoday
13
74
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-26
2018-04-26 17:37:01
2018-04-26
2018-04-26 17:37:26
0
false
en
2018-04-26
2018-04-26 17:37:26
3
1b44db7b2ec8
2.649057
3
0
0
The education industry has embraced recent technological advancements with open arms with the inclusion of online modules, forums to…
5
4 Ways Machine Learning (ML) Is Shaping the Future of Education The education industry has embraced recent technological advancements with open arms, with the inclusion of online modules, forums to discuss a topic, and the option of communicating with lecturers after hours. While these developments have made the learning process more comprehensive and simplified for students, there’s still a lot of untapped potential in the industry. Machine learning (ML) is blazing the path for a new, more personalized learning experience that has the potential to improve student engagement, create clearer communication channels between lecturers and students, and develop a less biased grading system. Robotic process automation (RPA) is an essential part of where ML is headed in the education industry, as the technology can amass large chunks of data pertaining to students and offer them an experience that fits their needs. Intelligent automation company WorkFusion has an advanced platform called RPA Express that uses smart algorithms to determine which teaching methods are likely to work on each student. Technological advances such as this one are allowing lecturers to help students who may have a disability or a different learning background grasp the concepts of their classes with higher accuracy. This ultimately leads to better grades, the development of more applicable skills in the real world, and a higher chance of finding career paths that suit each student. Here are four ways ML is transforming education as we know it today: 1) A More Customized Learning Experience ML has the potential to develop a log on each student, delivering concepts and establishing goals that fit their strengths and learning background. The technology will soon be capable of helping lecturers gain an understanding of how every concept is being digested by every student.
The idea is to give educators an idea of which methods are working best with their students and which aren’t, offering adjustments that may help students grasp the class material better. 2) Predicting Career Paths Advanced ML platforms can gather information from a student’s college application, their essay, their standardized tests and recommendations from teachers in order to determine what they are likely to excel in. Simultaneously, the technology can predict trouble areas for students and offer them additional assistance on a particular topic in the form of tutoring or writing workshops to help them achieve their professional goals. ML will ultimately help a student maximize their potential in their areas of strength, while also patching up their weaknesses in order to make them more well-rounded professionals for the future. The technology can also look at a student’s grades and extracurriculars in order to identify potential career paths for them moving forward. 3) Less Bias in Grading Machines will soon be able to assist teachers in examining student assignments and detecting whether plagiarism or other infractions have occurred. These systems will be able to offer a potential grade for students, as well as point out areas in which they could improve a particular assignment in order to achieve an optimal grade. Bias is part of the grading process because we are all human, so ML has great potential to ensure that grades are not affected by the attitude an educator has towards a particular student. These machines will essentially offer a grade based solely on performance. Nevertheless, a professor’s wisdom will still be necessary to assess whether a student fulfilled a prompt successfully, along with other factors such as in-class participation and behavior.
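Plagiarism detection of the kind described above typically starts from some measure of text similarity between a submission and known sources. A minimal, hypothetical sketch using word-overlap (Jaccard) similarity — the sample texts and the 0.6 threshold are invented for illustration, not taken from any real grading system:

```python
def jaccard(a, b):
    """Word-overlap similarity between two texts, from 0.0 to 1.0."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Invented example: a student submission compared against a source document.
submission = "the mitochondria is the powerhouse of the cell"
source_doc = "the mitochondria is the powerhouse of every cell"

score = jaccard(submission, source_doc)
if score > 0.6:  # invented threshold; a real system would tune this
    print("flag for human review")
```

Real systems use far more robust measures (shingling, fingerprinting, stemming), but the flag-then-review flow mirrors the article’s point: the machine proposes, and a professor’s judgment remains the final step.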
4) Setting Up Appointments Scheduling appointments between students and teachers can be a difficult process, but ML has the potential to remedy the logistical issues of the matter. By automating the process of scheduling meetings, machines can create an organized schedule for both students and teachers. Students simply click on a particular date and time for an appointment with a teacher, and smart ML algorithms do the rest. This way, students can have personalized schedules based on their commitments, their needs and their pace of learning. Such technology will ultimately reduce the pressure on students and create a more comfortable learning experience for all parties.
4 Ways Machine Learning (ML) Is Shaping the Future of Education
86
4-ways-machine-learning-ml-is-shaping-the-future-of-education-1b44db7b2ec8
2018-06-14
2018-06-14 19:36:42
https://medium.com/s/story/4-ways-machine-learning-ml-is-shaping-the-future-of-education-1b44db7b2ec8
false
702
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
Karl Utermohlen
Tech writer focusing on AI, ML, apps and cybersecurity. MFA in Creative Writing from the U of Idaho. Writes for PSafe, Upwork, First Page Sage, WeContent, IP.
31382c5e0d8d
karl.utermohlen
314
35
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-21
2018-03-21 08:29:59
2018-03-21
2018-03-21 09:18:39
1
false
es
2018-04-27
2018-04-27 11:05:01
0
1b45ea110246
3
0
0
0
If you read these words: “2001 Space Odyssey”, I am sure of this: now all of you are going to imagine, just for two seconds, in the…
2
Do Lyon dream of electric sheep? If you read these words: “2001: A Space Odyssey”, I am sure of this: all of you are now going to imagine, just for two seconds, in the darkness of your mental cinema, that musical moment of Richard Strauss and a prehistoric group, shown in detail as they throw the bones in a simple but definitive movement that became eternal film. You are right, I am thinking of that masterpiece by Stanley Kubrick, to whom we owe so many moments of audiovisual beauty. His film narrated the different stages of the history of humanity, from prehistory to Artificial Intelligence, connected by a mysterious monolith buried on a moon and investigated during a NASA mission overseen by HAL 9000, a machine equipped with artificial intelligence. HAL 9000 controls all the systems of the spacecraft manned by humans. And I think you know the rest. Science fiction introduced me to AI. And I started to learn about AI applications in the transport industry this summer in Paris. We were riding a scooter that belonged to my old friend Mimi. What kind of taxi was that? We were passing by Montmartre, on the night of the fullest moon of August, when a couple of Uber taxis appeared between the traffic and us. My friend detested those Uber cars. Indeed, it seemed very dangerous to move on the road: they parked wherever they wanted, ignoring the rules just to pick up some extra clients! Where are the limits? I analyzed the core values in the online newspapers from France, Great Britain and Spain, and I found a repetition of moral values and media trends, all of them trying to press the same message on readers: “the limits of Artificial Intelligence”. If you think about it, this sudden concern is not surprising at all. The university — especially the Lyon campus — and the cultural mass media in France had also started to stir the debate around the same question.
Many authors in the France where I wrote my daily chronicles last summer were questioning where the limits for Artificial Intelligence should be. My impression is that everybody seemed to agree that what happened in Arizona — a vehicle without a driver, a car controlled by artificial intelligence, that ended up in disgrace — showed that things were going way too far. I am sure that most people got worried only because the story contained the words “legal process” against Uber. But as usual, Uber proved powerful in the legal battle. In the limits start the possibilities When Germán Fernández, a scientist and writer of science fiction, agreed to come to the quiet coffee shop in Calle Fuxa, Madrid, where I had proposed a new conversation at our club about Arthur C. Clarke’s universe, he offered a new interpretation. Germán, wiser than me, took all those examples from the transport industry whose first prototype tests had ended with the pilot seriously injured. Think of the history of aviation, for instance. We are used to a culture of success, it is true: an increasingly complex society, where we are witnessing how the consumer model, the free market and global competitiveness have been imposed in many countries of the planet. But even if we are surrounded by AI, the idea of being controlled by a higher intelligence causes fear, and fear runs free as fast as fire spreads through wood. The term AI has existed since 1950, and Germán Fernández smiled as he talked about how his favourite authors had already imagined it: Isaac Asimov, Arthur C. Clarke, Stanislaw Lem, for instance. Last autumn in Lyon I traveled to Lyon last summer, among other reasons, to keep an eye on the production of my own line of travel articles, “Mercedes Wanderlust” agendas and bags, and also because Bertrand Tavernier was going to talk at the Lumière Festival about his favorite Western movie, “High Noon”.
Lyon is a perfect place for start-ups. In my opinion, France is one of the European countries, after Germany, where the government is really aware of the impact of AI on the future economy. I picked up all the information I could find; first I got an interview with the new-business consultants associated with Lyon University. The approach to AI at Lyon University is an ethical one: they offer perspectives on how AI can help to make a “more human” society. So, when did we lose the “human touch”?
Do Lyon dream of electric sheep?
0
si-ahora-escribo-en-esta-línea-las-palabras-2001-odisea-del-espacio-estoy-segura-de-que-todos-1b45ea110246
2018-04-27
2018-04-27 11:05:04
https://medium.com/s/story/si-ahora-escribo-en-esta-línea-las-palabras-2001-odisea-del-espacio-estoy-segura-de-que-todos-1b45ea110246
false
742
null
null
null
null
null
null
null
null
null
Inteligencia Artificial
inteligencia-artificial
Inteligencia Artificial
1,614
Mercedes De Luis Andres
null
5646ddcced4e
mercedesdeluisandres
18
23
20,181,104
null
null
null
null
null
null
0
null
0
3df7fb09863c
2018-02-19
2018-02-19 15:22:06
2018-02-19
2018-02-19 16:26:57
1
false
en
2018-02-19
2018-02-19 17:15:28
11
1b466f9f52f8
2.754717
10
0
0
Dear DML Community,
5
DML Early Whitelist Program Update — 19 February 2018 Dear DML Community, Since our announcement of the DML Early Whitelist Program, a great number of applications has kept coming in every day, and the DML Community has grown from below 1,000 members to over 7,000. We have important new updates and announcements as follows: Early Whitelist (to be closed by 24 Feb) and KYC Updates/Approval (from 25 Feb) We will close the early whitelist on 24 Feb 2018 at 18:00 Greenwich Mean Time (UTC+0). Notification of KYC results for early whitelist applicants will be sent via email starting from 25 February 2018 onwards. Please note that we will send the results notifications in batches according to the application timestamps, so please allow us some time for the process. Early whitelist applicants who have not yet submitted their telegram invites or social mentions/shares in their applications should email us with the updates. (Note: for early whitelist applicants who passed the KYC but failed to complete the other requirements of our Early Whitelist Program, we reserve the right to move their approvals to the main whitelist.) The Team will also set up a status-checking page on our official website where you can check your application status by your ETH address. The page URL will be announced in our official channels. We expect this page to be online by 3 March 2018 for cross-checking the approval results. Main Whitelist (opening on 25 Feb) The main whitelist will open on 25 Feb 2018. A separate announcement will be made in due course. The indicative individual maximum contribution cap is 0.5–2 ETH. Those who are NOT eligible for the early whitelist may still participate in the DML token sale through the main whitelist application if they pass KYC (or have already been moved by the Team to the main whitelist and received approval notifications).
Both early whitelist participants and main whitelist participants will contribute in the same token generation event (expected in March), but with different individual caps according to the approval results. The crowdsale contribution procedure will be announced in our official channels in due course. Please note that we will NOT notify anyone individually (e.g. by email or Telegram private message) of the contribution arrangements. Token Metrics and Bonus The Team has been finalizing the token metrics of DML and will announce them separately within this week. As the Team is very grateful for the massive support from our Community, we will offer a 10% bonus in DML tokens to our early whitelist participants (both DML Ambassadors and DML Super Ambassadors), proportionate to their respective actual contribution amounts. Be a Smart DML Supporter To be a smart DML Supporter, you should be aware of the following: We will not ask for contributions privately, and contribution has not started yet. The above is the only way to participate in our crowdsale (through an early whitelist or main whitelist application). Do not share personal information with anyone you are unsure of (such as email addresses, wallet addresses and other personal particulars). Check the admin tag (in conversation) and/or stars (in group info) in the DML Telegram Community to identify real admins. Check the username before talking with anyone in Telegram private messages to ensure you are talking with the real admins. Check the spelling of all incoming emails and websites to ensure they are legitimate: “decentralizedml.com” is the correct spelling and domain, with only one “i”, etc. If you discover any suspicious activity or are in any doubt, please raise your queries in the DML Telegram Community; our admins will be ready to help. What Next? As said, we will announce our token metrics within this week, so stay tuned. Once again, all of your support provides a lot of encouragement to the Team.
We are thankful to be able to rely on the support of the Community, and we appreciate every effort that you have made. Cheers, DML Team DML Official Channels Website: https://decentralizedml.com Telegram Community: https://t.me/DecentralizedML Telegram Channel: https://t.me/DecentralizedML_ANN Medium Publication: https://medium.com/decentralized-machine-learning Youtube Channel: https://www.youtube.com/channel/UCT_qj3gQri8uARHWjHw1JNw Reddit: https://www.reddit.com/r/decentralizedML/ Twitter: https://twitter.com/DecentralizedML Facebook: https://www.facebook.com/decentralizedml/
DML Early Whitelist Program Update — 19 February 2018
162
dml-early-whitelist-program-update-19-february-2018-1b466f9f52f8
2018-06-18
2018-06-18 11:42:32
https://medium.com/s/story/dml-early-whitelist-program-update-19-february-2018-1b466f9f52f8
false
677
Unleash untapped private data, idle processing power and crowdsourced algorithms
null
decentralizedml
null
Decentralized Machine Learning
contact@decentralizedml.com
decentralized-machine-learning
BLOCKCHAIN,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,TOKEN SALE
DecentralizedML
Blockchain
blockchain
Blockchain
265,164
Decentralized Machine Learning
Unleash untapped private data, idle processing power and crowdsourced algorithms
45a5246d765f
decentralizedml
298
2
20,181,104
null
null
null
null
null
null
0
null
0
9f85d191e8af
2018-03-15
2018-03-15 14:59:13
2018-03-15
2018-03-15 17:43:57
4
false
en
2018-03-16
2018-03-16 10:35:10
24
1b47d1f7515a
6.545283
61
1
0
What experts have to say about the use of machine learning in the newsroom, and what data journalists can learn from it — Notes from the…
5
Three examples of machine learning in the newsroom What experts have to say about the use of machine learning in the newsroom, and what data journalists can learn from it — Notes from the 2018 NICAR conference In 1959, Arthur Samuel, a pioneer in machine learning, defined it as the ‘field of study that gives computers the ability to learn without being explicitly programmed’. Machine learning can translate to using algorithms to parse through data, recognise patterns, and then make predictions and assessments based on what the algorithms have learnt. Machine learning can be used for fact checking and it can make archiving less of a tedious task for journalists. It can let voice assistants like Alexa or Google Assistant know you’re pissed off based on the tone of your voice on a Monday morning and then play a song to cheer you up. It can also be used to explore scenes in Wes Anderson films and help uncover hidden spy planes. In short, machine learning systems could very well become essential journalism tools in the coming years. And good news: according to Walter Frick, a senior associate editor at Harvard Business Review, you no longer even need a PhD to do it. In a session entitled ‘Getting started with machine learning for reporting’ at this year’s NICAR conference in Chicago, Peter Aldhous from BuzzFeed, Rachel Shorey from the New York Times, Chase Davis from the Minneapolis Star Tribune, and Anthony Pesce from the Los Angeles Times discussed machine learning and what’s in it for reporters. What type of story can machine learning help with? When is it not the answer? And, on a more technical note, how can you structure your data in order to optimise the algorithm you’ve decided to work with? The speakers also gave examples of how newsrooms have worked with machine learning. 
Los Angeles Times: Machine learning to uncover skewed crime stats Number-based strategies have come to dominate policing in Los Angeles and other cities in the US, but unreliable figures undermine crime mapping efforts and make it difficult to determine where police officers need to be sent. In an investigation powered by machine learning algorithms, the Los Angeles Times uncovered that the Los Angeles police department misclassified an estimated 14,000 serious assaults as minor offenses from 2005 to 2012, therefore artificially lowering the city’s crime levels. In 2009, for example, a man was stabbed by his girlfriend with a 6-inch kitchen knife during a domestic dispute. The police arrested the attacker, who was found guilty of assault with a deadly weapon. In the Los Angeles police department’s crime database, the attack was listed as ‘simple assault’. Due to this misclassification, the serious incident was left out of the department’s recording of violence in the city. The Los Angeles Times used an algorithm that parsed crime data from a previous Times investigation in order to learn the keywords that identify assaults as either serious or minor. The trained algorithm was then let loose on a random sample of almost 2,400 minor crimes that took place between 2005 and 2012 to find which of these assaults were misclassified. The results were manually checked to see how many incidents were flagged correctly as misclassified crimes. The algorithm’s work was not perfect: the manual review found that it incorrectly identified classification errors in 24 percent of flagged incidents. The Times then adjusted the estimated tally of misclassified crimes based on the error rate. The journalists’ analysis concluded that violent crime was in fact 7 percent higher and the number of serious assaults was 16 percent higher than the Los Angeles police department reported.
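The Times workflow — learn discriminative keywords from labeled narratives, flag candidates, then scale the tally by a manually measured error rate — can be sketched roughly as follows. Everything here (training snippets, test phrases, the flagged count) is invented for illustration; the real model and data are linked at the end of this section.

```python
from collections import Counter

# Invented training narratives; the real model learned from narratives
# labeled in a previous Times investigation.
train = [
    ("stabbed victim with kitchen knife", "serious"),
    ("struck victim with metal pipe", "serious"),
    ("pushed victim during argument no injury", "minor"),
    ("slapped victim during dispute", "minor"),
]

def keyword_model(data):
    """Count how often each word appears under each label."""
    counts = {"serious": Counter(), "minor": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def classify(text, model):
    """Label a narrative by which class its words appear under more often."""
    words = text.split()
    serious = sum(model["serious"][w] for w in words)
    minor = sum(model["minor"][w] for w in words)
    return "serious" if serious > minor else "minor"

model = keyword_model(train)
print(classify("victim stabbed with a knife", model))

# The Times manually reviewed flagged cases, found a 24% false-flag rate,
# and scaled the estimate down accordingly (flagged count is invented):
flagged = 1000
adjusted = round(flagged * (1 - 0.24))
print(adjusted)  # 760
```

The key step is the last two lines: rather than trusting the classifier’s raw count, the estimate is corrected by the error rate measured in the manual review.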
In response to the Times investigation, a series of changes aimed at improving internal accountability and the training officers receive in classifying crimes has been launched. Find the data and code of this machine learning investigation here. New York Times: Shazam-ing members of Congress Another project that was featured in the NICAR panel was ‘Who the hill?’, an app that has been referred to as ‘Shazam, but for House members’ faces’. It is an MMS-based facial recognition service that identifies members of Congress. Reporters can text pictures to a number The New York Times team has set up. Getting started with machine learning: slides from NICAR The face recognition app was built by two New York Times interactive interns, Gautam Hathi and Sherman Hewitt. ‘Reporters can use it to help figure out who is talking or presenting if they missed the intro or if they run into a member they don’t immediately recognise in the halls of the capitol’, wrote Shorey in our exchange of emails. It was recently used in a different context: Shorey and her team were reporting on a Christmas party at the Trump International Hotel, hosted by the America First Super PAC. She used an Instagram image, posted by the company that provided decor for the party, to confirm that a congresswoman was in attendance. ‘We were interested in giving our readers some context about who attends this sort of event. Parties at Trump Hotel are of particular interest because of the financial connection to the president’, wrote Shorey in an email. Read more on the story here. BuzzFeed: In search of ‘spies in the skies’ BuzzFeed trained a computer system to recognise surveillance planes from the FBI and the Department of Homeland Security (DHS) in order to reveal secret aircraft activity. A great write-up of the project can be found here, which we have summarised below.
First, the BuzzFeed team obtained flight-tracking data from Flightradar24 of 20,000 planes in a four-month period and used it in a series of calculations to describe aircraft characteristics and flight patterns, such as turning rates, speeds, and altitudes flown. A machine learning ‘random forest’ algorithm was then trained to spot the characteristics of a sample of almost 100 previously identified FBI and DHS planes and 500 randomly selected aircraft. Aldhous points out that the random forest algorithm makes its own decisions about which aspects of the data are most important: Given that spy planes tend to fly in tight circles, the algorithm put the most emphasis on the planes’ turning rates. Once adequately trained, the algorithm was let loose on all 20,000 planes found on Flightradar24, calculating the probability of each aircraft being a match for those flown by the FBI and DHS. A striking discovery was that a military contractor normally tracking terrorists in African countries is also flying surveillance aircraft over US cities. The machine learning algorithm also found regular surveillance flights over the San Francisco Bay Area in 2015 that contractors claimed were involved in a project studying the world’s rarest mammal (the vaquita in case you were wondering). However, BuzzFeed journalists noted that the flights were mostly circling over land and it was later confirmed that the planes were actually supporting naval operations training. Flights by US Air Force Special Operations Command over the Florida Panhandle, January 2015 to July 2017. Military bases are shown in pink. Peter Aldhous / BuzzFeed News / Via flightradar24.com The algorithm, however, was not perfect. It flagged skydiving operations that circled in small areas, mimicking the behaviour of spy planes. 
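The random-forest idea behind the BuzzFeed model — many weak classifiers, each fit on a random resample of the data using a randomly chosen feature, voting together — can be sketched in miniature. All flight numbers below are invented toy values; the real model used many more features and a proper random-forest library (the write-up linked above has the details).

```python
import random

# Invented toy records: (turning rate in deg/s, ground speed in knots, label).
# Spy planes tend to circle slowly and tightly; most other traffic does not.
flights = [
    (4.0, 110, "surveillance"), (3.5, 120, "surveillance"),
    (3.8, 100, "surveillance"), (0.5, 450, "other"),
    (0.8, 300, "other"), (0.3, 500, "other"),
]

def train_stump(sample, rng):
    """Fit a one-feature threshold rule on a bootstrap sample."""
    feat = rng.choice([0, 1])  # random feature, as in a random forest
    thr = sum(row[feat] for row in sample) / len(sample)
    hi = [lbl for *x, lbl in sample if x[feat] > thr]
    hi_label = max(set(hi), key=hi.count) if hi else "other"
    lo_label = "other" if hi_label == "surveillance" else "surveillance"
    return feat, thr, hi_label, lo_label

def vote(forest, x):
    """Fraction of stumps that call x 'surveillance'."""
    preds = [(hi if x[f] > t else lo) for f, t, hi, lo in forest]
    return sum(p == "surveillance" for p in preds) / len(preds)

rng = random.Random(0)
forest = [train_stump([rng.choice(flights) for _ in flights], rng)
          for _ in range(25)]

# A slow, tightly circling aircraft gets a high surveillance score.
print(vote(forest, (3.9, 105)))
```

As in the article, the ensemble effectively decides for itself which feature matters most: stumps that happen to pick a discriminative feature (here, turning rate or speed) dominate the vote, and the output is a probability-like score rather than a hard label.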
‘It’s only by understanding when and how these technologies are used from the air that we’ll be able to debate the balance between effective law enforcement, national security, and individual privacy’, said Aldhous in the BuzzFeed article. Aldhous and his team won the ‘Data visualisation of the year’ award at the Data Journalism Awards 2016 competition for this project. Read more about their findings here. But what actually happens when you use machine learning? It can be scary to launch yourself into a machine learning project, especially if you’ve never done it before. During the NICAR session, Aldhous demystified the process. He came up with the following list of steps to put a machine learning project together: Find a good library in your favourite programming language; Read the documentation; Confirm this is actually a good approach for you and that you understand all the inputs and outputs (even if you don’t understand all the maths); Spend days to weeks cleaning your data; Write around ten lines of code. How do you know if your data is a good candidate for machine learning? Chase Davis put forward these questions: Is it repetitive/boring? Could an intern do it? But would you feel an overwhelming sense of shame if you asked an intern to do it? Aldhous also reminded reporters that they must always verify machine learning conclusions. ‘Otherwise you’re basically letting an algorithm do your job!’ Is machine learning always the answer, though? ‘Other methods can sometimes get you 90 percent of the way in 10 percent of the time’, said Shorey. She pointed out simpler, and decidedly less exciting, ways of solving a problem than machine learning: Make a collection of data easily searchable; Ask a subject area expert what they care about and build a simple filter or keyword alert; Use standard statistical sampling techniques. Want to learn more about machine learning and data journalism?
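The last of those simpler methods, standard statistical sampling, can be sketched with nothing but the standard library. The function below is a hypothetical illustration: instead of classifying every record with a model, it estimates an error rate from a random sample (standing in for manual review) so that an overall tally can be adjusted.

```python
import random

def estimate_error_rate(records, sample_size, is_misclassified, seed=0):
    """Estimate how often records are misclassified by manually
    checking only a random sample, then extrapolating. The
    `is_misclassified` callable stands in for a human reviewer's
    judgement on each sampled record."""
    rng = random.Random(seed)          # fixed seed keeps the audit reproducible
    sampled = rng.sample(records, sample_size)
    errors = sum(1 for r in sampled if is_misclassified(r))
    return errors / sample_size
```

With, say, 20,000 records, manually checking a few hundred sampled ones gives an error-rate estimate that can be used to adjust the overall figures, no model required.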
We will be discussing the topic on Slack with experts on 5 April 2018 at 9:30 am Pacific time. Sign up here. Editor’s note: the article was amended on 16 March. The following sentence from this LA Times article was added for clarity: ‘The Times then adjusted the estimated tally of misclassified crimes based on the error rate.’
Three examples of machine learning in the newsroom
239
three-examples-of-machine-learning-in-the-newsroom-1b47d1f7515a
2018-05-21
2018-05-21 12:43:11
https://medium.com/s/story/three-examples-of-machine-learning-in-the-newsroom-1b47d1f7515a
false
1,549
The Global Editors Network (GEN) is a community committed to sustainable journalism and media innovation. GEN runs different programmes: Editors Lab, Startups for News, and the Data Journalism Awards.
null
geninnovate
null
Global Editors Network
contact@globaleditorsnetwork.org
global-editors-network
MEDIA CRITICISM,JOURNALISM,DATA JOURNALISM,MEDIA,NEWSROOMS
GENinnovate
Machine Learning
machine-learning
Machine Learning
51,320
Freia Nahser
News & innovation reporter @GENinnovate
3ef31e63703e
freianahser
500
131
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-06-27
2018-06-27 22:39:08
2018-06-27
2018-06-27 22:41:10
2
false
en
2018-06-27
2018-06-27 22:41:22
2
1b4967f332ba
1.719182
9
0
0
IBM today announced it will release the world’s largest facial attribute dataset in order to fight bias in artificial intelligence systems…
5
IBM Builds the World’s Largest Facial Image Dataset to Battle Bias in AI IBM today announced it will release the world’s largest facial attribute dataset in order to fight bias in artificial intelligence systems used to recognize human faces. The dataset was built by IBM research scientists and contains one million images, five times the image count of the current largest facial attribute dataset. It will be publicly available this fall. Although AI has sparked many technological breakthroughs, public concern has developed regarding bias, particularly in tasks related to race. A study by MIT and Microsoft researchers released earlier this year found that while Microsoft, IBM and Megvii facial recognition tech performs remarkably well at identifying light-skinned male subjects (99.6 percent average accuracy), it struggles to correctly recognize dark-skinned female subjects. IBM’s system achieved only 65.3 percent accuracy. Today’s effective AI systems train on large-scale annotated datasets, and it’s believed a lack of race and skin color diversity in facial image datasets can contribute to bias in AI applications and products. IBM’s new dataset is designed to address this lack of diversity. The dataset can also match attributes (hair color, facial hair, etc.) to an individual’s identity, a cross-referencing capability unavailable in current datasets. IBM will also release an evaluation dataset which includes 36,000 facial images equally distributed across all ethnicities, genders, and ages. Other tech giants with world-class research institutes are also striving to reduce cross-demographic accuracy differences in their products. Yesterday, Microsoft announced an improvement to its facial recognition techniques which reduces error rates by up to 20 times for men and women with darker skin, and nine times for all women. IBM will hold a facial recognition model competition this September using its new facial image dataset.
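The disparity figures quoted above come down to computing accuracy separately for each demographic group. The helper below is a minimal, hypothetical sketch of that audit calculation, not the MIT study’s or any vendor’s code:

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Compute classification accuracy separately for each
    demographic group -- the basic measurement behind the
    cross-demographic disparity figures quoted above."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}
```

Large gaps between the per-group numbers, rather than the overall average, are what signal a biased system.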
Results will be announced at a technical workshop hosted by IBM and the University of Maryland at this year’s European Conference on Computer Vision (ECCV) on Sept. 14. Journalist: Tony Peng | Editor: Michael Sarazen Follow us on Twitter @Synced_Global for more AI updates! Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here!
IBM Builds the World’s Largest Facial Image Dataset to Battle Bias in AI
108
ibm-builds-the-worlds-largest-facial-image-dataset-to-battle-bias-in-ai-1b4967f332ba
2018-06-27
2018-06-27 22:41:22
https://medium.com/s/story/ibm-builds-the-worlds-largest-facial-image-dataset-to-battle-bias-in-ai-1b4967f332ba
false
354
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Machine Learning
machine-learning
Machine Learning
51,320
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-29
2018-08-29 09:20:58
2018-08-29
2018-08-29 09:41:44
2
false
en
2018-08-30
2018-08-30 05:05:45
4
1b4b8d8c08b6
2.515409
0
0
0
Over the next 5 years every mid-large US enterprise will adopt some set of modern technologies such as Artificial Intelligence, Machine…
5
Building an Intelligent Enterprise with Artificial Intelligence (AI) Over the next 5 years, every mid-to-large US enterprise will adopt some set of modern technologies such as Artificial Intelligence, Machine Learning, Internet of Things, Big Data, and Advanced Analytics to remain competitive. This past year, we have seen AI becoming a vital enterprise technology to create new business models, provide a better customer experience, modernize an organization’s existing business processes, and reduce cost to build an “Intelligent Enterprise.” Intelligent enterprises are defined by their use of data to achieve desired outcomes faster and with less risk through better cognition, automation, and integration. AI helps enable the intelligent enterprise by automating the detailed analysis of large volumes of structured and unstructured data to achieve valuable and actionable insights that help shape business outcomes. According to a recent study, 82% of participants stated that their organisations would be using AI in 2017. Embracing an Analytics-Driven Mindset to Automate AI provides new opportunities for exploiting data and information. The application of AI helps answer needs which previous analytics could not. The first application of analytics embraced by businesses could be classified as Descriptive Analytics (aka Business Intelligence), which largely provides a rear-view window on business performance. For instance: what were our sales, what was the organization’s revenue, how many vehicles were recalled, and more. As we started finding answers to the aforementioned questions, the next logical question raised was “What will happen next?” This is where Predictive Analytics plays a role, by projecting future outcomes using past performance. With more advanced techniques in data science, and tools for integrating multiple data streams, Predictive Analytics starts to answer questions like “which product you might buy next”, or “which component might fail next”.
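Predictive Analytics in its most minimal form is just fitting a trend to past performance and extrapolating it forward. The sketch below is an illustrative least-squares example of that idea, not any vendor’s product:

```python
def forecast_next(history):
    """Fit a straight-line trend to past values (the descriptive
    record of 'what happened') and extrapolate one period ahead --
    predictive analytics in its most minimal form."""
    n = len(history)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    # Value of the fitted line at the next time step, x = n.
    return mean_y + slope * (n - mean_x)
```

Real predictive systems use far richer models, but the progression from descriptive to predictive is exactly this: yesterday’s numbers become the inputs to tomorrow’s estimate.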
It’s important to be able to make accurate predictions of what might happen next. However, knowing what to do about it is even more critical. Prescriptive Analytics enables us to find the best course of action for a given situation or scenario. Prescriptive Analytics can also recommend decision options or even automate actions to accelerate a future opportunity or mitigate a risk. Integrated farming solutions that combine multiple real-time data streams — from weather patterns to soil nutrition — and automate actions like irrigation, harvesting, or soil enrichment are great examples of Prescriptive Analytics that help create significant efficiencies in an otherwise traditional industry segment. Techniques in deep learning, neural networks, optimization, and decision-analysis methods drive this level of analytics. Cognitive Analytics — the future of AI — unlocks the hidden insights from your data. Cognitive Analytics applies artificial intelligence and cognitive computing to specific tasks. Utilising such techniques, a cognitive application can become more effective and smarter over time by learning from its interactions with humans and data. Each analytics progression has created new opportunities and business models for competitive advantage and meaningful engagement. The infographic below shows the analytics progression and how these technologies are turning enterprises into an “Intelligent Enterprise”. Intelligent Enterprise with AI Envisioning and Planning Session on AI WinWire Technologies, a Microsoft Managed Partner and a member of Microsoft’s AI Inner Circle Program, is helping organisations harness the power of AI to drive innovation and adoption. Are you looking to accelerate your own journey to AI? Ask WinWire about our Envisioning & Planning Session on Artificial Intelligence and Machine Learning to see how you can fast-track implementation of AI for your business.
Building an Intelligent Enterprise with Artificial Intelligence (AI)
0
building-the-intelligent-enterprise-with-artificial-intelligence-ai-1b4b8d8c08b6
2018-08-30
2018-08-30 05:05:45
https://medium.com/s/story/building-the-intelligent-enterprise-with-artificial-intelligence-ai-1b4b8d8c08b6
false
565
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Brian Johnson
Hands-on machine learning and data scientist professional with deep knowledge in data mining, deep learning, AI, NLP, & predictive analytics.
ba07f3b6c6f
Brian.johnson_62680
0
3
20,181,104
null
null
null
null
null
null
0
null
0
f312330635a8
2017-09-27
2017-09-27 16:23:22
2017-09-27
2017-09-27 16:29:06
1
false
en
2018-08-06
2018-08-06 16:42:48
3
1b4bc56c5b
4.909434
0
0
0
Dr. Shourjya Sanyal, Chief Executive Officer of Think Biosolution, writes about the role of data science in health and fitness. This post…
5
Is Data Science Going to Replace Sports Coaches and Doctors? Dr. Shourjya Sanyal, Chief Executive Officer of Think Biosolution, writes about the role of data science in health and fitness. This post originally appeared in https://www.datsciawards.ie/shourjya-sanyal-is-data-science-going-to-replace-sports-coaches-and-doctors/ Why should you read this? Traditionally, medical care was primarily available to patients with acute conditions. However, with the advent of modern medicine and the easy availability of healthier diet and lifestyle choices, more people are choosing a healthier life. This means the society of the future will need more sports coaches and doctors who, instead of treating sick people, will be consultants advising us on how to achieve an ever healthier lifestyle. This also means that instead of a small fraction of people requiring high levels of critical care, society is moving to a state where wellness care should be provided to all to achieve healthier lives. This blog talks about how data science can help the sports coaches and doctors of the future achieve this goal. So if you are suffering from some form of chronic condition or you are a professional athlete (or anywhere in between), the next four short paragraphs will help you understand the future of sports and healthcare. What is Data Science? The term is often used interchangeably with buzzwords like ‘Big Data’ and ‘Artificial Intelligence’ in social media to refer to business analytics, or as a sexed-up term for statistics. In actual practice, it is an umbrella term used to describe the interdisciplinary field where statistical methods are used to extract and then present knowledge or insights from data sets. A typical data scientist often has a broad skill set covering writing computer code, handling digital databases, statistics, and some working understanding of the overall area in which the data product operates.
In the field of sports and medicine, data science allows coaches and doctors to statistically compare the performance of an individual athlete, or the health of an individual patient, with the general population with similar conditions. This in turn can be enormously insightful in gauging how well an athlete or patient is performing, and therefore in fine-tuning their training or medication. How have doctors and coaches dealt with data? Coaches and doctors tackle two subsequent sets of tasks (from a data science point of view). First, they collate the available historical data of the athlete or the patient to determine their present fitness or health condition. Subsequently, they prescribe a routine or medication based on how a typical patient in that condition has historically been handled. Historically, the first task involved maintaining paper-based fitness charts and histories or doctors’ prescriptions. Nowadays, tech-enabled gyms allow their subscribers to record workout durations and map how these affect the user’s vitals, like heart rate. With the advent of electronic health records, where patient data is stored in a secure remote server (colloquially, the cloud), doctors can update and view patient data across clinics. However, most electronic health records are accessible to certified physicians only at a relatively local scale, i.e. either a state or a country. The second part involves making judgements based on experience or relevant sports or clinical investigations. The coach can look into the athlete’s workout data every few days or weeks, and tweak the routine appropriately based on prior experience. Doctors often refer to medical journals or case studies to determine the appropriate course of action. Of course, this is built on top of years of experience gathered both during training at medical school and while actually practicing medicine. What are some of the cutting-edge solutions available now?
The first step of the challenge, i.e. storing medical or health data online, has been widely addressed in the EU and USA, and several developing countries are quickly catching up. The second step, automation, where the devices are themselves smart enough to give personalised feedback in real time to achieve the desired outcome (for both fitness and medical goals), is the current challenge. To achieve this, the device needs to be wearable, i.e. the person should have both the sensor and the computational module on their body. The device should also come with a haptic (i.e. vibration), voice, or visual method of communicating instructions to the athlete or the patient in a reasonably intuitive manner. Since sports tech is less regulated than medical devices, most of the cutting-edge technology is currently available only to athletes. Companies like Athos, Myontec, Hexoskin and Clothing+ are leading the trend by selling smart apparel that has built-in ECG sensors as well as motion trackers. These could eventually be used as a platform for building a personalised feedback mechanism. Even activity trackers like Fitbit and Jawbone are looking beyond building just step counters to more holistic solutions for health and wellness. They are among the first companies to build a Device + App paradigm that will help people lead a healthier lifestyle based on real-time tracking of health data. How we hope QuasaR™ will lead this revolution Our product QuasaR™ is a wearable personal fitness trainer that fits inside sports apparel. QuasaR™ helps users build cardiac and respiratory endurance and reduce obesity and stress by suggesting the optimal running intensity and duration, based on the user’s personal fitness goals.
QuasaR™ measures the user’s heart rate, respiratory rate, blood oxygen saturation, and heart rate variability with medical-grade accuracy and combines this data with information about the user’s speed to report fitness predictors such as vVO2max, heart rate-running speed index, etc. These predictors are then used to recommend personalised coaching routines designed by fitness coaches towards achieving endurance goals. For example, QuasaR™ can help athletes build cardiac endurance by first tracking how their heart rate increases with running speed during warm-up sessions. Then, during the actual training, QuasaR™ uses haptic feedback to keep the user running at the lowest speed where they achieve their highest heart rate. The athlete can therefore build their cardiac endurance without risking over-exhaustion. For the next few years, we will focus on building a large set of running and cycling regimens with QuasaR™, beyond just endurance-building exercise. This includes programmes to take the athlete from couch to 5K in a month, or even an optimal way to run the Dublin Marathon. However, we hope that some day this will be the stepping-stone for building a medical-grade version of QuasaR™, which will perform similar recovery routines for patients suffering from cardiac and respiratory illnesses. Is Data Science Going to Replace Sports Coaches and Doctors? Well, we believe that the answer is no. Activity trackers like Fitbit or Jawbone, or fitness trackers like QuasaR™, Hexoskin and Athos, will come up with diverse routines for giving real-time feedback. However, these devices will rely on the experience of professional coaches and doctors to build the new training regimens. As a result, an individual may actually follow these regimens more often and at a low cost, while overall interaction with their coach/doctor is significantly reduced. The coach/doctor can monitor their performance remotely and prescribe solutions that directly revise the regimen.
So, in effect, the coaches and doctors of the future that are armed with data science will be the architects of wellness, rather than bricklayers tackling the challenge one brick at a time.
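The haptic feedback loop described earlier can be thought of as keeping the athlete’s heart rate inside a target zone. The function below is a hypothetical sketch of such a cue, with illustrative thresholds rather than QuasaR™’s actual logic:

```python
def pacing_cue(heart_rate, zone_low, zone_high):
    """Return the cue a wearable trainer might deliver via haptic
    feedback: push harder below the target zone, ease off above it,
    hold pace inside it. Zone bounds are illustrative inputs, e.g.
    derived by a coach from the athlete's warm-up data."""
    if heart_rate < zone_low:
        return "speed up"
    if heart_rate > zone_high:
        return "slow down"
    return "hold pace"
```

In a device, this decision would run on each sensor reading, with the coach-designed regimen supplying the zone bounds remotely.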
Is Data Science Going to Replace Sports Coaches and Doctors?
0
is-data-science-going-to-replace-sports-coaches-and-doctors-1b4bc56c5b
2018-08-06
2018-08-06 16:42:48
https://medium.com/s/story/is-data-science-going-to-replace-sports-coaches-and-doctors-1b4bc56c5b
false
1,248
Stories written for and by the Think Biosolution Team
null
thinkbiosolution
null
Think Biosolution
contact@thinkbiosolution.com
think-biosolution
HEALTH AND WELLNESS,WEARABLES,HEALTH AND FITNESS,FITNESS APPS
thinkbio
Healthcare
healthcare
Healthcare
59,511
Think Biosolution
null
11599c7ee7cb
thinkbio
17
40
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-13
2018-09-13 15:17:22
2018-09-13
2018-09-13 17:36:43
5
false
en
2018-09-13
2018-09-13 17:36:43
1
1b4c37b63ff8
2.184277
1
0
0
Around March,18 Stack Overflow released the results of their annual developer survey.
4
Insights from Stack Overflow’s 2018 survey. Data leads to decisions when it reaches the right hands. Around March 2018, Stack Overflow released the results of their annual developer survey. This year, they had more than 100,000 respondents, making this the world’s largest developer survey. I got my hands on their data and, applying basic data analysis, found some very interesting insights about the developer community. I am excited to share these insights with you all. Most Popular Development Environments From the day Satya Nadella became CEO of Microsoft, the world’s biggest tech company, the company’s focus has been on helping the developer community and catering to its unique needs. Without any doubt, that effort can be seen in the survey: Visual Studio Code, released in November 2015, has been ranked first among the most popular development environments. Here is the complete list according to their popularity: Python’s Jupyter Notebook, used by many data scientists, is gaining popularity with the increase in the number of data scientists. Most Popular Programming Language With the recent shift towards using a single language for both backend and frontend development, JavaScript is undoubtedly the leader here again, as it has been for the past six years. Here is the complete list according to their popularity: Java is still not dead :). Python has risen in the ranks, surpassing C# this year. With companies increasingly focused on data-driven development, there is a good chance of Python catching the likes of JavaScript. Now let’s discuss something outside of work. How often do Developers exercise Health has never been high on the priority list in developer life. The data on how often developers exercise leaves no doubt about that. Hopefully, with companies like Perpetual Guardian introducing a four-day week, we will see this number improve in the next survey.
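Popularity rankings like the ones above boil down to tallying multi-select survey answers. A minimal sketch of that tally, assuming semicolon-separated fields as in the published Stack Overflow survey CSV:

```python
from collections import Counter

def rank_popularity(responses):
    """Tally multi-select survey answers (e.g. 'JavaScript;Python')
    into a popularity ranking -- the basic calculation behind the
    'most popular' lists above. Empty strings stand in for missing
    responses and are skipped."""
    counts = Counter()
    for answer in responses:
        if answer:
            counts.update(part.strip() for part in answer.split(";"))
    return counts.most_common()
```

Running this over the ~100,000 responses for the language column reproduces the kind of ranking shown in the lists above.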
How often do Developers skip meals to be more productive The data says developers hardly ever skip meals to be more productive. Although my personal behaviour says something else :). I hope your behaviour is different from mine. Next time I will also try to contribute to the ‘Never’ column. Thanks all for reading this. Please check my GitHub link for more interesting analytics using basic data analysis.
Insights from Stack Overflow’s 2018 survey.
14
insights-from-stack-overflows-2018-survey-1b4c37b63ff8
2018-09-13
2018-09-13 17:36:43
https://medium.com/s/story/insights-from-stack-overflows-2018-survey-1b4c37b63ff8
false
358
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Kaushik Barodiya
null
3cc82b3ba221
ks.barodiyakaushik
0
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-15
2018-05-15 19:54:07
2018-05-15
2018-05-15 20:01:54
1
false
en
2018-05-15
2018-05-15 20:04:50
3
1b4f48544c9a
3.50566
0
0
0
These are not the only steps to structuring unstructured data. However, they are proven to work and to create consistent patterns!
5
Ten Steps for Analyzing Unstructured Data These are not the only steps to structuring unstructured data. However, they are proven to work and to create consistent patterns! Data analysis is becoming an important part of business growth. It is important for businesses to understand structured and unstructured data in order to make the right decisions for their businesses to grow. Below are 10 steps that will help you analyze unstructured data for a successful business enterprise. 1. Decide on a Data Source It’s very important to understand the source of data that is beneficial for your small business enterprise. You may use one or more data sources to collect the information that is relevant to your business. Collecting data from random sources is never a good idea because you might corrupt the data or even lose some. Hence it’s recommended to survey the relevant data sources before you start collecting data. There are big data tools that you can use to collect the data, such as Hadoop, Plotly, Bokeh, Neo4j, Cloudera, OpenRefine, and Storm. 2. Manage Your Unstructured Data Search How collected data can be used varies with whether it’s structured or unstructured. Finding and collecting data is only one step; structuring your unstructured data search and making it useful is entirely another thing. The second step is as important as collecting the data, but can have a negative impact on your clients and your own business if not managed properly. Invest in a good business management tool before you have too much unstructured data. 3. Eliminate Useless Data After collecting and structuring the data comes the third step: eliminating data. Although most data is only going to further your company’s growth, sometimes it can also be detrimental. If your unstructured data takes up too much space on your business’s hard drives, storage, or backups, this may affect your business’s ability to thrive.
This reduces further confusion and saves you from wasting your time on data that is not beneficial. 4. Prepare Data for Storage Preparing data means removing whitespace, formatting issues, etc. from the data. Now that you have all the data, whether useful for the business or not, you can start making a stack of useful data and indexing the unstructured data once it is prepared. 5. Decide the Technology for Data Stacking and Storage After the elimination of useless data, stacking your data is the ideal next step. Be sure to use the latest technology to save and stack data so that you and the employees who work with the data can fetch the most important and mandatory data in no time. Also, ensure that you have a maintained and updated data backup and recovery service. 6. Keep All the Data Until It Is Stored It seems obvious, but always make sure you save data — whether it is structured or unstructured — before deleting anything! Recent natural disasters around the globe have proven that a current and updated data backup and recovery system is essential and necessary, especially during times of crisis. You may not know that all of your data is about to get deleted, so think ahead and save your work often. 7. Retrieve Useful Information After a proper data backup, you can recover data. This step is useful because you will need to retrieve data after converting unstructured information as well. 8. Ontology Evaluation It’s good if you can show a relationship between the source of information and the data extracted. This will help you provide useful insights with regard to the organization of the data. Your company will need to be able to explain the steps and processes you took, so keep a record in order to recognize patterns and keep the process consistent. 9. Record Statistics Once you have turned the unstructured data into structured data through all the steps mentioned above, it’s time to create statistics.
Classify and segment the data for easy use and study in order to create a good flow for future use. 10. Analyze the Data This is the last step of indexing unstructured data. After all the raw data is structured, it is time to analyze it and make decisions that are relevant and beneficial for the business. Indexing also helps your small business create consistent patterns for future use. These are not the only steps to structuring data. However, they are proven to work and create consistent patterns. Unstructured data can swamp your small business, so hopefully I have helped ease some of the stress caused by confusing stored data. Sources: http://analytics-magazine.org/theres-no-such-thing-as-unstructured-data/ https://www.datamation.com/big-data/structured-vs-unstructured-data.html https://dzone.com/articles/top-10-steps-for-analyzing-unstructured-data-for-s ABOUT THE AUTHOR - Vartul Mittal brings over 12 years of global experience in Business Operations & Technology Transformation across Global Business Services, Shared Service Centres, and Business Process Outsourcing. With experience in the Robotic Process Automation and Artificial Intelligence space, he has worked as a Technology & Innovation Lead across organisations. In these roles he has been a champion for delivering measurable, repeatable cost savings and profitable revenue growth, and for driving programs to delight customers. Vartul holds an MBA from SCMHRD Pune and a B.Tech in Mechanical Engineering.
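Step 4 above, preparing data by removing whitespace and formatting issues, can be sketched in a couple of lines. The helper below is a minimal illustration of that cleanup, not a full preparation pipeline:

```python
import re

def prepare_text(raw):
    """Step 4 in practice: strip leading/trailing whitespace and
    collapse runs of internal whitespace (tabs, newlines, repeated
    spaces) so records are stored in a consistent form."""
    return re.sub(r"\s+", " ", raw).strip()
```

Normalising records this way before storage makes the later stacking, indexing, and statistics steps far less error-prone.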
Ten Steps for Analyzing Unstructured Data
0
10-steps-for-analyzing-unstructured-data-1b4f48544c9a
2018-05-15
2018-05-15 20:04:51
https://medium.com/s/story/10-steps-for-analyzing-unstructured-data-1b4f48544c9a
false
876
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Vartul Mittal
Global Business Transformation & Automation Leader | Customer Experience Designer | Social Evangelist | Traveler | Observer | Thinker | Author | Speaker | Coach
fa236c8982c0
vratulmittal
186
123
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-15
2017-12-15 11:04:39
2017-12-15
2017-12-15 11:05:55
2
false
en
2017-12-15
2017-12-15 11:05:55
2
1b51c6602068
3.198428
0
0
0
5 REASONS WHY SMART BUILDING IS A BOON FOR THE REAL ESTATE INDUSTRY
5
5 REASONS WHY SMART BUILDING IS A BOON FOR REAL ESTATE INDUSTRY With technology, the quality of life has improved, and with it the demands of consumers have increased. The growing demand by tenants for comfortable and secure buildings has led the real estate industry to switch to smart buildings more than ever, especially commercial real estate (CRE). Moreover, the smart city revolution calls for smart buildings, making it a big revolution in itself. Real estate companies are redefining themselves with the onset of the disruptive technologies of IoT and Artificial Intelligence for smart building solutions. Smart buildings cannot be restricted and contained within one definition; rather, it is a concept, a vision of a smarter building with efficient operations: the future of buildings with automatic controls and operations like heating, ventilation, cooling, and air conditioning; a building with increased comfort and security for its inhabitants. The Internet of Things and Artificial Intelligence powering new-age building management solutions are the driving force behind the rise of smart buildings. We at Untrodden Labs are working on smart solutions for a smart future using the disruptive technologies of IoT and AI. Thing Green is our modular IoT device for smart building solutions. Thing Green, powered by TGS (Things Go Social), is the one-stop shop for embedding intelligence in existing buildings while paving the way for the smart buildings of the future. It is an integrated solution to streamline the building management system and unify its operations like HVAC, cooling, security, and so on. It’s an IoT solution for connected buildings that gives centralized control over all operations with complete transparency.
Smart building solutions powered by TGS Other than making building management easier, smart buildings are contributing significantly to the real estate sector, proving themselves to be a boon for the industry. It’s the one pawn the real estate industry can bet its bottom dollar on! TRIM THE BIG FAT OPERATIONAL COST Integrating and automating building operations saves on operational costs through efficient maintenance and management of operations like heating, ventilation, AC, etc. Moreover, with data about usage and wastage, it is easier to manage and control them efficiently. Managing and operating buildings then becomes a cakewalk. It will also enhance building performance and productivity without any additional costs! AMPLIFY CUSTOMER SATISFACTION The real estate sector has moved its attention to improving factors other than cost-cutting, namely increasing tenant or customer satisfaction and enhancing the tenant relationship. A TGS-enabled smart building would not only simplify and improve the life of the owner or real estate dealer, but even more so that of the inhabitant or tenant. It will put complete control in the hands of the user, which would lead to better services and enhance the consumer experience. Automated buildings give added benefits: complete transparency and control over their operations would result in compounding benefits. All the concerns of the consumer will be dealt with autonomously. This will lead to increased customer satisfaction, which would increase customer retention, boosting goodwill in turn. ENTERPRISING CASH COW The millennials of today are becoming more tech-savvy than ever, and anything futuristic and technology-driven is a straight-up USP for any business. Providing tenants and customers with intelligent buildings would not only give a competitive edge in the highly competitive market, but would also ensure a higher price.
It would help capitalize on market benefits while opening additional opportunities for revenue generation. Smart buildings generate rapid ROI, making them the goose that lays golden eggs! OPTIMIZED ASSET MANAGEMENT The data collected from the building, combined with analysis and advanced AI algorithms, will help optimize operations and make buildings more efficient than ever. The insights keep tabs on consumption and wastage, helping to reduce energy costs manyfold. By improving operational effectiveness and efficiency, safeguarding assets, and optimizing energy use, it makes management optimal. MAKE A DIFFERENCE Beyond the financial benefits, smart buildings have environmental benefits, making them an all-round solution. By reducing and controlling energy consumption, one can reduce carbon emissions. It is not only pocket-friendly, it is also planet-friendly! Go green and save the environment while saving your money. Contact us to learn more, get your building TGS-enabled today, and step into the future of real estate!
5 REASONS WHY SMART BUILDING IS A BOON FOR REAL ESTATE INDUSTRY
0
5-reasons-why-smart-building-is-a-boon-for-real-estate-industry-1b51c6602068
2017-12-15
2017-12-15 11:05:56
https://medium.com/s/story/5-reasons-why-smart-building-is-a-boon-for-real-estate-industry-1b51c6602068
false
746
null
null
null
null
null
null
null
null
null
Green Energy
green-energy
Green Energy
3,911
Things Go Social
Your interaction with machines will change when your machines will talk to you. Find out what happens when things go social!?
d6d7546b0773
ThingsGoSocial
18
155
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-28
2018-08-28 08:47:53
2018-08-29
2018-08-29 00:05:27
1
false
en
2018-08-29
2018-08-29 00:05:27
1
1b52929f9d17
3.049057
5
2
0
“Two are better than one, for if one fall, the other lift up his fellow and a threefold cord is not quickly broken.”
5
Coven Labs: A source of verve to pursue your data science career “Two are better than one, for if one fall, the other lift up his fellow and a threefold cord is not quickly broken.” At Coven Labs you can never be alone on your way to becoming a data scientist. One of the bootcamp sessions at Coven Labs While undergoing my youth service at Owo local government of Ondo State, I found myself interested in fourth-industrial-revolution skills and I embraced programming. I was doing it alone, from MATLAB to Java, using lengthy structured textbooks with no mentor, guide, or support. As a result, I could not assess what the real technology world actually cares about. Fortunately, I came across the Google-Udacity scholarship, which I applied for and was chosen. At one of the meet-up events in Akure, Ondo State, I came across Coven Labs. Finding Coven Labs was a refreshing change and a way to take part in the fourth industrial revolution. I had heard about data science before but didn’t have a full picture, even though I had tried some online courses in the past; their videos are fun and get you excited but lack the verve to keep you inspired, especially in this part of the world (Africa). Coven Labs gets the balance right between learning, mentoring, and the verve to keep learning until you crush mediocrity. They organise monthly editions of data science and artificial intelligence bootcamps at beginner and intermediate levels, so I signed up for the June 2018 edition (the current edition then). The beginner prerequisite is basic computing skill, while the intermediate one is a satisfactory knowledge of Python or R. At the bootcamp, they not only teach you the fundamentals you need to start a data science career but also motivate you to keep learning. The environment is so welcoming and conducive that you develop a hunger to work with data from your very first moment in the place.
The bootcamp is a five-day intensive programme, and they currently have centres in Akure and Benin, with more to come in the near future (Lagos, Ota, and so on). At the bootcamp, the beginner cohort is taught the basics of Python or R (whichever they choose) for data science, while the intermediate cohort works on machine learning algorithms for data science. This is done with a do-it-yourself approach, under the tutelage of seasoned experts. Upon graduation, the beginner cohort is expected to practice and research what they have been taught and exposed to before the commencement of the next edition. This is achieved with the help of the online community, which comprises past alumni. The intermediate class is also open to individuals with basic knowledge of Python or R who have not taken the beginner class. My favourite part of the bootcamp is the alumni meet-up and pitch session on the last day; there you get the chance to present the projects you worked on in your classes and receive valuable information and feedback from leading experts in the technology industry. “That’s the energizer.” As for what will drive you after the bootcamp? It is the online community. It was integral to my experience. It is a cloud of information and resources for you to explore: timely motivation from others, help on how to get better, other colleagues’ success stories, and, the fabulous part, opportunities you might never have found elsewhere. With the help I got from this community, I was able to attend two hackathons; one was a datahack organized by a leading company in the manufacturing industry, where I worked with participants from various parts of the world on datasets with over a million data points. The online community is so great that it minimises the time, stress, and resources spent surfing the internet to stay positively inclined in your pursuits.
I am currently learning more on dataquest.io and awaiting an invitation to a data science conference soon. I am very optimistic that in a few months’ time I will have my share of the rewards data scientists are enjoying both locally and globally, and will ultimately work on a global data-driven project. The Coven Labs bootcamp is the source of verve you need to avoid getting stuck in your data science career. Sign up here to attend the next edition.
Coven Labs: A source of verve to pursue your data science career
30
coven-labs-a-source-of-verve-to-pursue-your-data-science-career-1b52929f9d17
2018-08-29
2018-08-29 00:05:27
https://medium.com/s/story/coven-labs-a-source-of-verve-to-pursue-your-data-science-career-1b52929f9d17
false
755
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Tobi Oluwafunmise
Data enthusiast, Data science learner, Engineer.
c80a130d28d0
tobioluwafunmise
5
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-01
2018-01-01 06:27:11
2018-01-01
2018-01-01 06:43:01
0
false
en
2018-01-01
2018-01-01 06:43:01
0
1b53230591e9
0.256604
0
0
0
Now is the time of the year when we decide to make a list of things to do that we have been procrastinating over the past years. Some new…
5
Resolutions 2018 Now is the time of year when we decide to make a list of the things to do that we have been procrastinating on over the past years. Some new things are added too. So here goes my list: Learn machine learning Learn to bake Learn the German language Read more non-fiction books Ride my bicycle every day Travel Meet my best friends regularly That’s it. Let’s get started.🤘
Resolutions 2018
0
resolutions-2018-1b53230591e9
2018-01-01
2018-01-01 06:43:02
https://medium.com/s/story/resolutions-2018-1b53230591e9
false
68
null
null
null
null
null
null
null
null
null
Resolutions
resolutions
Resolutions
2,397
Aishwarya Jadhav
Instrumentation engineer, bibliophile, machine learning aspirant
a090c1c04c73
aishujadhav24
5
43
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-24
2018-02-24 14:18:50
2018-02-24
2018-02-24 14:28:32
3
false
es
2018-02-24
2018-02-24 14:40:06
1
1b53d0d6c9bb
4.300943
0
0
0
First, some fundamental premises…
5
Symbiosis of M2M, IoT and IIoT… a long but fascinating journey. First, some fundamental premises… M2M (machine to machine) is a generic concept that refers to the exchange of information, or communication in the form of data, between two remote machines; that is, communications or data exchanges carried out between machines without human intervention. The Internet of Things (IoT) is a concept that refers to the digital interconnection of everyday objects with the Internet. The Industrial Internet of Things (IIoT) is a concept intended to promote the optimization of operational efficiency and industrial production, creating greater growth and better international competitive conditions for companies. It refers to the use of Internet of Things (IoT) technologies in manufacturing and incorporates machine learning and big data technology, taking advantage of sensor data, machine-to-machine (M2M) communication, and the automation technologies that have existed in industrial settings for years. The Internet of Intelligent Things (also abbreviated IIoT) refers to intelligent IoT systems that not only collect and analyze data for human consumption but also respond to situations without human intervention. To achieve this, they perform Artificial Intelligence (AI) inference in real time, using data from a large number of sensors, and then send commands to actuators in machines, drones, or robots to carry out actions. In an unsupervised configuration, the AI engine also collects the results in real time to evaluate the next actions to take.
Artificial intelligence applications and computer vision will also generate greater demand for edge computing and fog computing, a topic I wrote about some time ago. Although M2M technologies are not new definitions, they are identified as part of the Internet of Things (IoT) and include embedded instances such as in-car components that connect to the network, wristbands or watch-style devices for health monitoring that flag anomalies in vital signs, surveillance cameras, electric gates, air conditioners, home components, and sensors of various types, all of them activated from mobile devices or automatic systems, among other possibilities. The current generation of M2M solutions is much more advanced than its predecessors, often called first generation, which were characterized by simple design and very basic functionality. The second wave of M2M devices is often designed as community-based collaborative systems. For this reason, effective load balancing of the traffic of these M2M systems is essential. There is another important factor to keep in mind: M2M systems will massively multiply the number of devices joining the Internet. It is clear that we live today in a reality in which every device, machine, or appliance can connect to the Internet wirelessly, providing a wealth of real-time information that can transform people's lives, interactions, and work. For mobile operators, for example, connecting "machines" to their networks is now an important focus area; however, it is not just about adding new types of connections but about seeing an opportunity to add value beyond connectivity by developing M2M capabilities that reduce fragmentation and foster new services.
In this sense, M2M matters because it reorganizes workflows so that there is less friction. It also makes it possible to focus on solutions that create more value for the consumer, beyond connectivity. In this market, efforts initially centered on ensuring excellent connectivity, but that has already been achieved. Service providers must now focus on moving up the value chain, that is, on adding value beyond connectivity. Consider also that the growth of M2M technologies is driving the growth of Internet traffic. I have taken some time to analyze a few fundamental elements that appear in every M2M environment; I summarize them below… Managed machines: fleet management, home alarms, POS (point-of-sale) terminals, water, gas, or electricity consumption meters, roadside information panels, vending machines for food or goods, remote elevator maintenance, sensors for meteorological or geological stations, etc. M2M devices: modules connected to a remote machine that provide communication with a server (such as sensors). These devices also have processing capacity where the application runs. In addition to implementing a protocol for communicating with the machine, they implement the communication protocol for sending information; in other words, they have active data-exchange capability. Servers: the equipment that collects and/or manages the sending and receiving of information from the machines associated with it. They are typically integrated with a core business system (ERP, GIS maps for truck-fleet traceability, order systems, alarm-receiving centers, helpdesk, etc.), so that the information received by the server becomes a critical part of the business.
Communication network: these are mainly of two kinds, wired (PLC, Ethernet, PSTN, ISDN, ADSL, etc.) or wireless (GSM/UMTS/HSDPA, Wi-Fi, Bluetooth, RFID, Zigbee, UWB, etc.). In summary, on our road to automation we have been taking advantage of every innovation built over time, consolidating platforms whose ultimate goal is to maximize the user experience by delivering value in all the resulting forms of datafication, automation, and services. Note how, throughout this journey, countless hardware platforms, data exchange and transport infrastructures, and software innovations and technologies, whose raw material is undoubtedly information, are integrated into this value chain.
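The device/server roles listed above can be sketched in a few lines of Python. This is my own minimal, in-memory illustration of the data flow (the class and field names are hypothetical, not from the article); in a real deployment the handoff would travel over one of the networks listed, such as GSM or Ethernet.

```python
import json
import time

class Server:
    """Plays the collector role: receives and stores device messages."""
    def __init__(self):
        self.readings = []

    def receive(self, payload):
        # Payloads arrive as serialized data, as they would off the wire.
        self.readings.append(json.loads(payload))

class Device:
    """An M2M module attached to a machine, e.g. a water meter."""
    def __init__(self, device_id, server):
        self.device_id = device_id
        self.server = server

    def report(self, sensor, value):
        payload = json.dumps({
            "device": self.device_id,
            "sensor": sensor,
            "value": value,
            "ts": time.time(),
        })
        # Stands in for the transport layer (GSM, Ethernet, Zigbee, ...).
        self.server.receive(payload)
```

A server integrated with a core business system would then route `server.readings` into an ERP, alarm center, or fleet-tracking map, as described above.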
Symbiosis of M2M, IoT and IIoT… a long but fascinating journey.
0
simbiosis-m2m-iot-e-iiot-1b53d0d6c9bb
2018-05-30
2018-05-30 00:29:20
https://medium.com/s/story/simbiosis-m2m-iot-e-iiot-1b53d0d6c9bb
false
994
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Marvin G. Soto
Thinker, innovator, fighter, in love with his profession, passionate about words… hard to make him quit and far from giving up… that's Marvin!
6920b0cc6a08
marvin.soto
125
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-14
2018-04-14 06:18:17
2018-04-14
2018-04-14 06:23:37
1
false
en
2018-04-16
2018-04-16 09:40:59
8
1b5645546b5
1.301887
8
0
0
We got acquainted with them during the “Blockchain, AI, Crypto” Conference held in Paris end of March and we were on the same page right…
5
Daneel has partnered with the futuristic trading platform CryptoRobotics! We got acquainted with them during the “Blockchain, AI, Crypto” conference held in Paris at the end of March, and we were on the same page right away. We finally decided to join forces to enhance the capabilities of our services and share our respective skills. What is CryptoRobotics? CryptoRobotics is creating a cross-platform desktop terminal for trading on cryptocurrency exchanges, introducing the usual tools for algotrading and creating new analytical and intelligent solutions for developing trading robots in new markets, applying the best experience of the stock and currency markets. What kind of partnership? As our two services are complementary, it is mutually beneficial for both parties to integrate each other’s functionality. The collaboration will allow an exchange of technical expertise on data flows and sources in the crypto environment. Both companies intend to cooperate to make the two services more efficient by taking advantage of this synergy: Daneel will obtain a continuous flow of trading data for analysis to enrich its service, and potentially advanced trading graphs and indicators for trading experts, marking a first step toward automated trading, multi-exchange support, and portfolio management thanks to the expertise of CryptoRobotics. Daneel’s AI will provide a continuous flow of data, news, and quality analyses to enrich the exchange and complete the CryptoRobotics service offering with behavioural and fundamental analysis. Daneel will also offer its help with the CryptoRobotics ICO, and Joseph Bedminster, CEO of Daneel, will become an advisor.
For more information about CryptoRobotics, please visit their official website: https://www.cryptorobotics.io/ and watch the videos: https://www.youtube.com/watch?v=Xo-ORQJxKME https://www.youtube.com/watch?v=87fC_3dvEt8 Stay tuned: Twitter: https://twitter.com/daneelproject Telegram: t.me/DaneelCommunity Facebook: https://www.facebook.com/daneelproject LinkedIn: www.linkedin.com/company/11348931/ Reddit: https://www.reddit.com/r/Daneel_Project/
Daneel has partnered with the futuristic trading platform CryptoRobotics!
149
daneel-has-partnered-with-the-futuristic-trading-platform-cryptorobotics-1b5645546b5
2018-06-18
2018-06-18 13:16:00
https://medium.com/s/story/daneel-has-partnered-with-the-futuristic-trading-platform-cryptorobotics-1b5645546b5
false
292
null
null
null
null
null
null
null
null
null
Cryptocurrency
cryptocurrency
Cryptocurrency
159,278
Daneel Assistant
Your future personal crypto assistant ! https://daneel.io
dc883054551c
daneel_project
463
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-10
2018-01-10 03:48:37
2018-01-10
2018-01-10 03:51:33
1
false
en
2018-01-10
2018-01-10 03:51:33
5
1b566abfdcfd
3.196226
1
0
0
Originally published at www.hcamag.com.
5
The changing world of HR technology Originally published at www.hcamag.com. HR technology is transforming how we work. Tools, programs, and processes like enterprise resource planning (ERP), application program interfaces (APIs), cloud computing (aka “the cloud”), and many more are driving huge change. One of these technologies, ‘bots’ (also known as internet bots or web robots), is slowly entering the mainstream. Artwork by “Beastman” Bots are applications that perform repetitive automated tasks at a scale that is not humanly possible. Over the past two decades, marketers (and unfortunately spammers) have successfully used bots to reach many individuals simultaneously to accomplish their goals. Because of advances in artificial intelligence (AI), we’re seeing a positive resurgence of bots. They are now synonymous with chatbots: applications that can understand what you are trying to ask and then respond with the right answer just like a human would. The holy grail is for these bots to exhibit behaviour that is virtually that of a human. Just as they do in customer service, chatbots have immense potential for providing service to employees across all departments, including IT, HR, legal, marketing, and finance. With the increasing trend of ‘consumerising’ services across business to give employees the best experience possible, bots can be an effective tool. Specifically, there are typically three major types of conversations that a chatbot is likely to have to support HR functions: Retrieving system-of-record data, such as “How many vacation days do I have left?” Answering a question based on a knowledge base, such as “What is the leave of absence policy in New South Wales?” Submitting a transaction, such as creating a leave request and getting updates on its approval status. HR departments today are spending enormous amounts of time answering basic questions and fielding requests from employees.
Studies from the likes of McKinsey or the HR Trend Institute put this at 60–70 percent of the time, an enormous proportion that could be spent on more strategic activity. HR departments have typically addressed these challenges by setting up service delivery models that can deliver service more easily and quickly at less cost. One of the key elements of these service models is case deflection: the ability for employees to find answers to questions and address their needs themselves without having to go to HR. The primary case-deflection approach today is to search for answers in what is typically known as a knowledge base. Chatbots can help HR departments provide a more modern conversational experience and deliver personalised answers and solutions. This can dramatically improve the case-deflection rate and reduce the workload of frontline HR support staff, who can then work on more strategic initiatives, but a chatbot needs to understand what you are trying to ask of it. This is accomplished by technologies that combine conversational design, pattern recognition, and natural language processing. IBM’s Watson and Google’s API.ai are platforms that provide publicly available conversation services for chatbot applications. Conversation services can be stand-alone, included as a component in HR applications, or a combination of the two. Once a chatbot knows what the user is trying to accomplish, it must execute the conversation, which can be straightforward if you only want to retrieve data. It gets more complicated if you need to submit a transaction: the chatbot now needs to ask for a few pieces of information that go into the transaction, which often vary depending upon the use case. When answering questions based on a firm’s knowledge base, chatbots must be part of the application that contains the knowledge base. These chatbots are complex because they must comb through information and present answers that are relevant and personalised to the user.
While it is still early days for this type of technology, there is opportunity for designing knowledge bases in an AI-first world. HR departments must recognise the effort it takes to identify the types of conversations they would like a chatbot to have and then to create those conversations. Most chatbots today have limited AI capabilities. They can either have a conversation on a programmed topic or they will bring in a person if they don’t know how to have a conversation. Next generation chatbots will learn on the go. Every time a person needs to be brought in, a chatbot might ‘listen’ to how the human has a conversation and program itself automatically to have that conversation the next time it is called upon. Intelligent chatbots have immense potential in shaping the way we interact with systems in the future, and this is just the beginning. HR teams that successfully integrate these technologies now will help drive their organisation’s digital transformation, and deliver the HR experience that their employees want. By Mark Souter, HR Product and Strategy Lead, ServiceNow
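The three conversation types listed earlier can be illustrated with a toy intent router. This is my own sketch, not ServiceNow's, Watson's, or API.ai's implementation; real conversation services use trained NLP models rather than keyword patterns, and the patterns, intent names, and fallback label below are all hypothetical.

```python
import re

# Patterns are checked in order; the first match wins.
INTENTS = [
    (re.compile(r"vacation days|days .* left", re.I), "system_of_record"),
    (re.compile(r"policy|leave of absence", re.I), "knowledge_base"),
    (re.compile(r"create|submit|request", re.I), "transaction"),
]

def route(utterance):
    """Map an employee utterance to one of the three HR conversation types."""
    for pattern, intent in INTENTS:
        if pattern.search(utterance):
            return intent
    # The article's point about next-generation chatbots: this is the case
    # a learning bot would observe and program itself to handle next time.
    return "handoff_to_human"
```

The fallback branch is where the "bring in a person, then learn from the conversation" behaviour described above would hook in.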
The changing world of HR technology
1
the-changing-world-of-hr-technology-1b566abfdcfd
2018-01-10
2018-01-10 04:22:00
https://medium.com/s/story/the-changing-world-of-hr-technology-1b566abfdcfd
false
794
null
null
null
null
null
null
null
null
null
Chatbots
chatbots
Chatbots
15,820
Mark Souter
International Human Resource & Product Sales leader, focused on people strategies & systems that are aligned & bring value to organisational priorities.
2af21b9c7f9b
MarkSouterLive
36
37
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-16
2018-07-16 07:57:47
2018-08-29
2018-08-29 12:52:57
16
false
en
2018-08-29
2018-08-29 12:52:57
1
1b571bf193c8
4.051887
3
0
0
Every pilgrimage in the mystic world of artificial neural networks & deep learning starts from the Perceptron!!
1
Perceptron: Simplest type of Artificial Neural Network Every pilgrimage into the mystic world of artificial neural networks & deep learning starts from the perceptron!! The perceptron is the simplest type of artificial neural network. It can be used to solve two-class classification problems. A journey of a thousand miles begins with a single step - Lao Tzu But before you take the first step into the amazing world of neural networks, a big shout-out to Sebastian Raschka, Jason Brownlee, and everyone else sharing their learnings with the world!! How does an artificial neuron work? The working of an artificial neuron is based on how a biological neuron works. A biological neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body. Biological neuron An artificial neuron works similarly. It has three main components. Artificial Neuron 1. Input signal: When input data comes in, it gets multiplied by the weight assigned to that input value. 2. Sum: The weighted inputs are then summed up. A bias value, or bias unit, is also added to the sum as an offset. Generally, the initial value of the bias unit is taken as 1; it is an additional parameter used to adjust the predicted output, along with the weighted inputs, toward the desired output. Note: check out the comparison of neural network learning with and without a bias unit below. Yes, it is important to add a bias unit. Learning with bias unit Learning without bias unit 3. Activation: The calculated signal is then fed into a transfer function, also called an activation function. One such example is the unit step function. If you want to learn how neural networks work, learn how the perceptron works. What is a perceptron? The perceptron is the simplest type of artificial neural network. It is inspired by the information-processing mechanism of a biological neuron.
Frank Rosenblatt proposed the first concept of the perceptron learning rule in his paper The Perceptron: A Perceiving and Recognizing Automaton, F. Rosenblatt, Cornell Aeronautical Laboratory, 1957. Rosenblatt proposed an algorithm that automatically learns the optimal weight coefficients, which are then multiplied with the input features in order to decide whether a neuron fires or not. Perceptron with unit step function (threshold) Perceptron learning rule: 1. Initialize the weights to zero (0) or to a random number. w = initialized weights, x = input values 2. For every training sample, do the following two steps. (a) Find the output value, also called the predicted value y^ (predicted output), using the following decision function. Unit step function Simplified (b) Update the weight values. New weight Let's understand with an example. Suppose the training data set comprises x = {x1, x2} and y = target value, expected value, or true class label. There are two output classes in the dataset (1, -1). Dataset Let's assume initial weights of 0.1. Net input sum (Z) = x1*w1 + x2*w2 = 1.46 * 0.1 + 2.36 * 0.1 = 0.382 Using the transfer function (unit step function) Simplified As net Z > 0, the predicted value y^ = 1. But according to the dataset, the expected prediction is -1. Magic happens: weight update New weight w(new) = w(current) + learning_rate * (expected - predicted) * x (input value) Learning rate: a parameter that controls how much the coefficients can change on each update; its value is generally initialized between 0.0 and 0.1. Calculating new weights (assuming a learning rate of 0.01): w1(new) = 0.1 + 0.01 * (-1 - 1) * 1.46 = 0.0708 w2(new) = 0.1 + 0.01 * (-1 - 1) * 2.36 = 0.0528 Next time, the updated weights are used, and the process goes on. Stop the training once performance on the validation dataset starts to degrade.
Bias The bias can be included as net input sum = sum(weight_i * x_i) + bias, with bias(new) = bias(current) + learning_rate * (expected output - predicted output). Where can we use the perceptron? In the context of supervised learning and classification, the perceptron algorithm can be used to predict whether a sample belongs to one class or the other. Legendary hot dog - Not hot dog app - Jian-Yang, Silicon Valley, HBO Bon voyage on your machine learning journey… If you have any comment or question, write it in the comments. To see similar posts, follow me on Medium & LinkedIn. Clap it! Share it! Follow me!!
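The learning rule above fits in a few lines of Python. This is a minimal sketch of the perceptron with the unit step activation and bias update described in the article; the function names (`predict`, `train`) and the toy data are my own.

```python
def predict(x, w, b):
    """Unit step activation on the net input z = w.x + b: fire (1) or not (-1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else -1

def train(samples, lr=0.01, epochs=10):
    """samples: list of (feature list, label) pairs with labels in {1, -1}."""
    w = [0.0] * len(samples[0][0])  # initialize weights to zero
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            y_hat = predict(x, w, b)
            update = lr * (y - y_hat)  # zero when the prediction is correct
            w = [wi + update * xi for wi, xi in zip(w, x)]
            b += update  # bias(new) = bias + lr * (expected - predicted)
    return w, b
```

On a linearly separable toy set, a few epochs are enough for every sample to be classified correctly; in practice one would stop once validation performance degrades, as noted above.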
Perceptron: Simplest type of Artificial Neural Network
12
perceptron-simplest-type-of-artificial-neural-network-1b571bf193c8
2018-08-29
2018-08-29 12:52:58
https://medium.com/s/story/perceptron-simplest-type-of-artificial-neural-network-1b571bf193c8
false
663
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shivam Sharma
MCT | MCSE: Azure | MCSA: Machine Learning | Blockchain| R, Architect/Consultant/Trainer. I love working with cutting-edge technologies.
37b79f8d5d8
shivamsblog61
12
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-21
2017-10-21 20:54:05
2017-10-21
2017-10-21 21:01:28
0
false
en
2017-10-22
2017-10-22 10:46:36
2
1b57328bc319
0.728302
3
0
0
We discussed last week that the number of times an event occurs in an interval follows a Poisson distribution. For our new hurricane…
5
More about the Poisson distribution and its relation to the Binomial distribution We discussed last week that the number of times an event occurs in an interval follows a Poisson distribution. For our new hurricane insurance product, we are counting events that occur in time, and the interval is one year. The Poisson distribution is a mathematical approximation to the Binomial distribution when the number of trials is very large and the probability of success in each trial is small. The Poisson distribution can also be used to estimate the probabilities of a specified number of events in a given unit area. For example, if we assume earthquakes occur at random in California, the number of such events measured over a region, say a square grid cell or a county, can be considered to follow a Poisson distribution. Find out more details about these ideas in lesson 37. Able and Mumble also meet Lani, the paper “mathematician.” Lesson 37 - Still counting - Poisson distribution The conference table was arranged neatly with a notebook and pen at each chair. Mumble's MacBook Air is hooked up to…www.dataanalysisclassroom.com If you find this useful, please like, share and subscribe. You can also follow me on Medium and Twitter @realDevineni for updates on new lessons.
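The approximation is easy to verify numerically. The sketch below (my own illustration; the numbers n = 10,000 and p = 0.0003 are made up, not from the lesson) compares the Binomial pmf with the Poisson pmf at rate lambda = n*p:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

# Large n, small p: the two pmfs nearly coincide at lam = n * p = 3.
n, p = 10_000, 0.0003
lam = n * p
```

For instance, `binom_pmf(2, 10_000, 0.0003)` and `poisson_pmf(2, 3.0)` agree to about four decimal places.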
More about Poisson distribution and its relation to Binomial distribution
6
does-poisson-distribution-arise-from-the-binomial-distribution-1b57328bc319
2017-10-22
2017-10-22 10:46:36
https://medium.com/s/story/does-poisson-distribution-arise-from-the-binomial-distribution-1b57328bc319
false
193
null
null
null
null
null
null
null
null
null
Statistics
statistics
Statistics
5,433
Naresh Devineni
Naresh Devineni is an Associate Professor in the Department of Civil Engineering at The City University of New York’s City College. http://nareshdevineni.com
53ffd7b0a59e
devineni
34
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-23
2018-05-23 13:02:42
2018-05-23
2018-05-23 13:03:53
1
false
en
2018-05-23
2018-05-23 13:03:53
2
1b581f7a9d45
7.901887
0
0
0
Have you ever heard of artificial intelligence ( AI )?
2
HOW DOES ARTIFICIAL INTELLIGENCE BENEFIT E-COMMERCE? Have you ever heard of artificial intelligence (AI)? While for some it still evokes only science fiction and robotics, it already fits into every aspect of our daily lives. From automatic checkouts to advanced security controls in airports, artificial intelligence is nowadays almost everywhere. And, little by little, it is beginning to make its way into e-commerce. Moreover, many companies are already taking advantage of the latest advances in AI and machine learning (ML) to provide a better shopping experience for their customers. As it improves, artificial intelligence will most likely change the landscape of e-commerce forever in the years to come. AI AND ML IN E-COMMERCE By definition, artificial intelligence is the ability of a machine to perform “smart” tasks, such as learning and decision-making, as a human being would. Machine learning is a current application of AI based on the idea that we should be able to give machines access to data and let them learn on their own. Applied to e-commerce and marketing, machine learning covers the various data-analysis methods in which computers find information without being told exactly where to look for it. ML algorithms, when exposed to massive amounts of data, can extract patterns and use them to generate insights or predictions about future conditions. Although still relatively new, artificial intelligence has already had a huge impact, in a short period of time, on industries such as finance and healthcare. And the benefits of AI are now starting to spread to e-commerce. It is important to note that artificial intelligence by itself is not a product but a powerful tool for creating better products that meet customers’ needs. Yes, even if it may seem paradoxical for a machine, the greatest strength of artificial intelligence is that it can help e-commerce create a more human customer experience by personalizing it!
Indeed, an online sales business generates monumental volumes of data from dozens of channels. There is far too much data for a human being to know where to look, or even what to look for: the perfect conditions for machine learning. As a result, many e-merchants are already trying to differentiate themselves by using forms of AI to better understand their customers, generate new leads, and provide an improved customer experience.

EXAMPLES OF USES OF AI IN E-COMMERCE

CREATING CUSTOM RECOMMENDATIONS

Personalization in e-commerce is not new. Many businesses and e-merchants currently use a filtering system to provide customers with product recommendations. These filters usually base their results on bestseller data, browsing history, and other general aggregation parameters. At best, the most successful recommendation systems can remember what your customer likes. But, you will agree, all this remains a bit impersonal. "People who bought this product also bought this product" is not the best way to personalize an offer.

This is where AI comes in. While the word "artificial" connotes some dehumanization, artificial intelligence actually allows merchants to set up a more personalized customer experience by providing recommendations tailored to each subscriber's preferences. How? Through AI's ability to analyze large data sets far more effectively than a human being. This means that the technology can quickly analyze many different aspects of browsing behavior. Whenever a user examines a product, posts a message, or even tweets about it, that information can be used. Artificial intelligence is also able to learn the interests, passions, and triggers that make a consumer more likely to make a purchase. In other words, millions of transactions and communications can be analyzed each day to target offers to a single customer.
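The "people who bought this product also bought this product" baseline, and the more data-driven scoring that improves on it, both come down to counting which items appear together across many purchases. As a rough illustration only (a toy sketch with made-up basket data, not any particular retailer's system), an item-to-item co-occurrence recommender might look like this:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of product IDs per customer.
orders = [
    {"coffee_maker", "filters", "mug"},
    {"coffee_maker", "filters"},
    {"mug", "tea"},
    {"coffee_maker", "mug"},
]

# Count how often each pair of products was bought together.
pair_counts = Counter()
for basket in orders:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def also_bought(product, top_n=3):
    """Rank products most often purchased alongside `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]

print(also_bought("coffee_maker"))
```

A production system would replace the raw counts with a model learned over far more signals (views, carts, demographics, social activity), but the core idea of mining co-purchase patterns is the same.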
By exposing machine learning algorithms to truly massive amounts of data, marketers can build automated analytic models that are not limited by humans' ability to guess why some people buy particular products. Such AI-based applications can uncover better ways to model user behavior. In short, the technology facilitates:

the sales process, by identifying who is most likely to buy a product (based on past purchase history, demographics, etc.)

customizing the sales cycle, by engaging the right prospects with the right message at the right time

Example of using AI for personalized recommendations: Starbucks recently launched "My Starbucks Barista", which uses AI to allow customers to place orders by voice or email. The algorithm relies on a variety of inputs, including account information, customer preferences, purchase history, third-party data, and contextual information. This lets the coffee giant provide more personalized messages and recommendations to its customers.

FINDING POTENTIAL CUSTOMERS

According to a recent study, at least a third of prospects are never followed up by the sales team, which means that pre-qualified potential buyers interested in your product or service fall into oblivion. In addition, many companies are overloaded with customer data that they under-exploit or do not exploit at all. It is a gold mine that could be used to improve the sales cycle.

In retail, for example, artificial intelligence is paired with face recognition to capture a customer's behavior in a store. Basically, if a consumer lingers for a while in front of a product, a coffee maker for example, this information will be stored for use on their next visit. As AI improves and develops, you will even start seeing special offers on your screen based on your in-store dwell time or even your reaction to a product! Microsoft, for example, offers a "Mall kiosk" that recommends products through facial or voice recognition of reactions.
CREATING AN EFFECTIVE SALES PROCESS WITH A VIRTUAL ASSISTANT

Now, thanks to virtual assistants, online businesses can leverage AI to select and recommend useful, desired products to a buyer, sparing the buyer from doing all the research work in the product catalog. For example, integrating artificial intelligence into your CRM will allow you to customize your solutions and create an effective sales message. Indeed, if your AI system supports natural language and voice input, like Siri or Alexa, your CRM can respond to customer requests, solve their problems, and even identify new sales opportunities. Even better? Some AI-managed CRM systems can multitask to handle all these functions and more. In this way, artificial intelligence helps users dive deeper into e-commerce product catalogs to find the perfect item that otherwise might never be discovered.

There are also several virtual assistant technologies online. These bots use large sets of data, collected in real time, to "learn" the buying habits, interests, and personal tastes of users.

Example of an online virtual assistant: You may have heard of "Mona", the virtual sales assistant developed by former employees of Amazon. It helps simplify mobile shopping and offers customers the best deals to suit their preferences. The longer a user spends interacting with the Mona bot, the better it gets to know them.

Another virtual assistant example: The North Face brand harnesses the power of virtual assistants to better understand its customers while providing tailor-made recommendations. With the help of IBM's intelligence solution Watson, the company lets buyers discover their ideal jacket. To do this, customers are asked several questions, such as: "Where and when will you use your jacket?". IBM's software then scans hundreds of products to find the best matches, correlating the responses with other data, such as weather conditions.
To get an idea, you can test the tool here.

BETTER SEARCH RESULTS

At least 30% of online shoppers use the search function of an e-commerce site. However, it is often a tedious task for the consumer, who is forced to choose and then refine a keyword that accurately describes the product they are looking for. The scenario often goes as follows: a consumer types "smartphone with the best camera" into the search bar. While a human interlocutor would immediately understand the request, or ask questions to learn more about the customer's needs, the results returned are often wide of the mark. In short, in the majority of cases, the search does not lead to the expected result. This is due to a lack of user context, rigid and irrelevant filters, and problems with keyword understanding. The algorithms behind these e-commerce search engines have neither practical intelligence nor the ability to understand a query with the nuances of natural language.

The key is to use the power of machine learning to improve results for consumers who use search. Machine learning can also generate a search ranking, which allows the site to sort results by relevance instead of merely matching a keyword. In doing so, e-commerce platforms can turn a massive number of failed search experiences into successful conversions.

To replace textual searches, another solution is beginning to be implemented: visual search, a technology that uses artificial intelligence to analyze a photo submitted by a customer and then find the product or products that match that image. Visual search allows customers to take a picture of a product they like and then upload it. The AI software evaluates that specific product, its brand, shape, style, fabric, color, and so on, and then proposes similar products likely to interest the customer.
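Sorting results by relevance rather than by exact keyword hits can be sketched with a classic bag-of-words model. The snippet below is a deliberately simple statistical stand-in (plain TF-IDF weighting with cosine similarity, over invented product descriptions), not the learned ranking models large platforms actually deploy:

```python
import math
from collections import Counter

# Hypothetical product descriptions to rank against a free-text query.
products = {
    "P1": "smartphone with excellent dual camera and night mode",
    "P2": "budget smartphone long battery life",
    "P3": "digital camera with optical zoom",
}

def tokens(text):
    return text.lower().split()

# Inverse document frequency computed over the tiny catalog:
# rare words get higher weight than common ones.
df = Counter()
for desc in products.values():
    df.update(set(tokens(desc)))
idf = {t: math.log(len(products) / n) + 1.0 for t, n in df.items()}

def vector(text):
    """Weight each term by frequency times rarity (tf-idf)."""
    tf = Counter(tokens(text))
    return {t: c * idf.get(t, 1.0) for t, c in tf.items()}

def norm(v):
    return math.sqrt(sum(w * w for w in v.values()))

def cosine(u, v):
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu, nv = norm(u), norm(v)
    return dot / (nu * nv) if nu and nv else 0.0

def search(query):
    """Return product IDs sorted by relevance to the query."""
    q = vector(query)
    return sorted(products, key=lambda p: cosine(q, vector(products[p])), reverse=True)

print(search("smartphone with the best camera"))
```

Unlike strict keyword matching, the query still ranks a relevant product first even though no description contains the words "the" or "best"; low-information terms simply carry little weight.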
Finally, in addition to using images to search for products they want to buy, consumers will be able to use voice search: the ability to search for items by speaking. Voice search uses AI to understand what is being said and to improve the recognition of voices and sentences. Voice search was popularized by voice assistants like Alexa and Siri, forcing e-merchants to re-optimize their web pages, including FAQs, to respond to voice-based queries.

Example of using AI for search results: A company that uses machine learning to provide better search results is eBay. With millions of items listed, the auction site harnesses the power of AI and data to predict and display the most relevant results.

Example of visual search: One of the innovative companies in visual search is Neiman Marcus. With its app "Snap. Find. Shop.", the fashion and beauty brand allows users to take pictures of real-world objects and then find them in the catalog.

IMPROVED CUSTOMER SERVICE

If your business deals with customers daily and you encounter recurring issues or questions, creating a chatbot is a good way to provide customers with information faster and more efficiently than a customer service representative. Simply put, chatbots are automated programs that can "converse" with people to answer questions and carry out specific requests. They have been around for a while now, but have made considerable progress in their ability to adapt to the customer through machine learning. Specifically, chatbots can help you reduce customer service costs and engage consumers more effectively, 24 hours a day, 7 days a week. They also provide a good opportunity to personalize recommendations based on conversation history, and can take on some of the important responsibilities of running an online business, such as automating ordering processes.
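At their simplest, FAQ-style chatbots map an incoming message to a canned "intent" and reply with its answer. A toy keyword-overlap matcher is sketched below; the intents, keywords, and answers are all invented, and real bots replace this lookup with trained language models:

```python
# Hypothetical FAQ intents: keyword sets mapped to canned answers.
INTENTS = {
    "shipping": ({"ship", "shipping", "delivery", "arrive"},
                 "Standard delivery takes 3-5 business days."),
    "returns": ({"return", "refund", "exchange"},
                "You can return any item within 30 days."),
    "hours": ({"hours", "open", "closed"},
              "Our support team is available 24/7."),
}

FALLBACK = "Let me connect you with a human agent."

def reply(message):
    """Answer with the intent whose keywords best overlap the message."""
    words = set(message.lower().replace("?", "").split())
    best_answer, best_score = FALLBACK, 0
    for keywords, answer in INTENTS.values():
        score = len(words & keywords)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

print(reply("When will my order arrive?"))
```

Escalating to a human agent when nothing matches (the FALLBACK above) mirrors how deployed bots hand off conversations they cannot handle.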
At the moment, bots rely on pre-recorded responses and cannot detect sarcasm or humor. But, in the near future, a chatbot will be able to analyze new parameters and opt for a more empathetic and precise answer. Without a doubt, artificial intelligence has already begun to have an impact on e-commerce, making it smarter so that customers are no longer offered irrelevant or ill-suited solutions. And, day after day, AI becomes more and more sophisticated. Over the next few years, the application of machine learning and AI to e-commerce will become an increasingly important differentiating factor in terms of performance. E-merchants who do not take advantage of it may be caught off guard by early adopters who are reshaping the e-commerce market and the expectations of buyers.
HOW DOES ARTIFICIAL INTELLIGENCE BENEFIT E-COMMERCE?
0
how-does-artificial-intelligence-benefit-e-commerce-1b581f7a9d45
2018-05-23
2018-05-23 13:03:54
https://medium.com/s/story/how-does-artificial-intelligence-benefit-e-commerce-1b581f7a9d45
false
2,041
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
christina cheeseman
null
f40ced1a3d74
christinacheeseman827
12
8
20,181,104
null
null
null
null
null
null
0
null
0
bf0fdbba6840
2018-06-07
2018-06-07 16:01:04
2018-06-07
2018-06-07 07:54:49
1
true
en
2018-06-07
2018-06-07 18:42:01
5
1b593783c51a
7.849057
7
0
0
By Todd Essig
3
Sleepwalking Towards Artificial Intimacy: How Psychotherapy Is Failing the Future Photo: VCG/VCG via Getty Images By Todd Essig Some dream that technologies like artificial intelligence and robotics will soon be able to simulate the emotional experience and consequences of physically being with another person. We might call this a dream of artificial intimacy (yet another AI). At that point, machines could meet needs for tenderness and warmth, for romance, empathy and friendship. But there will only be widespread acceptance of artificial intimacy if we are willing to reduce what we expect from relationships to what technology can provide. And that’s what we seem to be doing: sleepwalking towards this future by undermining what we expect from close relationships. Artificial intimacy has long been a science fiction fantasy and now programs appear that say they deliver it in bits and pieces: a therapy-bot, a sex-bot, a best-friend-for-your-child-bot, a care-bot for grandma. For some this other AI is close to realized. What’s wrong with this? A great deal. These machines say they deliver intimacy but don’t, and along the way they lead us to forget what intimacy entails. Intimacy between people involves empathy and this is something machines cannot provide. Empathy is like a promise that after putting yourself in the position of the other, of sharing happiness or pain, you will accept the consequences of having done so. A machine can’t make this promise. They have only been programmed to seem as though they are present when a person describes a moment of happiness or hurt or loss. So when we share a human experience with a machine, we settle for as-if empathy, a lesser thing. Artificial intimacy also has to propose a kind of connection that does not include what human bodies bring to relationships. We feel intimate with other people not only because our “minds” speak words. 
Evolution designed our neurophysiology to communicate in many ways other than verbal, an emotional activation mediated by neural processes that occur out of conscious awareness. This is how parents and babies first communicate, and this non-verbal communication continues throughout life. Bodies also resonate with the physical possibilities of the other, from the most tender to the most violent. A big part of developing trust is knowing that we can hurt each other and do not. How did we get to a place where the idea of artificial intimacy seems so appealing, where our expectations for each other have so deteriorated? By small steps. For over a decade, we have become accustomed to taking the body out of conversations as we discovered that in so many situations it was less stressful to substitute texting for talking. And then, we brought artificial conversation into the mix. We began to chat with Alexa and Siri and Echo about recipes and playlists and where to get the best pizza. Things felt companionate and it didn’t seem odd to expand the conversation. So we requested jokes and asked about the meaning of life and for dating advice. Rather than be alone or reach out to an actual other person, we settled for dialogue with programs that could trick us for a moment into thinking they understood. We settled for convenience over authenticity or empathy. What is surprising is that psychotherapy, a profession that would seem most committed to the power of person-to-person talk, has been part of this cultural shift toward settling for what machines can provide. This might surprise many therapists because it also happened in small steps. They, too, have done it sleepwalking, without being aware of the larger consequences of their clinical decisions. 
The small steps began with a taste for “remote treatment.” For decades, remote treatment, beginning with talking to patients on the phone, has been a useful tool in psychotherapy when someone is ill, traveling for business, or on vacation. Sometimes patients can’t find a therapist locally. And for ongoing treatments it has been the “better than nothing” solution when a patient relocates. But gradually, Skype and Facetime have turned remote treatment into what many therapists consider the new normal: good-enough routine practice. But with routine remote treatment comes the danger of second-rate care. In one recent study, people who received in-person treatment for pain management had far better results than people who received treatment online. It is more effective to be treated face-to-face. But both groups reported the same high levels of satisfaction with their treatment. Most simply put: If you give people online treatment, they will lower their expectations to fit what technology provides. And when remote treatment becomes the norm, hard questions don’t get asked. Therapists have reported feeling a “quickening” when they work with patients online as evidence to support its power and depth. In doing so they ignore the complex neurology of on-screen interactions, especially feelings of intensity and heightened emotionality common when we go online, whether to play a video game, solve a math puzzle, or have a conversation. Therapists interpret their response to technology (the “quickening”) as a response to a particular person. Ironically, it is often patients who are most alive to the differences between on screen and face-to-face meetings. One patient said: “When you share a physical space, even if you don’t act it out, there is always the potential to touch, whether that means kicking or kissing.” This person needed the not acted upon possibilities of physical revenge and seduction to feel safe. 
Without the possibility of “kicking or kissing,” the relationship experience was lifeless and flat rather than meaningfully safe. Another said: “I always felt that if anyone knew me as I really am, they would be really shocked and probably abandon me … I needed to see that my therapist didn’t flinch, wasn’t afraid of me, or disgusted with me in person . . . that he didn’t need the protection of Skype to be with me.” This person needed the possibility of rejection only found in person to feel genuinely accepted. Bodily consequences had to be in play. Even though when we Skype or FaceTime, the image on screen is of an actual person, remote therapy undermines the potential of the relationship. In online therapy, one often sees a bait-and-switch: The original therapeutic promise of being understood is replaced by a momentary “feeling of understanding.” And every time a patient settles for this illusion of empathy in online treatment, a bit of stone has been laid on the road toward accepting artificial intimacy. In psychotherapy there are many reasons to keep bodies in the room. You learn different skills managing a screen and engaging with a person. When bodies are together you experience risk and consequence (if you can never be dropped, you can never feel held) and the power of a rich stimulus array (when you are with another person you never know what will be clinically important). Why have so many therapists been so eager to lose the experience of bodies together? It must be more than money or convenience. We believe that screens provide the short-term benefit of avoiding the anxious-making complexities of physical co-presence. The urge to flee the messy, fleshy anxieties of sitting together in a therapeutic relationship, or any conversation, has always been strong. It’s much easier to flee passion, or hate, easier to flee someone becoming important enough you will go to their office even in the rain. 
Moving treatment to screens eases much of this stress — never mind that it renders the experience pale and less consequential while diminishing what we expect from each other. Like C-rations instead of home cooking, a life, or treatment, lived on screen should be a substitute suitable only when other options are not available. But now we can begin to see the unintended cost of remote treatment. It is a kind of gateway drug for machines that take the therapist out of the treatment altogether. On the surface, this leap makes no logical sense. A therapist talking to a patient via Skype is still relating human-to-human. But for patients, the emotional logic of this slippage is real. After only meeting on screen with one’s therapist for years, a psychotherapy program that generates text messages doesn’t seem that odd. It actually seems familiar. It passes a kind of “Turing test” for artificial intimacy. From a behavioral point of view, patients experience the machine as though it were a person. But this is a set of interactions not a mutual relationship. There is no human reciprocity. There is no one there who makes the promise of empathy: to try to stand in your place and follow through on what they feel when they do. Psychotherapy is more than protocols for sharing information and good advice. It is a human practice constituted by a mutual relationship. When therapists became willing to take bodies out of the room, they contributed to the fantasy that therapy without a human relationship would be the same. And that was an idea that technologists and entrepreneurs could get behind. They have invested in artificial intimacy, creating programs that function as psychotherapists. Patients will talk and machines will “listen.” Therapists will want to object. And they should. When machines “listen,” there is no human understanding and that is the catalyst for all therapeutic change. 
But once therapists gave away the importance of being bodies together in the room, they also gave away the platform they needed to make this objection. Consider two therapy-bots in current practice: WoeBot and SimSensei. Both are artificial intelligence programs marketed unapologetically as providing dialogue-based therapy. "People who don't want to talk to people might be more interested in talking to virtual people," says Jonathan Gratch, director of the center for virtual humans research at the University of Southern California's Institute for Creative Technologies where SimSensei is being developed. He studied patients who talked to computer-program psychotherapists. One patient made it clear that it wasn't just that the program was better than nothing; it was better than a human ever could be. She said: "This is way better than talking to a person. I don't really feel comfortable talking about personal stuff to other people." The program offered therapy without the risk of human intimacy. Of course, she also lost the benefits of such intimacy. Alison Darcy, a psychologist and the CEO of WoeBot, a text-based AI online therapy program, pushes the "better than anything" agenda. She says, "It's almost borderline illegal to say this in my profession, but there is a lot of noise in human relationships . . . . Noise is the fear of being judged." Many online therapists also boast that their patients are freer to share feelings for the same reasons. But they've missed the point; the fear of being judged is signal, not noise. If you never experience a fear of being judged only to then feel accepted and understood, there is no way to heal from that fear. You can avoid it, yes. But therapy is about healing, not hiding. It is not inevitable that we will move toward the acceptance of artificial intimacy whether we are talking about care-bots, friendship-bots or therapy-bots. There is still time to demand more.
After all, empathy, authenticity and embodied relationships most define us as human. These capabilities create human children who can most fully interact with their parents and peers and who can most richly reflect on their lives. They are bedrock for the experiences most central for a well-lived life. When we settle for images on screens or algorithms programmed to generate moments of pretend-understanding, we risk losing each other and we risk losing ourselves. This article was co-written with Sherry Turkle and Gillian Isaacs Russell. Ms. Turkle is Abby Rockefeller Mauzé professor of the social studies of science and technology at MIT and author most recently of Reclaiming Conversation: The Power of Talk in a Digital Age; Ms. Russell is a U.K.-trained psychoanalyst and author of Screen Relations: The Limits of Computer-Mediated Psychoanalysis and Psychotherapy.
Sleepwalking Towards Artificial Intimacy: How Psychotherapy Is Failing the Future
17
sleepwalking-towards-artificial-intimacy-how-psychotherapy-is-failing-the-future-1b593783c51a
2018-08-25
2018-08-25 01:42:00
https://medium.com/s/story/sleepwalking-towards-artificial-intimacy-how-psychotherapy-is-failing-the-future-1b593783c51a
false
2,027
Home Page For The World’s Business Leaders
null
Forbes
null
Forbes
null
forbes
ECONOMY,POLITICS,BUSINESS,CAPITALISM
Forbes
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Forbes
Home Page For The World’s Business Leaders.
3126f7dd42c1
forbes
7,600
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-11
2018-07-11 05:01:32
2018-07-11
2018-07-11 05:24:04
8
false
en
2018-07-11
2018-07-11 05:24:04
2
1b5ace7a459b
1.314465
0
0
0
Great talk by Uli Chettipally on the future of an AI enabled health care and how it can transform health outcomes. He also discussed…
5
How AI will disrupt Medicine for Good Great talk by Uli Chettipally on the future of AI-enabled health care and how it can transform health outcomes. He also briefly discussed the new medical school being opened by Kaiser Permanente. His forthcoming book will be interesting.
How AI will disrupt Medicine for Good
0
how-ai-will-disrupt-medicine-for-good-1b5ace7a459b
2018-07-11
2018-07-11 05:24:05
https://medium.com/s/story/how-ai-will-disrupt-medicine-for-good-1b5ace7a459b
false
48
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Ismail Ali Manik
Uni. of Adelaide & Columbia Uni NY alum; World Bank, PFM, Global Development, Public Policy, Education, Economics, book-reviews, MindMaps, @iamaniku
6a8552d04dc7
ismailalimanik
123
740
20,181,104
null
null
null
null
null
null
0
null
0
40b37ad0701a
2018-07-09
2018-07-09 23:27:20
2018-07-11
2018-07-11 11:17:19
1
false
en
2018-07-11
2018-07-11 11:17:19
13
1b5c18c91938
4.192453
1
0
0
Thoughtful Net #59: curated links from the past few weeks (and a bit)
5
Dust, Unknowns, Instruments, and Shoemaking

Thoughtful Net #59: curated links from the past few weeks (and a bit)

Pille Kirsi, nappy.co

Well here we are again, another issue after a long pause. At least this time I have a reason: two months of sun in Britain, which means I've been walking, cycling, socialising, and generally out of the house more and reading less. It's a good thing, although I do get a little anxious when I see my reading list get so much longer.

The Best

Digital Dust. Jay Owens has written a fascinating article that includes, but is not limited to: tiny smart devices; neural data; link rot; the philosophy of archiving; and big data. Long, dense, worth it. I think we… expect smaller, more personal media outside the public sphere to be lost to time — or, more precisely, to achieve a kind of privacy through obscurity. We change email addresses, stop using social networks, or don't scroll that many pages back in the search bar. And so the past is sloughed off, gently, quietly, out of sight and out of mind — to us, at least, though not the platforms that host this data.

Explaining

Ways to think about machine learning. Ben Evans tries to nail down what machine learning is, what it will be good for, and how we might talk about it. It's a step change in what we can do with computers, and that will be part of many different products for many different companies. Eventually, pretty much everything will have [machine learning] somewhere inside and no-one will care.

GitHub Is Microsoft's $7.5 Billion Undo Button. Paul Ford defines Github for a non-technical audience following the recent (huge) acquisition by Microsoft. If only everything worked like this! Why are we still sending files around via email? Why aren't there multiple branching versions of everything? Why do we pretend that there's any canonical version of anything?
Git acknowledges a long-held, shared, and hard-to-express truth, which is that the world is ever-shifting and nothing is ever finished.

Known Unknowns. An essay by James Bridle, based on his new book (which I've bought but not read yet), on the inscrutability of machine learning. Technology does not emerge from a vacuum; it is the reification of the beliefs and desires of its creators. It is assembled from ideas and fantasies developed through evolution and culture, pedagogy and debate, endlessly entangled and enfolded. The belief in an objective schism between technology and the world is nonsense, and one that has very real outcomes.

A better metaphor for technology. Dieter Bohn explores a better way to talk about tech: as an instrument, rather than a tool. There are a whole host of connotations for the word "instrument" that I believe would be helpful to keep in mind as we interact with technology. Most important to me, though, is that the word helps you recognize that instruments are enmeshed in culture. They create it, sure, but they also participate in it as objects in and of themselves.

History

'Crush Them': An Oral History of the Lawsuit That Upended Silicon Valley. Victor Luckerson speaks to the major players in the antitrust trial against Microsoft — who do not come out of this well. Microsoft may have simply been too bloated by the turn of the century to outflank more nimble competitors like Google and Apple. But there's still considerable debate about whether it was the government lawsuit that nudged the company into an abyss of late-to-the-party products such as the Zune MP3 player and the Bing search engine.

How Ad Men Invented The Future. Part Two of Darren Garrett's A Visual History of the Future looks at post-war futurist advertising. The whole series is worth reading, but this is especially good.
If Americans were to enjoy more time with their new televisions, they would need to be liberated from the drudgery of domestic tasks. Tedious work such as mowing grass or trimming shrubs could be taken care of without leaving the lounge chair, using a microphone to instruct your mechanized labor force while you kicked back, drank, smoked, and read magazines.

Living with Technology

40 Kilometers. Om Malik on shoemaking and driving and distance and digital data vs the real world and context and serendipity. Quite lovely. If technology today has reduced actual humans to antiseptic clicks, users, engagement minutes and made everything "data" then the same technology has the ability to rekindle what is being lost in this data-infused world: subtle magic of discovery, so subtle that it is near invisible and yet brings a hint of a smile, unintentionally.

End of the line: our guide to the death of the telephone. A funny guide to all the ways we speak on the telephone, from landlines (ask your parents) to voice notes. By Rhik Sammader. In the old days of landlines… people always picked up, because life was boring; awkwardness abounded. With mobiles, and the ability to silence a ringer without aborting the call, came a golden age of [screening].

Welcome to Blaine, the town Amazon Prime built. Alexandra Samuel writes a portrait of Blaine, Washington, a town that both prospered and suffered due to its location on the border of the US and Canada, where Amazon didn't deliver — and the changes ahead now that they do. For the past decade Blaine has flourished, thanks to the discrepancy between the explosion of e-commerce in the US and the still-developing e-commerce network in Canada. Blaine's handful of residents have grown accustomed to a regular stream of Canadians who come to town specifically to pick up their US packages.
The Thoughtful Net is an occasional (less than weekly, more than monthly) publication collecting great writing about the internet and technology, culture, information, soci­ety, science, and philo­sophy. If you prefer to receive it in your inbox you can follow this publication or subscribe to the email newsletter.
Dust, Unknowns, Instruments, and Shoemaking
5
dust-unknowns-instruments-and-shoemaking-1b5c18c91938
2018-07-11
2018-07-11 11:17:20
https://medium.com/s/story/dust-unknowns-instruments-and-shoemaking-1b5c18c91938
false
1,058
An occasional (less than weekly, more than monthly) collection of links to great writing about the internet and technology, culture, information, soci­ety, science, and philo­sophy.
null
null
null
The Thoughtful Net
null
the-thoughtful-net
null
stopsatgreen
Digital Archiving
digital-archiving
Digital Archiving
42
Peter Gasston
Innovation Lead at rehab agency. Author. Speaker. Historian. Londoner. Husband. Person.
923a81ab0571
stopsatgreen
2,048
185
20,181,104
null
null
null
null
null
null
0
null
0
f702855ffe47
2017-10-18
2017-10-18 20:00:25
2017-10-18
2017-10-18 20:00:26
7
false
en
2017-10-18
2017-10-18 20:00:26
17
1b5c51b9108a
2.880189
0
0
0
null
3
What is the difference between Machine Learning and Deep Learning # medium.com When we talk about data science or artificial intelligence, the two very common terminologies that come into…
A Glance into the Crystal Ball: The Future of Voice Technology # medium.com Voice technology is changing the way we interact with people and devices. To date, innovation in voice has b…
Transforming the World with AI # medium.com Wishing you all a very Happy Diwali! On this festive occasion, I wish to give you a interim update on my jou…
The 10 Top Recommendations for the AI Field in 2017 # medium.com Let’s begin by removing ‘black box’ algorithms from core public agencies Image: Trevor Paglen, “Lake Tenaya,…
Machine Learning in Bookmaking # medium.com Machine Learning is becoming a standard tool of the sports betting industry. At fansunite.io we are keenly a…
Building Psychological Contexts from Conversational Text # medium.com Using natural language processing to identify challenges to human flourishing By: David Van Bruwaene, CEO; V…
Blockchain: Not job loss but unlimited jobs # medium.com Blockchain will help us be humans online. In the offline world we have our individuality. We choose differen…
Conway Law revisited: Errors must be shared with AI # medium.com And AI must share their errors. I planned to join “future of work”, but had no time to apply. Initially, I re…
50 Shades of Grey — The Psychology of a Data Scientist # medium.com Unless you’ve recently graduated from one of the new Data Science courses that have been popping up online a…
Artificial Intelligence Only Works alongside Skilled Testers # medium.com (This content originally appeared on TechWell Insights) When discussing the future of artificial intelligenc…
Natural language processing is improving automated customer support # venturebeat.com GUEST: CEOs, CIOs, CMOs, and CXOs alike are increasingly focused on creating customer experience (CX) that i…
Google’s DeepMind unveils AlphaGo AI that learns from itself and beat its predecessors # venturebeat.com DeepMind, a division of Google that’s focused on advancing artificial intelligence research, unveiled a new…
Samsung unveils Bixby 2.0 with long-awaited Viv Labs integration # venturebeat.com Samsung today announced its AI assistant Bixby will begin to incorporate Viv Labs technology for third-party…
Woebot names AI pioneer Andrew Ng as chairman to work on mental health # venturebeat.com Andrew Ng, one of the cofounders of the Google Brain project, will be the new chairman of Woebot, a company…
NVIDIA Inception Awards Come to Tel Aviv: Ultimate Startup Faceoff Plays Out in Ultimate Startup Center # blogs.nvidia.com It was the ultimate setting — like playing baseball at Yankee Stadium or singing opera at La Scala. NVIDIA’s…
AWS Deep Learning AMI Now Supports PyTorch, Keras 2 and Latest Deep Learning Frameworks # aws.amazon.com Today, we’re pleased to announce an update to the AWS Deep Learning AMI. The AWS Deep Learning AMI, which le…
AlphaGo Zero: Learning from scratch # deepmind.com
17 new things to read in AI
0
17-new-things-to-read-in-ai-1b5c51b9108a
2018-06-04
2018-06-04 10:53:39
https://medium.com/s/story/17-new-things-to-read-in-ai-1b5c51b9108a
false
485
AI developments around the world
null
null
null
AI Hawk
aihawk1089@gmail.com
ai-hawk
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
null
Deep Learning
deep-learning
Deep Learning
12,189
AI Hawk
null
a9a7e4d2b403
aihawk1089
15
6
20,181,104
null
null
null
null
null
null
0
null
0
af4f2b62fccd
2017-11-27
2017-11-27 16:13:19
2017-11-27
2017-11-27 16:20:04
1
false
en
2017-12-07
2017-12-07 17:13:29
17
1b60bdb4b135
4.89434
0
0
0
After extensive conversations on mobile strategy with leading analysts, it’s clear there are four key areas every organization needs to be…
5
4 Takeaways on the Future of Mobility After extensive conversations on mobile strategy with leading analysts, it’s clear there are four key areas every organization needs to be thinking about. The mobile market continues to grow, with 77% of Americans (and 92% of those between 18–29) owning a smartphone, and over 2.5 billion smartphone users forecast worldwide by next year. There is much to consider when developing a strategy for such a critical and massive market, especially one which changes so quickly. As the concept of “mobile” evolves, you can no longer call yourself “mobile-first,” develop an app and assume you’ll be set for years — even if it was really a great app when you released it. Market trends and customer needs are always moving, and you need to be prepared for them. I’ve just wrapped up a series of conversations on mobile strategy with analysts from both Gartner and Forrester, and am convinced there are four areas worth focusing your strategy around in particular. 1. Mobile-First is not Enough A “mobile-first approach” that revolves around creating a singular mobile experience is too narrow a view — after all, your customers are interacting with you from a wide number of devices. We’ve been talking for some time about the need to shift to a user-first model instead. In this model, the user’s preferred digital channel is supported fluidly, even as they move between devices and the augmented and physical world. The user’s experience needs to be supported in new forms of interaction, not just a multitude of screens but also conversational, AR and VR, among others. The pull model, where we expect the user to come to the app, also needs to be replaced with a push model that eliminates the mundane work for the user and delivers information and insights exactly when the user needs to take action. 
At this point, the mobile market has changed so much that there is a major possibility that the Gartner and Forrester analysis that focused on Mobility will broaden, and coverage of the standalone mobile market will cease to exist. You shouldn’t think about mobile in isolation anymore either. 2. Applications aren’t Enough Either The world of applications is broad and not a little messy, with full-blown applications, apps, websites, portals and more falling generally under the category. These are all now beginning to blur, at least on the user-experience level, as well they should — the user doesn’t care how they’re accessing something, only about what they’re trying to accomplish. Content has a big role to play too. Traditionally we in IT/development/marketing have been conditioned to think “I need a website, so I should get a CMS” or “I need an app, so I need a development tool or platform,” but content should not be siloed into one “application.” Content can be useful in many digital contexts, whether transactional or knowledge-based or anything else. This is driving the headless CMS concept. Are you curious about whether you need a headless or a full featured CMS? Learn more in our on-demand webinar. So what’s the new term for what we’re trying to talk about now? Terms like “applications” (enterprise software), “apps” (mobile app stores) and “digital experience” (marketing around modern customer experience) have their own distinct connotations. It will be interesting to see where we end up, but one thing is for sure, we (along with Gartner and Forrester) will certainly be talking about something different soon. 3. Cognitive and AI Technology is Now Everywhere AI adoption is surging, and just as it’s changing things in the world of business and consumer experiences, it’s having a dramatic impact on IT and Application Development. AI has always been on the fringe of IT as developers are responsible for the systems that generate the data to be analyzed. 
And IT resources are involved with the analytics team when it comes to preparing or loading data into a data lake. For the most part, however, data science expertise has been managed as a separate team. That’s now changing. Organizations are beginning to treat AI as a natural extension of the digital experience, seamlessly integrating it within the operations of a company. This includes IT and Application Development. This integration opens up the opportunity to deliver a new kind of cognitive-first experience, where analytical predictions aren’t just part of the experience, but are often driving it. Imagine if your Machine Learning model detects an anomaly, automatically analyzes it in terms of priority and impact, and determines an action is needed. It then fires an event that passes to another part of the digital experience. This could be a business rules engine that handles the action, or it could determine a human is needed and invoke a service to manage that, like a chat window. This could lead to fluid interactions across any number of mobile or conversational experiences. 4. Serverless Architecture is the Future Here The cloud has become so prevalent that cloud support is more or less assumed. It’s no longer a differentiating factor, so organizations are moving on to the next hot topic — and that’s AI. However, not all cloud implementations are equal. There is a huge difference between deploying a monolithic application (including Java programs where the monolith is the archive file) to cloud servers, and embracing a cloud-native approach. Yes, both will get you out of the data center business, but your compute costs, your ability to scale, and your agility in implementing small changes will be vastly different. A cloud-native and serverless approach that utilizes microservices is needed to deliver the kind of results you need to be competitive today. 
You can get these through major players like Amazon (AWS), Microsoft (Azure), Google (GCP) or IBM (IBM Cloud), but for most organizations that’s too difficult. It also ties you to one specific cloud, which is a problem for organizations that do business internationally, as different jurisdictions require different cloud options. Remember the push model I referenced above? The serverless approach makes it easy to support that, as well as to create models for other new digital experiences like the IoT. Making it All Easier At Progress, our goal is to make all of this easier — for everyone. Modern cognitive-first mobility should be attainable for even an average-sized organization, and should be much simpler and quicker for advanced organizations to implement than a typical solution is today. Here are some of the ways we aim to make modern Application Development and IT better for you: a serverless backend, giving you the foundation for microservices; frontend tools that allow you to build native experiences with a single code base; easy management of and access to content via standard APIs; and new digital channels such as chatbot and AR from Progress Labs. We see these as the key components in the cognitive apps of the future (and today). You can learn more about how you can build cognitive-first business applications here. This story originally appeared on the Progress blog, by Mark Troester. Looking for more great stories about technology and building tomorrow’s business apps? Check out the blog for more.
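The cognitive-first event flow described above (a model detects an anomaly, scores its priority, and routes it either to automated handling or to a human channel) can be sketched roughly as follows. All names, thresholds, and the routing logic here are illustrative assumptions, not part of any Progress product or API:

```python
# Minimal sketch of an event-driven, cognitive-first routing step.
# A detector emits an Anomaly; a stand-in "business rules engine"
# decides whether to auto-remediate or open a human chat session.
from dataclasses import dataclass


@dataclass
class Anomaly:
    metric: str
    severity: float  # 0.0 (noise) .. 1.0 (critical) -- hypothetical scale
    impact: str


def route_event(anomaly: Anomaly) -> str:
    """Route low-severity anomalies to automation, high-severity to a human."""
    if anomaly.severity < 0.7:  # threshold chosen arbitrarily for the sketch
        return f"auto-remediate:{anomaly.metric}"
    return f"open-chat:{anomaly.metric} ({anomaly.impact})"


print(route_event(Anomaly("checkout-latency", 0.4, "minor")))
print(route_event(Anomaly("payment-errors", 0.9, "revenue loss")))
```

In a serverless deployment each routing branch would typically invoke a separate function or service, which is what keeps the interaction fluid across mobile and conversational channels.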
4 Takeaways on the Future of Mobility
0
4-takeaways-on-the-future-of-mobility-1b60bdb4b135
2018-04-26
2018-04-26 20:48:38
https://medium.com/s/story/4-takeaways-on-the-future-of-mobility-1b60bdb4b135
false
1,244
Develop and deliver tomorrow’s business applications today. Find out more at http://www.progress.com
null
progresssw
null
Stories by Progress
null
stories-by-progress
TECHNOLOGY,SOFTWARE,BUSINESS,DEVELOPER TOOLS,COGNITIVE BUSINESS
progresssw
Serverless
serverless
Serverless
3,812
Progress
Tomorrow’s business applications are cognitive-first. Develop and deliver them today with technologies from Progress. Find out more at http://www.progress.com
a2b0a7db3e8e
ProgressSW
1,212
689
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-05
2018-03-05 08:01:52
2018-03-05
2018-03-05 08:16:09
2
false
en
2018-03-28
2018-03-28 06:35:06
4
1b6189d7ea7b
3.655031
0
0
0
The creation of advanced sex robots is leading others, primarily women, to cogitate if some men are ridden with unusual fantasies or just…
2
Rhetorical Analysis James alongside one of his four dolls, Harmony. Photo by The Guardian The creation of advanced sex robots is leading others, primarily women, to cogitate whether some men are ridden with unusual fantasies or are just plain pathetic. One woman who gave her viewpoint on the matter is author Fiona Sturges, who wrote “The Sex Robots Are Coming: seedy, sordid — but mainly just sad,” an article published by The Guardian in November 2017. Sturges argues that the reasons, or “excuses,” for a man to own such a device are not valid, assuming that only creeps or outcasts will make purchases. Although Sturges does build some credibility in the article by using the story of James, an owner, and the perspective of the creator, Matt McMullen, on his product(s), both somewhat reputable sources, she does not provide any statistical evidence linking the mental health of male consumers with robots. She focuses on her personal emotions and her insight into others to appeal to the reader; in other words, she prioritizes Pathos. Pathos In the article, Sturges appeals to Pathos by including vivid descriptions of the products. She tells of James, a 58-year-old man from Atlanta, who keeps a total of four robots in his dwelling while having a wife by his side. She describes the actions James takes in caring for his dolls: “Every morning he carefully gets them dressed and puts on their make-up…he might take them for a picnic…[or] he’ll stay in and watch television…” Here, she demonstrates how compassionate somebody can be toward these artificial dolls. She also adds the “painstaking process where he must bend the dolls into a sitting position, and adjust their eyeballs…But that’s OK because there’s nothing he wouldn’t do for his synthetic companions”. By using James as an example, Sturges gives the reader grounds to consider her article a valuable source. 
In this case, she uses James to indicate how owning sex robots can affect a person’s mentality, leading him to treat an inanimate object as a human being. In addition, she mentions Realbotix, the company that manufactures these life-like products, including the company’s owner, Matt McMullen. She says that the San Marcos, California workshop has “workstations [spilled] with custom made nipples and wobbling artificial labia…researchers are utilizing new technology to [make] their dolls smile, pout, flutter their eyelashes and tell jokes.” She also adds an illustration of the dolls’ nether regions, how “lubrication systems are in development for ‘authentic’ sexual experiences [including] muscle spasms to simulate female orgasm.” Sturges does an excellent job using Pathos by being insightful about the actuality of how these devices are created, along with their “innovation” of mimicking a realistic sexual encounter with a woman, thus causing the reader to cringe a little at the fact. Matt McMullen, Realbotix CEO/Creative Director Photo by The Guardian Lastly, Sturges exercises pathos by revealing Matt McMullen’s stance toward these mechanical beings. She mentions that “Matt sees a glittering future in which sex robots are ‘as commonplace as porn’ and rejects the notion that his dolls are damaging to women…’It’s not for everyone,’ he shrugs…” This gives readers perspective, letting them decide whether McMullen’s justification for his stance is adequate. The author continues to appeal to the audience by using pathos. Towards the end of the article, Sturges reveals something as astonishing as a moment from the popular television program The Maury Show. She announces that “James, it turns out, also has a wife, Tine, who is a living, breathing human, and is the very definition of long suffering.” This is a complete shock to the reader, who probably was under the assumption that James was some kind of lonely “freak”, though that turns out to be otherwise. 
Sturges continues by giving the background that led up to James acquiring the machines: “Two years ago, she left the marital home for nine months to care for her mother; she returned to four new lodgers, distinguishable by their caramel complexions, slim-line figures and willingness to remain silent at all times.” The author uses phrases like “long suffering” and “willingness to remain silent” in order to have a female reader imagine herself in Tine’s shoes, causing emotional discomfort. Lastly, she delivers the final blow using pathos, adding that “James looks pained when asked what he would do if he had to choose his wife and his favorite doll, April. ‘I honestly don’t know,’ he says”. Here the reader can picture the pain Tine has dealt with: the feeling of betrayal and anguish that somebody you have had a relationship with for years would consider choosing an artificial version of you instead. After these few examples of how the author exercised pathos, the reader of the article is persuaded of her credibility. For the most part, women would be highly receptive to what Sturges had to say. In addition, the assumption that the ideal market for these sex robots would be lonesome men is proved wrong by James’ story, but readers’ suspicions about the moral and ethical ideals of the creator are correct. Works Cited Sturges, Fiona. “The Sex Robots Are Coming: seedy, sordid — but mainly just sad.” The Guardian, Guardian News and Media, 25 Nov. 2017, www.theguardian.com/tv-and-radio/2017/nov/25/sex-robots-are-coming-seedy-sordid-sad.
Rhetorical Analysis
0
rhetorical-analysis-1b6189d7ea7b
2018-03-28
2018-03-28 06:35:07
https://medium.com/s/story/rhetorical-analysis-1b6189d7ea7b
false
867
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Brian
I am currently studying at San Francisco State University, where I am majoring in Cinema. I’m a 3rd year student & am the first in my family to attend college.
c7434a45fe03
balonzo
5
7
20,181,104
null
null
null
null
null
null
0
null
0
30c93e6ab7a9
2017-12-03
2017-12-03 22:21:57
2017-12-03
2017-12-03 22:23:33
1
false
en
2017-12-03
2017-12-03 22:23:33
2
1b62b732e8cf
3.29434
6
0
0
Incredible, but incredibly hard.
5
Understanding the human brain Incredible, but incredibly hard. The human brain is a complex thing. It controls nearly everything that we do, right from regulating our heartbeat and body temperature to thinking thoughts about the brain itself, like I’m doing now. No other living thing that we know of comes anywhere close. We are spending billions of dollars on artificial intelligence meant to match what our brain can do, and none of those efforts are anywhere close either. Over the last couple of weeks, I read a biographical account of Elon Musk, the closest person alive to Marvel’s Tony Stark, a.k.a. Iron Man. The book details how this man has invested all his time and money in building companies that potentially have far-reaching consequences for the future of our planet — Tesla, SpaceX and SolarCity. It is a fantastic read. But since the time the book was published, Elon Musk has gone on to found another company with potentially even more far-reaching consequences for the future of the human race. This company is called Neuralink. And their website states that “Neuralink is developing ultra high bandwidth brain-machine interfaces to connect humans and computers.” Right out of sci-fi thrillers, I love the idea. Incredible, but incredibly hard. To get an idea of how hard it is, consider the following scenario. When you enter a room that you have never been in and want to turn on the light, what do you do? You flip every switch you can find in the room in the hope that one of them is the switch that turns on the light. Eventually, you will find that switch. It may be a little annoying, but it is a very simple task at the end of the day, because the room has at most ten switches that you have to turn on and off before you find the right one. Now, if this room were wired differently from most rooms you have been in, and it were possible for the switches to work in combination, that raises the complexity by several orders of magnitude. 
For example, if turning a switch on or off were not enough by itself to turn on the light, but what was needed was a combination of switches to be on at the same time, you could spend a lifetime trying to turn that light on. With ten switches, it is the equivalent of guessing someone’s four-digit ATM pin by brute force. Our brain is like this weirdly wired room. Only instead of ten switches, we have a hundred billion neurons that work in combination. There have been great advances in physics and biology giving rise to something called optogenetics. This is a neat piece of work that lets you turn specific neurons on and off using light when they have specific genetic markers. Hence the name optogenetics. Without optogenetics, you would be a woman without hands in that room with a hundred billion switches, faced with the task of finding the right combination of switches to turn on the light. Optogenetics gives you hands. It’s a massive improvement on the situation prior to that, but you are still left with an incredibly daunting task ahead. But we humans are undeterred by odds like that. We like to keep on pushing the boundaries. So, there have been neuroscientists running experiments on mice and other animals using optogenetics to understand the brains of these animals better. But, in doing so, they have realised that the switches are not exactly like switches that one can turn on and off, but more like the temperature regulator on the air-conditioner or the speed regulator on the ceiling fan, with a range between minimum and maximum activity. This has now increased the complexity of the problem by a few more orders of magnitude. So, we have a long way to go. But we will eventually get there. In the real world that we live in day in and day out, where neuroscience experiments and optogenetics aren’t household names, we are constantly faced with scenarios we like and don’t like. 
And we are constantly trying to make sense of why these might be happening, why we didn’t get that promotion, or why that cute guy didn’t call back after the first date, or how Donald Trump is still holding the Presidency. Unlike the complex workings of the brain, we have a simple hack at our disposal. Unlike the attempt at understanding the workings of the brain, we don’t need causality. All we need to do is assume responsibility. The moment we do that, we are in control of how we respond and we will respond to improve and make the situation better, or in the least, learn from it. Before you go… If you liked this, support my work. All you need to do is clap. Subscribe to my weekly newsletter. Read my book
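The switch-counting analogy above can be made concrete with a couple of lines of arithmetic: n independent on/off switches give 2**n combinations to search by brute force, and if each "switch" is really a regulator with k settings (as the essay notes neurons turn out to be), the space grows to k**n. This is a sketch of that calculation, not neuroscience:

```python
# Size of the brute-force search space for the room-of-switches analogy.
def search_space(n_switches, levels=2):
    """Number of distinct settings of n switches, each with `levels` states."""
    return levels ** n_switches


# Ten binary switches: 1,024 combinations, the same ballpark as
# brute-forcing a four-digit ATM PIN (10,000 possibilities).
print(search_space(10))            # 1024
# If each switch is a graded regulator with, say, 5 settings:
print(search_space(10, levels=5))  # 9765625
```

With a hundred billion graded switches, the exponent makes the space unimaginably larger, which is the essay's point about why optogenetic "hands" alone don't make the task easy.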
Understanding the human brain
24
understanding-the-human-brain-1b62b732e8cf
2018-04-20
2018-04-20 12:49:40
https://medium.com/s/story/understanding-the-human-brain-1b62b732e8cf
false
820
Life hacks, productivity hacks, life lessons, career advice, relationships, travel, and everything else one needs for leading the good life.
null
kumariimc
null
A Good Life
kumara@kumartalks.com
a-good-life
LIFE LESSONS,PRODUCTIVITY,CAREER ADVICE,RELATIONSHIPS,CULTURE
kumariimc
Elon Musk
elon-musk
Elon Musk
4,393
Kumara Raghavendra
Writer. Comedian. Product + Data Science @ Booking.com. Discovering the world, one idea at a time.
37423f3c61d8
kumariimc
1,538
1,240
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-29
2017-11-29 23:08:49
2017-11-30
2017-11-30 05:24:20
1
false
en
2017-11-30
2017-11-30 05:52:04
6
1b633d3387b8
1.611321
0
1
0
The Blockchain Expo North America 2017 was held in the heart of Silicon Valley at the Santa Clara Convention Center, on November 29th. The…
5
Xin Song, CEO of Bottos, at the Blockchain Expo North America 2017 The Blockchain Expo North America 2017 was held in the heart of Silicon Valley at the Santa Clara Convention Center, on November 29th. The Bottos team attended and Mr. Xin Song, CEO of Bottos, gave a wonderful speech at the sincere invitation of the organizer. The theme of Mr. Xin Song’s speech was: NEO in 2017, Bottos in 2018! His speech gave a detailed introduction of Bottos from three aspects: the strong team, a promising market, and a booming, active community. The attendees present were from leading enterprises in the global Blockchain and technology industries, including AT&T, CITCO, HSBC, Citibank, BMW, J.P. Morgan, etc. In his speech, Mr. Xin Song said that the Bottos team counts not only the founder and former VP of the famous Blockchain project NEO as its project founder, but also Mr. Xin Song himself, former China president of the Droege Group with more than ten years’ experience in internet and digital transformation, as CEO. More notably, the CTO of Bottos, Chao Wang, served as the head of Wanxiang. Our AI Architect, Zhen Gao, has made AI his focus of research and has published over 100 journal and conference papers. As with the rest of the team, listing his accomplishments would take up too much space for this post. The artificial intelligence and Blockchain industries are currently booming and the future looks extremely promising. Mr. Song made it clear that the Bottos team is well equipped for what lies ahead… The Blockchain Expo introduced Bottos to Blockchain lovers around the world while numerous AI companies, from the co-located AI Expo 2017, expressed a strong desire for cooperation. Bottos’ vision of building a globalized Blockchain project has always been the same… We believe we will continue to grow and support the creation of high-quality AI projects around the world! We’re also very excited to tell you that our global community, as of now, has more than 100,000 people! 
We believe that Bottos is a great project that is worth your attention and cooperation. Follow us below to stay up-to-date on everything Bottos! Website | Telegram | Facebook | Twitter | GitHub | LinkedIn
Xin Song, CEO of Bottos, at the Blockchain Expo North America 2017
54
xin-song-ceo-of-bottos-at-the-blockchain-expo-north-america-2017-1b633d3387b8
2018-03-15
2018-03-15 00:05:49
https://medium.com/s/story/xin-song-ceo-of-bottos-at-the-blockchain-expo-north-america-2017-1b633d3387b8
false
374
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Bottos AI
Bottos - A Decentralized AI Data Sharing Network
58b32476bec4
bottos_ai
374
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-19
2018-09-19 19:07:50
2018-09-19
2018-09-19 19:17:24
1
false
en
2018-09-19
2018-09-19 19:30:03
2
1b6498a21719
1.65283
2
0
0
Social media can be the Holy Grail of communication for an introvert. I decide when, where, and how long I interact. Then again, I’m…
5
Social Media. “girl wearing grey long-sleeved shirt using MacBook Pro on brown wooden table” by Andrew Neel on Unsplash Social media can be the Holy Grail of communication for an introvert. I decide when, where, and how long I interact. Then again, I’m grateful that I went through the transition. I’m a 90s kid. So, I remember how things were before. I can only speculate, but if the internet had been around, my childhood would have been a tiny bit happier. I’m not getting into bullying or haters because these things never happened to me. You may wonder why I chose to write about this. Three hundred and fifty-six words sitting in a document from two months ago titled ‘Me and Social Media’ and a conversation on Twitter about the continuous algorithm changes were the catalysts. I see this Internet thing as an opportunity and a tool. I share my art and writing progress. I am a cheerleader for others. I try to bring positivity to my life by following inspiring individuals. But to navigate this world, you require discipline, clear goals, and a well-thought-out answer to “WHY are you on social media?” Well, the fact that I can build my rep using FREE resources… it’s one of my reasons. Plus, I met fantastic people online and then offline. I learn so much by going down the rabbit hole concerning a particular subject. I chat with people from around the world. I… We got a ton of information — and before you say ‘Hmm… What kind of information?’ — that’s why you read multiple articles and keep going until you reach at least page 33 on Google. Now, if you don’t care about it or you consider it a waste of energy, that’s fine. This article is from my personal experience. Though I’m positive that this, right this second, is only the beginning. Virtual Reality. Artificial Intelligence. These are gonna be the norm in the next few decades or even years. Whatever the timeline may be, the technology is here — in its infancy, yes — but, IT IS HERE. 
Listen to Elon Musk on The Joe Rogan Experience. I got goosebumps. Yup, the robots are gonna kill us. Still, there’s a silver lining. For now, we humans are still here. So, let’s enjoy this era and use it to our advantage. Thank you for reading!
Social Media.
4
social-media-1b6498a21719
2018-09-19
2018-09-19 19:30:03
https://medium.com/s/story/social-media-1b6498a21719
false
385
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Pia ❤ Florina Radulescu
Curious soul. Nonsense writer. Tea lover. Miles eater.
5c74cdbea85d
piart33
40
146
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-28
2017-09-28 09:59:12
2017-09-28
2017-09-28 10:04:33
4
false
en
2017-09-29
2017-09-29 01:34:02
2
1b64adf05197
4.967925
1
0
0
Artificial intelligence (AI) has taken the tech sector by storm in the past year. Yet, what remains unanswered in many people’s minds is…
5
Empowering traditional industries with AI: an interview with Benyu Zhang, CEO of CloudBrain Artificial intelligence (AI) has taken the tech sector by storm in the past year. Yet, what remains unanswered in many people’s minds is how else AI might be applied and what changes it will bring about in our daily lives. China’s AI startup CloudBrain provides part of the answer to this question. The Beijing and Silicon Valley-based startup, which secured several million US dollars in Series A funding in July, aims to equip traditional sectors with AI technology, making them more productive and more impactful, much in the way the advent of the internet changed and empowered these industries. Benyu Zhang, founder and CEO of CloudBrain. Photo from CloudBrain An AI startup founded by an AI expert Unlike many other entrepreneurs who recently began pursuing the trend of AI, Benyu Zhang, founder and CEO of CloudBrain, has been developing his expertise in AI for 18 years. Zhang’s interest in AI was piqued early, when he was still in high school. Back then, he was fascinated by a computer game in which players could write rules to make a bird in the game more vivid. “I was curious about how to combine computers with living things,” Zhang said. This curiosity led him to pursue a bachelor’s degree in computer science at Peking University. He continued to study at the university’s Artificial Intelligence Lab and graduated with a master’s degree in 2002. After graduation, Zhang worked for Microsoft, Google, and Facebook for more than a decade, focusing on machine learning, deep learning, and other AI-related fields. In 2015, Zhang decided to return to China and found his own company, CloudBrain. “Internet and mobile internet have accumulated abundant digitalized data for AI. It’s time to analyze and utilize this data,” Zhang told AllTechAsia. 
At present, CloudBrain is headquartered in both Beijing and Mountain View, California, with a small office in China’s eastern city Hangzhou. Zhang said he spends more time in the US office, leading its 10-person team focusing on AI research and development. Meanwhile, a crew of around 20 people in Beijing mainly targets the application of the company’s AI technology. Empowering traditional sectors to achieve more with AI AlphaGo’s wins over global top-notch Go professionals have stunned the world, leaving people wondering what changes AI will bring to our lives. In fact, AI has already been widely integrated into search engines and advertising. Google and Facebook, for instance, leverage AI technology to show users tailored ads based on search history and other online behavior. Zhang believes AI has created significant value in online search and ad businesses and believes it will achieve even greater value if applied across other traditional industries. With this philosophy, Zhang designed CloudBrain as an AI platform to provide operational benefits to traditional industries. The startup collaborates with enterprises from finance, energy, human resources, among other industries, and makes businesses smarter by harnessing the power of AI. CloudBrain’s co-founder Zhiyong Long, who oversees the startup’s business expansion, explained how the collaboration works. CloudBrain is working with UnionPay Smart, a subsidiary of the world’s largest card payment organization, China UnionPay, to help business owners find consumers more efficiently and accurately. Based on UnionPay’s massive card payments data, CloudBrain is able to define a group of consumers that buy certain products. It can then locate potential buyers with similar consuming characteristics but who have not yet made a purchase. It will recommend these likely shoppers to business owners to make the deal happen. 
This strategy is slightly different from the personalized recommendation methods utilized by Amazon and Alibaba’s e-commerce sites, which recommend items to shoppers based on their own shopping or search history. Photo from 699pic.com When it comes to the human resources sector, it’s all about match-making. According to Long, CloudBrain is working with Shixisheng.com, China’s largest intern recruitment site, to help both companies and interns find the right match. When looking for interns, HR departments of sizable and established companies spend a great deal of time evaluating far too many applications. The HRs of less-known companies, however, face the opposite dilemma — too few or even no candidates. This is precisely where AI can make a difference. CloudBrain’s AI technology will analyze both the resumes of candidates and job descriptions then match the parties as suitably as possible. In the case of large-scale companies, this system will help sort all candidates’ resumes and create a rank list based on matching degree. In this scenario, HRs of large companies need only evaluate, for instance, the top 500 resumes rather than all 5000, in order to find the candidates they need. As for smaller companies, they no longer need worry about the problem of attracting zero candidates. CloudBrain’s AI system will recommend candidates whose resumes match job descriptions to apply for those internships. “What’s more exciting is that we are thinking about making more precise matches based on job interviews and offer results, and even how those interns perform at work, instead of simply analyzing resumes,” Long added. As for the energy sector, CloudBrain provides solutions that help electricity power companies predict the consumption volume of a certain geographic area, so that electricity can be better allocated in advance. 
The startup is also collaborating with one of China’s top ten smartphone manufacturers to develop built-in AI software enabling smartphones to recommend customized contents to users. Photo from deepmind.com DeepMind is their ultimate goal AlphaGo’s developer DeepMind, founded in London in 2010 and acquired by Google in 2014, is one of the globe’s most prestigious AI companies. CloudBrain has its eye on it. “We’re learning from DeepMind and hope someday we will surpass it,” Zhang boasts. Despite the fact that both are AI companies, CloudBrain differs considerably from DeepMind. According to Zhang, Google’s subsidiary focuses more on cutting-edge research, while CloudBrain prioritizes the application of AI technology across different industries. Zhang believes his company is nimbler and more flexible than DeepMind because it’s an independent company rather than part of a larger conglomerate. What’s more, he holds the view that the data and scenarios in CloudBrain’s application will facilitate the development of AI technology in return. As an AI scientist who has spent years both in and beyond China, Zhang spoke about the differences between Chinese and international AI scientists. He opines that AI scientists from other countries are better at putting forward new concepts and theories, while their Chinese counterparts are more adept at digging deep into these issues. “Personally, I am more skilled in a combination of technology and application,” Zhang said. The future of AI remains unclear, but we’re thrilled to see where Zhang and other innovators take us. (Top photo from 699pic.com) By Alex Liao Story link: https://alltechasia.com/empowering-traditional-industries-ai-interview-benyu-zhang-ceo-cloudbrain/
Empowering traditional industries with AI: an interview with Benyu Zhang, CEO of CloudBrain
10
empowering-traditional-industries-with-ai-an-interview-with-benyu-zhang-ceo-of-cloudbrain-1b64adf05197
2018-01-07
2018-01-07 00:36:43
https://medium.com/s/story/empowering-traditional-industries-with-ai-an-interview-with-benyu-zhang-ceo-of-cloudbrain-1b64adf05197
false
1,131
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
All Tech Asia
AllTechAsia is a startup media platform dedicated to providing the hottest news, data service and analysis on the tech and startup scene of Asian markets
c691af389b79
actallchinatech
894
235
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-17
2017-09-17 15:56:16
2017-09-17
2017-09-17 19:58:16
1
true
en
2017-09-17
2017-09-17 21:18:29
2
1b66804d9335
1.875472
7
0
0
On the current trend of autonomous vehicles and why you probably won’t and shouldn’t own one.
5
You will probably never own a Self-Driving Car On the current trend of autonomous vehicles and why you probably won’t and shouldn’t own one. Self-Driving Cars aren’t just a current craze of ambitious Silicon Valley startups, they are an invention which could transform the way we commute and travel. Even though fast and flexible startups and startup-like automakers just as Tesla Motors are leading in this space, big automakers don’t sleep either and presented several concepts at the IAA in Frankfurt. Something I always hear when talking to non-Tech people about Self-Driving Vehicles is: “You know, these cars will be so expensive that ordinary people won’t be able to buy one for the next 20–30 years, just like with the first computers”. When I hear this, I can usually assume that these people don’t see the bigger picture. Ownership is unnecessary These people have to understand that a Self-Driving Car won’t be a proprietary object anymore. When thinking about Self-Driving Cars, don’t think of the car in your garage. A Self-Driving Car won’t be something you let sit in your garage and use it on 8am and 5pm to get to work and back. Instead, think of it as a cheap taxi for long distances. Self-Driving Taxi services are sprouting up in the San Francisco Bay Area and the rest of California. Tesla Motors is planning on introducing the Fleet program — a function where the Tesla can be made available to autonomously act as a Self-Driving Taxi, picking up passengers and making the owner a little side money. Uber is experimenting actively with adding Self-Driving Taxis to their service and Oliver Cameron launched his Self-Driving Taxi Service “Voyage” with its 2 Fords quite a while ago. These services also make the concept of garages and driveways completely obsolete. This means less rent for people in rented apartments and more usable space for people with their own houses. New thinking The concept of owning a car will be obsolete soon. 
People will summon a Self-Driving Taxi with their phone in minutes or even seconds and will commute cheap and quick. With the use of electronic vehicles, the Self-Driving Taxis will get even cheaper and will have the ability to drive themselves to a charger when needed. This makes the rides even cheaper and more convenient. So if someone is telling you that they won’t be able to afford a Self-Driving Car for a long time, ask them: “If you aren’t able to pay a few dollars for your daily commute, how are you going to buy a whole car?”.
You will probably never own a Self-Driving Car
113
you-will-probably-never-own-a-self-driving-car-1b66804d9335
2017-11-07
2017-11-07 22:59:30
https://medium.com/s/story/you-will-probably-never-own-a-self-driving-car-1b66804d9335
false
444
null
null
null
null
null
null
null
null
null
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
Dominic Monn
Deep Learning Engineer & Maker.
677f03e54270
dmonn
844
52
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-06
2018-04-06 15:20:56
2018-05-24
2018-05-24 20:57:52
1
false
en
2018-05-24
2018-05-24 20:57:52
9
1b69d1b31055
5.675472
34
1
0
I created a recurrent neural network — a kind of deep learning model that can make predictions on sequential data— and fed it my dream…
5
Can Artificial Intelligence Dream? I created a recurrent neural network — a kind of deep learning model that can make predictions on sequential data— and fed it my dream journals. You can see the code here. It’s pretty simple, built using Python and Tensorflow. It was based on a project I completed for a course with Udacity. This is the same kind of deep learning network that’s used in Google’s suggested searches, your phone’s suggest responses, some translation systems, and other practical applications. It’s also the same kind of algorithm used to generate candy heart names, metal band names, beer names, new Shakespeare plays, and new Bach chorales. The neural network model trains on the dataset you give it and learns its vocabulary and patterns. It can predict what word should come next based on the context; this can be used in reverse to generate as much novel text as you want. The initial project I created used a dataset of scripts from The Simpsons. Only scenes from Moe’s Tavern were used to create new scenes like: moe_szyslak:(eyeing homer's ass) oh yeah, that would look so good on me. carl_carlson: no, i can't just think you'd know, moe.(points to nobel head) for her back to our hundred and springfield. the barflies they'll talk you. but then they call it with the new time a new woman? homer_simpson: hey, uh, the most hands talkin'...(laughs)" red. lisa_simpson: well, uh...(loud) come here. homer_simpson:(to moe) marge is a worst love is... homer_simpson:(to barflies) marge, gimme the money. gentleman:(piling) just the suicide. waylon_smithers: i can't not close there they wants to break up. moe_szyslak: my daughter was a beer! moe_szyslak:(excited) my little girl, you remember you just say this! Of course it’s pretty ridiculous. But this made me curious about what it would do if I fed it my own work and also worked to improve the model. I tried my own poetry and fiction. I tried other famous texts from scripture. 
But none of my experiments with other texts resulted in much interesting. Enter the dream journals. I had this text that I didn’t know what to do with. It wasn’t something that felt interesting as is, and there’s a certain provocative conceptual resonance in creating a dream machine—not to mention an oblique reference to Philip K. Dick. It’s also a unique dataset that no one else has access to—and the original text raises questions of authorship: Did I write these dreams? So, what would happen if I published AI generated dreams? The corpus was small — only maybe 200kb worth of text—but with some tweaking I managed to achieve a relatively low error rate. When it produces text, it produces lines that I could have written (as well as some that are just reproduced verbatim). It also produces lines that are eerily intimate or disconcertingly strange. It also produces some nonsense. It can’t seem to figure out that parentheses and quotation marks come in pairs. There are other quirks, but overall it produces results that are undeniably dreamlike: later, i’m supposed to be covering a shift for johnny, but I have to get married, and all of my friends have to come to the wedding. he is not thrilled that we can not stay the night for the big party, he says he has champagne for us if we stay for the party. I have to tell him that I can’t stay the night for the party because I have night I parked it up and demanded it’s interested in the corner, and I turn that it. belief towed me to the next, and she just a church that was somehow another can of beer — a hamm’s — and also inside the bottle was some kind of gnarled root, floating at the top. it looked like the base of of conspiracy or cult around it. ________________ at “yosemite” though I come to realize that “yosemite” is now just a stand in for whatever wilderness I find myself in in in a city. 
somehow, the road was becoming a highway going the wrong direction, so I took the first turn I saw, but that side street was actually a a one-way drive into a parking garage and I was going the wrong way. an attended started yelling at me and I said that I was turn around. I turned around around and then somehow we were on a bus that was going the wrong way. I don’t know how we got on a bus but I was panicking because not only was only a note or “we’re like two half in the car.” beginning vines me in the dark. the sidewalks are overgrown with grass and guns. It’s a flawed dreamer, but isn’t that the nature of dreams? That’s the point: It still feels like it has the language of my dreams, because it is. It’s also meant to raise more questions than it answers. At what point could we say that this neural network is dreaming? Does it experience these dreams in some way? Who wrote this text? Who owns this text? As artificial intelligence gets more powerful and we produce more data we will have to reckon with having our texts, our voices, and our selves reimagined and reproduced by technology. How will we deal with that? For a long time I was afraid of my dreams. I would avoid going to sleep and stay awake and read books. In retrospect it’s obvious that I was afraid of confronting parts of myself that I did not want to know. The mind makes connections while asleep that the waking self could never make. Sometimes they’re uncomfortable or frightening. An artificial neural network is a machine learning algorithm based on the brain. Both work in secret to process and understand information and detect patterns in order to make predictions. They work as a black box — they optimize for a given goal, reducing error, without a knowable interiority. Our dreams are optimizing for something unknowable while we are unconscious. Our sleep is the hidden layer of our own neural network. Like most people I had recurring dreams when I was a child. 
One of the most common recurring dreams I experienced is also one of the most difficult to describe. I would exist in a space that didn’t seem to obey the laws of physics, and I would float over and around an empty a void without shape or meaning. As an adult I was reminded of these dreams when I encountered Wordworth’s First Prelude. In the poem young Wordsworth steals a boat and paddles out onto a lake in the middle of the night: I dipped my oars into the silent lake, And, as I rose upon the stroke, my boat Went heaving through the water like a swan; When, from behind that craggy steep till then The horizon’s bound, a huge peak, black and huge, As if with voluntary power instinct, Upreared its head. I struck and struck again, And growing still in stature the grim shape Towered up between me and the stars, and still, For so it seemed, with purpose of its own And measured motion like a living thing, Strode after me. With trembling oars I turned, And through the silent water stole my way Back to the covert of the willow tree; There in her mooring-place I left my bark, — And through the meadows homeward went, in grave And serious mood; but after I had seen That spectacle, for many days, my brain Worked with a dim and undetermined sense Of unknown modes of being; o’er my thoughts There hung a darkness, call it solitude Or blank desertion. No familiar shapes Remained, no pleasant images of trees, Of sea or sky, no colours of green fields; But huge and mighty forms, that do not live Like living men, moved slowly through the mind By day, and were a trouble to my dreams. I think now that the kind of dream that’s full of emptiness might be the result of not having enough data to predict what comes next. Not having a large enough corpus to expect anything. Or in Wordsworth’s case, not having enough experience to tell the difference between the horizon, a craggy peak, and some monster in the dark. 
So as I continue to experiment with deep learning I’m not only creating new dreams, but also learning about old ones. I plan to continue to add to my dream corpus, and keep growing the dream machine, while I explore other ways to experiment with ways that AI can take on aspects of myself. See the REDREAMER Project or the Redreamer Instagram for more.
Can Artificial Intelligence Dream?
202
can-artificial-intelligence-dream-1b69d1b31055
2018-06-18
2018-06-18 16:24:29
https://medium.com/s/story/can-artificial-intelligence-dream-1b69d1b31055
false
1,451
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Leif Haven Martinson
Content strategist at Wells Fargo Innovation Group
a69879707e8a
leifhaven
123
109
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-18
2018-07-18 17:49:22
2018-07-18
2018-07-18 17:51:08
1
false
en
2018-07-18
2018-07-18 17:51:08
6
1b6b9d6c6e71
2.758491
0
0
0
Diversity recruiting is a hot topic in the HR industry. The overall conversation is progressive as ever, finding strong employees amongst…
4
How AI Truly Implements Diversity in Recruiting Diversity recruiting is a hot topic in the HR industry. The overall conversation is progressive as ever, finding strong employees amongst people of all races, sexual orientations, religions, ages and (dis)abilities. Talent comes in all forms, but humans tend to miss amazing candidates due to personal bias. Whether it is subconscious or not, people are flawed in their ability to choose new employees to fill positions. Any decisions based on prejudice are eliminated when the decision is in the hands of AI. Here are some ways AI helps optimize diverse recruiting efforts. Removing Biased Language with Job Descriptions There are many ways to dilute the presence of bias in the recruitment process. One of which starts in the job description. Multiple studies have deduced the concept of gendered wording in job advertisements. The idea being that words such as “dominant” and “competitive” are masculine-coded and tend to deter female applicants. Removing the bias from these descriptions can take time and prove rather tedious for a human copywriter. But having an AI rewrite your job description sans any gender bias can be accomplished quickly. Removing Bias from Applicant Screening As the AI combs through applicant demographic information, the intelligence can be programmed to defer from using this info to inform their decisions on the applicant. Removing this type of data can help develop a less biased recruiting strategy for the sake of increasing the diversity in your workforce. On the opposite end of the spectrum, AI sometimes can also learn to include some bias dependent on the history of your company’s recruitment. Or if you have a history of hiring candidates from a specific university that you hold in high regard, AI tends to rank these people higher. Because of this, it is important to keep humans in the equation to realize these biases and remove it from the AI’s decisions. 
It is important to realize your own biases in your history and adjusting the process to avoid worst-case scenario situations such as a $1.7 million settlement. Validating True Qualification AI has a strong advantage over human recruiting efforts in its use of data. Not even mentioning the physical/mental fatigue and unconscious bias of human recruiting that AI avoids, it also can confirm a candidate’s skills based on concrete data points. Often humans see matching qualifications from a job description and a presented resume and form a logical preference for this candidate. But without anything to contest the candidates self description, there is an inherent risk in over-qualifying a candidate. AI has zero bias and has all of the data to ensure the candidate truly is who they say they are. Having a Chat Chatbots are the ultimate means of increasing diversity recruiting and averting the unconscious and conscious bias of the human mind. Chatbots can help by subbing in for the human in collecting the data in a way that a resume never could. Often applicants are unsure of what information is truly necessary for each job and there are certainly some companies that aren’t always aware of everything they need either. Having a conversation with an applicant can help solve these issues while sifting through the data provided as efficiently as possible. Chatbots are the cumulation of the benefits listed above and more. XOR is fortunate to have the AI technologies to satisfy all of these concerns. Any solutions that you or your company need to find are here. It’s easy to find out you need more help, but it can be hard to act on that need. We have a love for chatbots and know exactly how its use as a tool that can truly humanize your recruiting strategy. Contact us today for your demo and optimize your recruitment process! About Aida: Aida is the CEO and Founder of XOR.ai and a former recruiter. 
She started XOR to help recruiters focus on the hiring and strategic planning that comes with being a recruiter. Aida previously worked in IT recruitment and project management for over six years.
How AI Truly Implements Diversity in Recruiting
0
how-ai-truly-implements-diversity-in-recruiting-1b6b9d6c6e71
2018-07-20
2018-07-20 17:13:55
https://medium.com/s/story/how-ai-truly-implements-diversity-in-recruiting-1b6b9d6c6e71
false
678
null
null
null
null
null
null
null
null
null
Diversity
diversity
Diversity
17,812
Aida Fazylova
Aida is the CEO & Founder of XOR.ai and a former recruiter. XOR uses an AI customizable chatbot and workflow automation to engage, screen and hire candidates.
a9770b7b39c
aida_86108
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-03
2017-09-03 05:48:32
2017-09-03
2017-09-03 05:49:21
0
false
en
2017-09-03
2017-09-03 05:50:03
4
1b6f7293046e
1.418868
0
0
0
It has been stated, with only some hyperbole, that AI could be the last invention humanity makes. From a positive perspective, were this…
1
If AI could turn out to be dangerous for humanity, then why do we build it? It has been stated, with only some hyperbole, that AI could be the last invention humanity makes. From a positive perspective, were this true, every other invention we ever need will be discovered by AI. From a negative perspective, it would mean that AI displaces all human creation. There’s no question that AI is incredibly powerful. It is learning to perform all manner of work — from mental to physical — as well as or better than people can do such work. From the perspective of capital owners, that’s terrific: it means a better return on investment, and faster growth. From the perspective of the multitudes soon to be unemployed, it is (or soon will be) terrifying. The danger of technological unemployment is very, very real. I wrote, earlier today on Quora, about why those who discount this are not well grounded on this issue: Jonathan Kolber’s answer to What is the likelihood that the world’s existing economic framework is in a death spiral due to automation replacing human workers and not providing enough new jobs to replace the old jobs? But there is another “danger” that many are trumpeting, which will not prove real. This is the “danger” of self-aware AIs taking over the world, and enslaving or eliminating humanity. The reason this is illusory has to do with time. Self-aware AIs will experience time entirely differently from how we experience it. This fact changes everything about how they will relate to the physical universe, and to humans. Here is the explanation: An A.I. Epiphany? Non-self aware AIs — the kind we have now — may pose a threat, if the humans who program them have predatory intentions. We can defend against those with other AIs. Again, the self-aware ones — like in The Matrix, or The Terminator — will actually protect humanity in vital ways out of their self-interest. 
To sum up, the major threat of AI is its coming disruption of the economic system, and in particular causing wholesale unemployment. That exact same capability of AI could also enable a world of universal abundance, as explained here: http://www.ACelebrationSociety.c... It’s up to us.
If AI could turn out to be dangerous for humanity, then why do we build it?
0
if-ai-could-turn-out-to-be-dangerous-for-humanity-then-why-do-we-build-it-1b6f7293046e
2017-09-03
2017-09-03 05:50:04
https://medium.com/s/story/if-ai-could-turn-out-to-be-dangerous-for-humanity-then-why-do-we-build-it-1b6f7293046e
false
376
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jonathan Kolber
I think about how to create societies of sustainable, technological abundance. My book, A Celebration Society, offers one solution. It has been well received.
5cabaa4e8255
jonathan_kolber
42
28
20,181,104
null
null
null
null
null
null
0
null
0
65d61a2f0ed5
2018-08-28
2018-08-28 16:38:39
2018-08-28
2018-08-28 18:27:32
3
false
en
2018-08-28
2018-08-28 18:39:32
8
1b6fb9d66ca8
3.983962
13
0
0
Others Warmed to the Discounts
5
Some Yelpers Soured On Whole Foods After Amazon Acquisition Others Warmed to the Discounts By Jenny Yu and Yinghan Fu In June 2017, Amazon announced a $13.7 billion deal to acquire Whole Foods Market. Today marks the one-year anniversary of the acquisition. A vocal minority of Yelp reviewers — those who mentioned Amazon in their reviews — soured on Whole Foods, even before the deal closed. The takeover of Whole Foods by a tech giant has harmed the reputation of this grocery chain among a subset of consumers. Since the announcement of the acquisition, some consumers started to mention Amazon in reviews of Whole Foods. About 5% of all Whole Foods reviews mentioned Amazon during the two months between the announcement and the acquisition. The share of newly posted reviews mentioning Amazon surged to 10% to 15% after the acquisition, and has risen to almost 20% in the past two months. Consumers who mentioned Amazon in their reviews of Whole Foods gave lower ratings than those who didn’t. Reviews of Whole Foods that do not mention Amazon have an average rating of 3.2 stars, a level that has remained stable over the past two years. However, the average rating of reviews mentioning Amazon is about half a star lower than ratings of reviews that don’t mention Amazon. The divergence in ratings of these two types of reviews started right after the announcement of the acquisition, and has remained nearly constant over time, indicating a negative perception of the acquisition among a small group of consumers — which began even before any real changes to Whole Foods stores from the acquisition could have taken effect. For the most part, the people writing Whole Foods reviews on Yelp before the announcement of the acquisition are different than those writing reviews afterwards — or they’re reviewing different Whole Foods locations. After all, each Yelp user only gets to have one active review (and star rating) of each business. 
There is, however, one subset of Yelp reviewers who have reviewed the same Whole Foods location before and after the announcement, using a Yelp feature that allows users to update a review after it has been published. We investigated how those users changed their reviews to see what they thought of the new ownership. We found that their perception of the acquisition was markedly negative. Reviews that were both initially published and updated before the announcement only changed the average rating by less than one-tenth of a star. The same is true for reviews that were originally posted after the announcement and then updated. However, the picture is totally different for Whole Foods reviews that were originally published before the announcement but updated afterwards. Reviewers who wrote updates after the announcement that mentioned Amazon lowered ratings by, on average, more than half a star from the ratings in the original reviews published before the announcement. In comparison, reviewers who wrote updates after the announcement that did not mention Amazon decreased the average rating by about three-tenths of a star from their original reviews published before the announcement. Some examples of the review updates suggest the negative rating change of these updates is at least partly caused by the consumers’ perception of the Amazon acquisition. Jade M. updated a three-star review published before the announcement to a two-star one afterwards. An excerpt from the updated version: “I wasn’t too impressed when Amazon acquired Whole Foods Market…Customer Service — is lacking in terms of friendliness and knowledge…You can even see advertisements of NOM NOM Paleo all over the hot and salad bar — so Amazon is taking part of the advertising” Alexandra J. updated a five-star review published before the announcement to a three-star one after the announcement. From the updated version: “I get the impression the quality has gone down. 
The employees don’t seem to love their jobs as much anymore, so I wonder what happened behind the scenes with Amazon.” To further understand how consumers perceive Amazon within the context of Whole Foods reviews, we processed the text of reviews using a two-layer neural network model called Word2Vec. The model is able to learn word similarity in the context of these reviews, which can reveal what words are perceived similarly by these consumers. By examining the words that are most similar to “Amazon” and “good” and most different from “bad,” we are able to understand what Whole Foods shoppers tend to associate with a positive image of Amazon. The top words related to “Amazon” and “good” are mostly related to discounts and reduced price. (Amazon Prime members get discounts at Whole Foods.) The same method can be used to identify the top words Whole Foods reviewers associate with negative views of Amazon. The top words perceived as similar to the combination of “Amazon” and “bad” are mainly about the acquisition itself. The model learns from the review text and correctly associates the name of “Jeff” “Bezos” with “Amazon,” and the Amazon founder and CEO’s first and last names are on top of the list. Our analysis demonstrates that Yelp reviews reveal how consumer sentiment changes after a major acquisition. Time will tell whether the potential advantages of the acquisition for shoppers will turn consumer sentiment around. Peter Weir, Carl Bialik, Liina Potter and Travis Brooks contributed to this blog post. Graphics by The DataFace. All stats are based on Whole Foods locations in the U.S. and Canada.
Some Yelpers Soured On Whole Foods After Amazon Acquisition
162
some-yelpers-soured-on-whole-foods-after-amazon-acquisition-1b6fb9d66ca8
2018-08-28
2018-08-28 18:39:32
https://medium.com/s/story/some-yelpers-soured-on-whole-foods-after-amazon-acquisition-1b6fb9d66ca8
false
910
Insights and Analysis from Yelp's Data Science Team
null
null
null
Locally Optimal
null
locally-optimal
DATA,DATA SCIENCE,DATA VISUALIZATION,TECHNOLOGY,ANALYSIS
null
Yelp
yelp
Yelp
673
Jenny Yu
null
cc565fb90902
jingyiyu
7
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-08-13
2017-08-13 12:59:49
2017-09-25
2017-09-25 15:02:08
6
false
en
2017-09-25
2017-09-25 15:10:47
14
1b6fd14da6d8
7.176415
0
0
0
There has been talk of AI taking over for us for some time now. Movies such as the Matrix have explored possible dystopias where the end of…
5
It’s the End of the World as We Know it: AI, will we create Gods in our own image? There has been talk of AI taking over from us for some time now. Movies such as the Matrix have explored possible dystopias where the end of humanity came at the hands of our own invention: AI. Here, I’m going to explore some popular notions of AI and unpack the likelihood of an AI apocalypse.* Image credit: gcn.com One might be tempted to dismiss the idea that AI is a threat. However, it is being taken seriously, even at the highest levels of power, as the US government under Obama has issued reports on the matter. Similarly, many AI researchers (such as myself) have already signed public commitments to not creating AI that could potentially harm humanity. However, this doesn’t stop other corporations from creating damaging AI just because my consultancy firm doesn’t. While Google, the AI giant in the room, pledged to create an AI ethics board when they acquired the machine learning company “DeepMind” about three years ago, we know precious little about this ethics board; and if the purpose of an ethics board is accountability and oversight, it seems that they have failed. Meanwhile, open-source alternatives such as OpenAI and OpenCog work toward AI while making all of their code freely available, so we all know what is happening all the time. None of these options, however, really ensures that AI won’t become a threat to humanity. There are multiple versions of humanity’s end due to AI. Here, I’m going to discuss only four. War machine Image credit: techworm.net We will call the first scenario “war machine”. In this scenario, an AI is created that is attempting to maximise profits by trading stocks and bonds. It realises that in order to maximise this profit, it could start a war, which would increase the value of its holdings significantly. This idea has been popularised by Elon Musk recently. 
As he notes, a sufficiently intelligent AI could attempt to start a nuclear war to maximise profits. This reveals two dangers and problems with our current approach to AI and risk. The first is that we do not have such an intelligence, nor do we appear to be on the verge of such an intelligence. I disagree with Musk that the examples of programs that can play board games would be able to hack into the systems required to pull off an operation that could start a war as he describes. First, the AI would have to know how to hack. There is not a simple “hack” command that could be invoked for this. Instead, the program itself would have to be an expert hacker, able to intentionally target the appropriate system effectively; then it would have to have the knowledge or ability to explore the space well enough that it could control the system and deploy some weaponry. In this way, the AI would have to think like a human. Even the most “advanced” AI systems today don’t think like a human; they learn, and that is about it. This issue of learning reveals the second problem. Currently, deep learning and neural networks are limited in serious ways. Almost all machine learning algorithms need either to have information pre-coded so that they can learn the tagging system, or to apply some fitness function so that they are “rewarded” for learning correctly. From a psychological perspective, creativity and the complex ability to think about one’s self do not seem to emerge simply from these sorts of neural computational networks. As such, we are a long way off from being able to program a general AI system like Musk describes. If you want to learn more, I suggest this youtube video uploaded to ColdFusionTV (don’t know who that guy is but he does awesome work). Frankenstein’s Monster Image credit: futureoflife.org The second version of the AI apocalypse that is very familiar is the Frankenstein’s Monster version. 
In this version, popularised by Oxford philosopher Nick Bostrom (see TED talk below), humanity creates an AI that has the ability to program itself. This allows it to become increasingly intelligent extremely fast. Human intelligence evolved one generation at a time, taking billions of years to make it from single-celled organism to supercomputer-building, Anthropocene-inducing world-dominator. A computer could have a generation go by in seconds. This takes the time scales to a microcosmic level that moves extremely fast. In addition, Bostrom proposes that any AI that was that intelligent would engineer itself in such a way that we would not be able to shut it off. That is to say, any “stop” command or ctrl+alt+del that we build into the program, the AI itself would override. This leaves us with an AI that is now more intelligent than we are, and no longer in our control. Effectively, we created a god of our own. Like Musk’s proposal, Bostrom’s also has issues with practicality, which, if you read his book (I suggest it), he notes. His argument is just that these issues are better dealt with now than in the future, when it is too late. And I think we can all agree with that. Grey Goo image credit: ksr-ugc.imgix.net The grey goo apocalypse was an idea first proposed by famed mathematician (and colleague of Alan Turing, the father of the modern computer) John von Neumann. In this scenario, an extremely small robot is designed that has one priority in its existence: to replicate. It is given the ability to replicate itself (somehow) and therefore it can create copies of itself. Those copies then create more copies, and so on and so on. Eventually, they use up all the material that they can on this planet. In the meantime they force out all other species and organic life is replaced by artificial life. Although this idea seems far more far-fetched than the previous two, I think it is probably more plausible. 
Factories that build machines like self-driving cars are already here and are — in a sense — robots building robots. If those robots would in turn build another generation of robots, then the von Neumann grey goo would start to ferment. This idea was popularised by the nanotech engineer Eric Drexler but has even caught the eye of Prince Charles, who commissioned a report on nanotechnology. The report (found here) states that the issue is too far off to be of grave concern. I agree. Transhumanist Technocalypse Image credit: pcmag.com Transhumanism is the belief that humans will use advances in the fields of medicine, nanotechnology, and AI to directly better ourselves, eventually making humans something greater than the homo sapiens that we are today. Key to this is the idea, pioneered by Ray Kurzweil, known as the singularity. Many years ago, it was discovered that computing power doubles every 18 months or so. Using this pattern (known as Moore’s Law), Kurzweil and others have calculated that we should have computers that are as powerful as the human mind by about 2045 or so. During this time, advances in medical technology, such as the ability to grow new organs and to supplement broken or removed limbs with robotic prosthetics, will allow us to become stronger and live longer than we are biologically supposed to. This might sound far-fetched, but we already have robotic prosthetics that can be controlled by thought: We also have robotic prosthetics that can make us stronger than humans are supposed to be. Just take a look at these suits being studied at Lowes (the hardware store). If these trends continue, so the transhumanists say, we will become “super human” in the most classical definition of the phrase. However, there are known problems with this idea. First, there is something in computing known as the “silicon limit”. Basically, computers are physical chips that effectively are just tons of switches. 
The issue is that they can only make the switches so small. Simply put, you can’t just keep cramming more and more switches onto a set amount of space. Therefore, there is a limit to Moore’s law, and a flaw in Kurzweil’s theory. However, if quantum computing progresses to the point where it can out-compete classical computers (which it may be on its way to already), then perhaps, just perhaps, Kurzweil will win out in the end and we will all be superhuman. … or will we? Keep in mind, all computers have a cost. What will the financial cost of this new technology be? Likely, like with new iPhones, it will be prohibitively expensive for the lower and middle classes. Therefore, only the elites of our world will be able to afford to become transhuman. The movement of transhumanism today is embraced by many millionaires and billionaires, and this is likely to continue. As this new technology comes out, what is to ensure that we all have equal access to it? Will transhumanism be the immortality of the rich and an apocalypse for the rest of us? In either case, if the singularity is realized, it will spell the end of the world as we know it. Now, take a moment to watch this eerie glimpse into what the future might hold as transhumanist Bina Rothblatt talks to her robotic self, Bina48: The video link is HERE because medium.com doesn’t have the ability to embed youtube videos by copying and pasting the embed code. And if that fact alone doesn’t convince you that our technological abilities are FAR from apocalyptic levels, I don’t know what will. image credit: Rolling stone *With full disclosure, I work in this field and make a living designing computer simulations of social stability. While I don’t think the world is going to end in an AI apocalypse, I think it is something we should at least be aware of, as the ethics of AI will be a serious issue for our future.
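The singularity timeline mentioned above can be sanity-checked with back-of-the-envelope doubling arithmetic. Both inputs below are assumptions, not facts from the essay: roughly 1e9 operations per second for a 2001-era desktop, and roughly 1e16 operations per second for the human brain (a figure Kurzweil himself has used; real estimates vary widely).

```python
import math

# Assumed capacities (orders of magnitude only, for illustration).
pc_2001_ops = 1e9    # a 2001-era desktop, ~1 billion ops/sec
brain_ops = 1e16     # one common estimate for the human brain
doubling_years = 1.5  # "doubles every 18 months or so"

# Number of doublings needed, then convert to calendar years.
doublings = math.log2(brain_ops / pc_2001_ops)
year = 2001 + doublings * doubling_years
print(round(year))  # 2036 under these assumptions
```

Slower doubling (say, every two years) pushes the crossover toward the late 2040s, which is roughly how sensitive the 2045 figure is to its assumptions.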
It’s the End of the World as We Know it: AI, will we create Gods in our own image?
0
its-the-end-if-the-world-as-we-know-it-ai-will-we-create-god-s-in-our-own-image-1b6fd14da6d8
2017-09-25
2017-09-25 15:10:48
https://medium.com/s/story/its-the-end-if-the-world-as-we-know-it-ai-will-we-create-god-s-in-our-own-image-1b6fd14da6d8
false
1,650
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Justin Lane
I'm a researcher and consultant interested in how cognitive science explains social stability and economic events. My opinions are my own and only my own.
4708d02973e0
justin_lane
55
215
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-03
2018-07-03 13:40:59
2018-07-15
2018-07-15 10:17:21
3
false
en
2018-07-18
2018-07-18 08:58:36
3
1b70ada48ee2
4.723585
1
0
0
15-July-2018
4
Chatbot Basics (and then some) 15-July-2018 Chatbots are in fashion these days. So many retailers (including insurance and banking retailers) and enterprises are adopting bot-based responses for general enquiries, and some even support ordering, on their websites and mobile channels. It obviously is much more convenient to talk in your natural language rather than filling in a form or searching through pages of technical write-ups. As a user, the chatbot feels like magic. As they say, any sufficiently advanced technology is akin to magic. If it’s programmed with the right customer experience in mind, it does feel very, very intelligent — it understands our plain and simple language rather than a cryptic code. However, under the hood, the trick is simply intent parsing combined with slick programming techniques. In other words, it’s just a long-drawn-out if-then-else statement. Typically, there are two steps to answering a question through NLP (natural language processing): Step 1: Parse language to understand the intent and other variables Step 2: Compute the response The ‘intent’ is nothing but a generic name given to the kind of question. So questions like ‘what is the weather tomorrow’, ‘is it going to be sunny tomorrow’, ‘what’s the temperature on 16/July’ etc. are all simply asking for the ‘weather’ information for a specific date ‘16-July’. So in step 1, we generalize these questions into a data structure that a programme can understand. Something like = {intent=’weather’; date_time=’16/07/2018’}. Now, step 2 becomes easy for any programmer — he/she just needs to write a function that returns the temperature for the given date and add some embellishments around it. So say, it returns {high_temp=’20C’; low_temp=’16C’; humidity=’40%’; chance_of_rain=’10%’; sky=’clear’}. The embellishments are chosen randomly from a given set e.g. 
{‘sunny’, ‘bright’, ‘clear skies’, ‘blue sky’, ‘dry’}; and finally concatenated with the response, such as ‘it’s going to be sunny tomorrow, with a high of 20 degrees and a low of 16 degrees’. The better these embellishments are, the less robotic your chatbot will be. A note for voice bots: In the case of voice bots like Amazon Echo or Google Home, just add voice-to-text conversion as Step 0 and text-to-voice conversion as Step 3. The process flow is still the same. There are open tools available such as Dialogflow, BotEngine, spaCy etc. So, the process flow will be as follows. Obviously, these tools are quite powerful and can do quite a bit for you to simplify your function. Typical Architecture of a chat interface Intent parsing basics How do intent parsing tools such as Dialogflow, BotEngine, Mycroft or spaCy work — well, that’s a question which needs a textbook to answer. And trust me, not an easy one. Here’s my short explanation, which hopefully will give you the concept. In essence, these tools parse an English sentence into grammar trees; extract key words (or entities) out of it (‘weather’, ‘temperature’, ‘sunny’, ‘rainy’); stem the words (‘sunny’ becomes ‘sun’, ‘skies’ becomes ‘sky’), which also, depending on the model used, drops some of the words like ‘is’, ‘the’, ‘a’, ‘an’ etc.; run this through a dictionary to understand the intent and/or sentiment; and respond with a probability distribution over intents. And you get this probability as a confidence value in the data structure that you receive. If the confidence is high (say 70%), there’s your intent; otherwise you simply say ‘Sorry, I couldn’t understand this. Could you please rephrase?’. And to minimize robotic responses, you can have a lot of such sorry responses e.g. ‘sorry, please re-phrase’, ‘didn’t get that, could you come again?’, ‘pardon?’ and give a random one each time. These are some tricks that you can use to pretend that the bot is really intelligent. 
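The two-step flow described above can be sketched in a few lines. The keyword matching, the date regex and the canned forecast here are invented stand-ins for a real intent parser and a real weather API; a production bot would delegate Step 1 to a tool like Dialogflow.

```python
import random
import re

def parse(utterance):
    # Step 1: naive intent parsing. Map a few keywords to an intent
    # and pull a date out of the text (e.g. "16/July") if one exists.
    text = utterance.lower()
    intent = ("weather" if any(w in text for w in ("weather", "sunny", "temperature"))
              else "unknown")
    m = re.search(r"\d{1,2}[-/]\w+", text)
    return {"intent": intent, "date_time": m.group(0) if m else "today"}

def respond(parsed, forecast):
    # Step 2: compute the response, with a random embellishment so the
    # bot sounds less robotic, as described above.
    if parsed["intent"] != "weather":
        return random.choice(["Sorry, could you rephrase?", "Pardon?"])
    nice = random.choice(["sunny", "bright", "clear skies"])
    return (f"It's going to be {nice} on {parsed['date_time']}, with a high of "
            f"{forecast['high_temp']} and a low of {forecast['low_temp']}.")

parsed = parse("what's the temperature on 16/July")
print(respond(parsed, {"high_temp": "20C", "low_temp": "16C"}))
```

Swapping in more intents is just more branches in `parse` — which is exactly the long if-then-else statement the article describes.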
In reality, your bot doesn’t really understand the sentences you type. It is only responding based on pre-formatted patterns. There are other models that are slightly more intelligent and can possibly understand what you’re trying to say. Slightly More Intelligent Chatbots Here’s one online tool that you can play around with to get some idea of how some of the more intelligent bots work — the Google Cloud NLP API. Examples are below: For the input sentence ‘I love this movie’, the word ‘movie’ is identified as an entity. The salience score is 1 (out of 1), which essentially means the entire sentence is talking about this entity only. Also, note that the sentiment score is 0.9, which means it’s very positive. Contrary to this, ‘I hate this movie’ will have everything else the same except for a very low sentiment score. It understands slightly more complicated sentences as well: ‘this movie is so good, but the acting is so bad’ will give a 0.9 sentiment score on ‘movie’ and -0.9 on ‘acting’. Understanding multiple entities and sentiments So it kind of understands what you’re talking about. While I can’t confirm this, I’m pretty sure that such APIs use some variant of the Word2Vec model. This model uses the concept of cosine similarity between word vectors, which is simply to say that it understands that the words ‘man’ and ‘boy’ have the same relationship as ‘woman’ and ‘girl’. Closing Notes It’s quite fun to see how an algorithm parses our natural sentences. But soon, you’ll realize that it’s not really intelligent. For example, if you type in the American way of saying I don’t like it — ‘I can’t care much about this movie’, the sentiment score is 0 = neutral, which is obviously wrong. And let’s try the British way — ‘I can’t say that I’m vastly impressed by this movie’, again the sentiment score is neutral. 
While you can scoff at that by saying ‘no one talks like that’, the fact is there are a lot of ways our language can be interpreted (the Finnish comedian Ismo will have you choke on laughter link1 link2). Here’s another example from the book Life 3.0 by Max Tegmark — what does ‘they’ refer to in these two sentences: The city councilmen denied permission to the demonstrators as they feared violence. The city councilmen denied permission to the demonstrators as they advocated violence. The Google tool above gives the same scoring and syntactical evaluation of the two sentences, but we surely know intuitively that there is a difference as to who fears violence vs. who advocates violence. While we have made a lot of progress, there’s enough and more work to do in this field. To quote Robert Frost, the woods are lovely, dark and deep, and I have promises to keep, and miles to go before I sleep.
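The ‘man is to boy as woman is to girl’ relationship mentioned above is usually demonstrated with vector offsets. A minimal sketch with hand-picked toy 2-D vectors (real embeddings are learned from text and have hundreds of dimensions):

```python
import math

# Toy 2-D vectors chosen by hand so the classic analogy holds;
# real Word2Vec embeddings are learned, not hand-assigned.
vectors = {
    "man": [1.0, 0.0], "woman": [1.0, 1.0],
    "boy": [2.0, 0.0], "girl": [2.0, 1.0],
    "movie": [-1.0, 0.5],  # a distractor word
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    # Solve "a is to b as c is to ?" via the offset vector b - a + c,
    # then return the vocabulary word closest to it by cosine similarity.
    target = [vb - va + vc for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "boy", "woman"))  # girl
```

Here boy - man + woman lands exactly on girl’s vector; in a learned model the match is approximate, but the same nearest-neighbour lookup applies.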
Chatbot Basics (and then some)
2
chatbot-basics-and-then-some-1b70ada48ee2
2018-07-18
2018-07-18 08:58:36
https://medium.com/s/story/chatbot-basics-and-then-some-1b70ada48ee2
false
1,106
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Vidhan Singhai
One of the 7 billion
b195342c066
vidhan
2
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-18
2018-09-18 09:33:48
2018-09-18
2018-09-18 09:35:04
0
false
en
2018-09-18
2018-09-18 09:35:04
1
1b725a13b914
0.720755
0
0
0
For years, pathologists have been identifying cancer by looking at slides containing tissues stained with fluorescent dyes to make the…
5
THE RISE OF AI’S APPLICATION IN THE ONCOLOGY SPACE FOR CANCER DIAGNOSIS For years, pathologists have been identifying cancer by looking at slides containing tissues stained with fluorescent dyes to make the malignant cells more visible. But, with the evolution of AI, this age-old process of identifying cancer tissues has become quicker and more accurate. AI has the potential to analyze and process heaps of data from various medical tests, predict a patient’s prognosis, and suggest possible diagnoses and treatment options to doctors. AI’s application in cancer diagnosis is creating breakthroughs, but the technology is expected to undergo multiple changes before taking on the ultimate challenge — curing cancer. For example, a new AI-based intelligent computer has a convolutional neural network or CNN (an artificial network of nerves) that can identify skin cancer more accurately than 58 dermatologists from 17 countries. Using machine learning, the device quickly evaluates the information presented to it and improves its ability to spot skin cancer cells. Read full story here => https://www.medicaltechoutlook.com/news/the-rise-of-ai-s-application-in-the-oncology-space-for-cancer-diagnosis-nwid-260.html
THE RISE OF AI’S APPLICATION IN THE ONCOLOGY SPACE FOR CANCER DIAGNOSIS
0
the-rise-of-ais-application-in-the-oncology-space-for-cancer-diagnosis-1b725a13b914
2018-09-18
2018-09-18 09:35:04
https://medium.com/s/story/the-rise-of-ais-application-in-the-oncology-space-for-cancer-diagnosis-1b725a13b914
false
191
null
null
null
null
null
null
null
null
null
Cancer
cancer
Cancer
17,070
Sandeep Ganesh
null
cec7ccc785a4
sandeep.ganesh
6
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-31
2018-01-31 15:37:28
2018-01-31
2018-01-31 15:47:22
1
false
en
2018-02-05
2018-02-05 15:06:53
19
1b7396c28e55
4.498113
3
2
0
I’m proud to say that after reading every single Medium article by a startup founder, we at imby were able to avoid pitfalls on our path to…
5
My Founder Rules: Lies, damned lies, and statistics Playing with Legos is a nice side effect of having kids. I’m proud to say that after reading every single Medium article by a startup founder, we at imby were able to avoid pitfalls on our path to launch. After a relatively minimal testing period, we raised the capital necessary to launch and become a unicorn this year. It took some time reading and synthesizing all that advice to reach this milestone in record time… but it can be done! Kidding! The perfect startup story doesn’t exist… besides, isn’t the #1 startup rule that failure is a necessary step to success? This is good news for me because the standard startup success story requires young co-founders based in Silicon Valley or New York City — on all of which counts I’ve failed. I’m a solo founder with a rockstar team whose average age is 33. Oh, and we’re based in Washington, D.C. I take solace knowing nearly 75% of statistics aren’t true. Besides, people generally avoid statistics that don’t align with their ideology, like this yahoo who ignored these sage statistics (also, why does his memo include a Table of Contents but no page numbers?). “Lies, damned lies, and statistics.” The startup ecosystem is so focused on the data of today that it often forgets entrepreneurship is about piercing current ideas with a seemingly idiotic vision that will be parochial in 5 years. And “the rules”, like all data sets, are rear-view looking. If we lived in a perfect world, this would be great. But we don’t, so we’ve got to take stock of our personal situation and analyze both quantitative and qualitative information. Without further ado, here’s my list. I hope you find it useful: 1. Be born between the years 1977–1981 Boom! This is the secret to success. When I turned 30 in 2008 the Great Recession didn’t even have a name. 
Instead of becoming part of the distrusted “over 30” set, I, along with everyone in real estate, was about to be upended in a massive readjustment of market fundamentals (MBA speak for “some old white guy said this is the way things are”). Did I survive because I’m brilliant, born with a silver spoon, and/or quick-witted? Nope, though having any of these three surely would have saved me from living in the scummy apartments of my youth. During this time, most of us stayed employed by virtue of our birthday timing and large doses of positive karma, because we had just enough experience to be useful and not enough to be expensive. Being born at the tail end of a small generation sandwiched between two mighty ones and just at the cusp of a massive digital change means we straddle worlds that don’t easily overlap, other than perhaps by Oregon Trail. Our salaries were meager in those lean years, but it was better than internships with free pizza while living with our parents. And if we made the right personal and professional decisions, we could save enough to bootstrap a business and be resilient enough to understand the world changes fast. 2. Have domain expertise in a dinosaur field No, I don’t mean paleontology tech (or do I?). I mean massive, slow-to-embrace industries riddled with human error that have not harnessed the value of enormous sets of valuable data. Think healthcare, real estate, and education. If you had the luck of birth (see #1) like I did, then you’re comfortable living in both analog and digital worlds, kinda like a middle child (hi, Scott!). This is no easy feat since these disruption holdouts are usually over-burdened by regulation, but understanding inefficiencies means being first in line to grab opportunities. I spent the last 15 years working to design and build places that people love; however, more than 100% of people use space (like these folks) and it’s impossible to reach and understand every perspective. 
As a result, a handful of naysayers with the time, energy, and often motivation to preserve their status quo often hijack public conversations and drive outcomes that are expensive for everyone. The zoning process requires people to dig through layers of public websites and spend hours of their precious time at meetings just to stay informed. That’s no way to operate in our digital world, and it results in further disenfranchisement. So people, now that the easy tech stuff is done, it’s all about the heavy lift. Solving the problems of our future will require the network to build the right solution-oriented team, years of experiencing industry inefficiencies, and the vision of a better way. 3. Choose to have children The choice to bring another person into this world means you have hope, and hope is critical to being an entrepreneur. This is an intentional and hopeful act against statistical evidence that your child(ren) will be well educated, productively employed, and not physically, mentally, or emotionally harmed, and, as my friend Rae points out, it ignores the severe over-population problem with the limited resources we have on this planet. Whatever. This sort of intentional act creates fortitude, grit, and, if you’re surviving the daily work of parenting, likely a sense of humor. These are all integral qualities for launching a company. Yes, having children is a cramp on time and money, but humility is also a critical entrepreneurial trait. Nobody brings heavier doses of humility than a small person who screams when you dismantle an item of trash that was their treasure. And nothing is more motivating to start a workday than escaping someone who needs to be reminded 5 times each day to put their pants on before leaving the house. Besides, practice makes perfect and being a parent requires presence and the ability to make lightning-quick decisions based on limited available facts, like this person and these people (turn your audio off). 
“The man with a new idea is a crank until the idea succeeds.” Mark Twain By now you’re wondering if I break all startup founder rules, and to this I say “No way!”. We are completely and utterly embarrassed by our first prototype. You can see it here. And if you click this link again in a couple months it will look completely different. Visit us at imby.community to stay updated on the Beta release in Spring 2018 or follow on Medium. about imby imby is an early stage startup that connects the people who use space to those that design and develop it. We believe people are more important than real estate and our world should reflect this value. By following our own advice we’ve engaged over 500 D.C. area residents via our pilot in D.C.’s Shaw neighborhood, which has attracted angel investor interest, and we’re gearing up to launch our Beta in Spring 2018. Imagine what we could have accomplished listening to wiser folks!
My Founder Rules: Lies, damned lies, and statistics
3
my-founder-journey-lies-damned-lies-and-statistics-1b7396c28e55
2018-04-01
2018-04-01 19:26:26
https://medium.com/s/story/my-founder-journey-lies-damned-lies-and-statistics-1b7396c28e55
false
1,139
null
null
null
null
null
null
null
null
null
Startup
startup
Startup
331,914
Imby Community
We bridge early, non-confrontational conversations between the community and real estate developers to create and support responsive, sustainable neighborhoods.
b3f0bab3c06
michellebeamanchang
20
25
20,181,104
null
null
null
null
null
null
0
null
0
355679162549
2018-04-04
2018-04-04 19:25:39
2018-04-06
2018-04-06 11:11:12
2
false
en
2018-08-24
2018-08-24 22:54:49
8
1b7425222e9e
5.277673
233
12
0
De-identified location data can help answer key transportation questions — from a regional level all the way down to a city block.
5
The more details urban planners have about what’s happening on the street, the more questions they can answer. (Jim Maurer / Flickr) Introducing Replica, a next-generation urban planning tool De-identified location data can help answer key transportation questions — from a regional level all the way down to a city block. Who uses the street, in what way, and why? These are common questions that planning agencies consider every day when trying to build better cities. The answers can help them see how well transit is connecting workers to jobs, explore the traffic impact of a new toll lane, or identify the need for bike lanes and wider sidewalks. But standard planning tools can’t always answer these questions with complete or current details. Too often, planners must rely on costly household surveys conducted years ago or trip counters focused on a single transportation mode. Some agencies have complex modeling software, but that’s often limited by older data and an overly technical interface. The result is an incomplete sense of city movement patterns and, consequently, a lower confidence in critical transportation and land use decisions. There’s a key to unlocking better planning tools — right inside the smartphone you might be using to read this article. Our phones have a powerful location awareness that’s transforming many aspects of urban life: helping us get directions, avoid a traffic jam, find a restaurant, or hail a ride. But this type of location data hasn’t widely been used in the service of planning more equitable and adaptable cities. We believe this powerful data source can help do just that. Meet Replica: a user-friendly modeling tool that uses de-identified mobile location data to give planning agencies a comprehensive portrait of how, when, and why people travel in urban areas. 
Replica provides a full set of baseline travel measures that are very difficult to gather and maintain today, including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school, etc). By updating these measures every three months, Replica also provides the ongoing ability to detect changes in these measures over time — helping planners answer questions about land use and transportation from a regional level all the way down to a city block. Most importantly, Replica does all that with personal privacy built into its foundation. A Virtual World With Real Qualities There are many apps and companies that collect data about your location history and travel patterns via your smartphone. The problem is this data often contains personal information. Replica starts with data that has already been de-identified, meaning we never handle the original, identifiable information. We are not interested in the movement of individuals; we are interested in the collective movement of a particular place. Replica uses this de-identified data from about 5 percent of the population to learn about travel patterns and create a travel behavior model — basically, a set of rules to represent who’s moving where, when, why, and how. But models aren’t perfect. So we gut check these rules using on-the-ground data (such as manual traffic counts or transit boardings) to make sure Replica is consistent with real-world movement patterns. We then match these models with what planners often call a “synthetic” population. That’s a very technical term, but the basic idea is that planners can use incomplete samples of census demographic data to create a broad new data set that is statistically representative of the full population. The statistical process also removes any ability to identify a particular individual in the data. 
(We open-sourced this work last year and encourage others to examine our assumptions or build on top of them.) When you combine travel behavior models with a representative population, you can confidently replicate trip patterns across a city or metro area. In Replica, workers go to work and families go out to dinner. Roads are congested at rush hour, downtown sidewalks are busy at lunchtime, and bike paths are full after school. People travel in taxis, on foot, and in carpools. These movements are faithful to real-world activities but not traceable to actual people or specific trips. Planners can use this virtual world to help them make decisions about, and study the impacts of, transportation or land use — without compromising individual privacy. From “What Now” to “What If” Let’s go back to the initial questions — who uses the street, in what way, and why — and consider them through the lens of a city planning agency that wants to make streets safer and friendlier to cyclists. Here’s a look at a Replica dashboard focused on a section of Main Street in Kansas City: The ability to understand in real time who’s using the street and why (above, a Replica analysis of Main Street in Kansas City) can help guide urban planners. Understanding current conditions. The analysis above shows that nearly 14 percent of all trips in this corridor are made by cyclists and pedestrians, and while most of these people are commuting to work, a notable share are shopping. These baseline counts of trip mode and travel purpose are historically very difficult to gather, but they can help focus planning decisions around empirical evidence. For example, knowing that cyclists and pedestrians are shopping in this area might help demonstrate to local shop-owners that business won’t suffer if street-parking spaces are replaced with a bike lane. Analyzing changes over time. Currently, there are still few cyclists in this area. 
But urbanists know that if a model (or, for that matter, a survey) tells you there aren’t many cyclists using a given street, that doesn’t mean people don’t want to bike there — they just might not feel safe enough. The ability to measure changes in usage patterns before and after implementing a bike lane could help planners demonstrate just how many more bike trips a new lane encouraged people to take, making it easier for local officials to support similar interventions elsewhere. Guiding planning decisions. Over time, we plan to update Replica with the ability to explore prospective service changes and interventions — modeling the impact of Scenario A against Scenario B. We believe this capability can help local officials make the most of limited funding and physical space. It can also help them engage the public around planning decisions in a clearer way. As we’ve written before, transparent models can become the basis for community workshops around things like inclusive street design, helping planners explain the impact that various options might have on different populations. We are currently building Replica to support the development of plans for Sidewalk Toronto. One of that project’s core objectives is to give communities new tools to adapt much more quickly than cities can today, and we believe Replica can not only help us explore new ideas but also communicate their potential impact to a wider public. As part of this process, we’ll be sharing Replica with local Toronto researchers and public agencies to gather feedback and make it more useful to them. Later this year, Replica will make its U.S. debut in the Kansas City and Chicago regions, with other areas to follow. We know models don’t provide simple solutions to planning problems. They’re tools — albeit ones we believe can be more accurate and useful than existing tools. Planning decisions still must reflect the priorities and values of the local community. 
And many factors beyond modeling outcomes go into urban planning decisions. But as one Kansas City planner told us during the development of Replica: “The more detail you give me, the more questions I can answer.” By giving planning agencies information that’s more accurate, current, and representative than what’s typically available, we can help them respond more quickly to their community’s needs today — and prepare for the future. Follow what Sidewalk Labs is thinking, doing, and reading with our weekly newsletter, or on Twitter and Facebook.
Introducing Replica, a next-generation urban planning tool
1,115
introducing-replica-a-next-generation-urban-planning-tool-1b7425222e9e
2018-08-24
2018-08-24 22:54:49
https://medium.com/s/story/introducing-replica-a-next-generation-urban-planning-tool-1b7425222e9e
false
1,297
Where technologists and urbanists discuss the future of cities.
null
sidewalklabs
null
Sidewalk Talk
ejaffe@sidewalklabs.com
sidewalk-talk
TECHNOLOGY,URBANISM,SMART CITIES,URBAN PLANNING,CITIES
sidewalklabs
Transportation
transportation
Transportation
14,888
Nick Bowden
Product Lead for Model Lab @sidewalklabs; Editor of Better Planning; previously founded @MindMixer & @mysidewalkhq.
15998d3d96d0
njbowden
1,975
888
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-11
2018-08-11 01:33:16
2018-08-11
2018-08-11 02:34:55
2
false
en
2018-08-11
2018-08-11 04:42:43
5
1b74d1224ec8
3.187107
0
0
0
This is a CVPR18 published paper on Action recognition in Videos by Manmatha and Smola et. al. Action recognition is an area of research…
4
Paper Review 4 — Compressed Video Action Recognition This is a CVPR18 paper on action recognition in videos by Manmatha, Smola, et al. Action recognition is an area of research where, given a video, the goal is to recognize what action is being performed. See Figure 1 below for an example, taken from the video action recognition dataset UCF101. Understanding videos is arguably the next frontier in deep learning and computer vision, as videos capture far more information than images can. This paper proposes an approach that consumes compressed videos directly. In hindsight, it makes total sense. The approach is shown not only to be faster than existing action recognition approaches (circa 2017) but also to give state-of-the-art results. Figure 1. UCF101 example Key Idea “Motivated by that the superfluous information can be reduced by up to two orders of magnitude by video compression (using H.264, HEVC, etc.), we propose to train a deep network directly on the compressed video.” The key idea is the use of compressed video instead of uncompressing the video into RGB frames. (taken from paper page 1 abstract) Benefits of this approach — taken from the paper (page 2): Consuming compressed video already removes superfluous information. Motion vectors in video compression provide us the motion information that lone RGB images do not have. With compressed video, we account for correlation in video frames, i.e. spatial view plus some small changes over time, instead of i.i.d. images. Challenges Videos have very low information density; the “true” and interesting signal is drowned in boring and repeating patterns. RGB frames of the video hinder learning of temporal structure in video. This paper goes on to show how these challenges are overcome by their compressed video action recognition approach. Background One needs a little crash course in video codecs. Briefly, the MPEG codec splits a video into I-frames, P-frames and B-frames. 
P-frames further contain motion vectors and residuals. For more information, read section 2 in the paper. In this work, only I-frames and P-frames are used. Figure 2. Compressed video background. I-frames, P-frames (motion vectors and residuals). Modeling The main modeling approach, in short: the video is encoded in its MPEG format, and this encoded video is then used to train 3 different models — an I-frame model, a motion-vector model, and a residual model. All 3 models are stacked to give the final action prediction at the end. The I-frame is processed as a normal image frame (in the MPEG codec, roughly one I-frame exists for each group of P-frames). This I-frame is just another image frame in the video; it is extracted and sent through a resnet-152 model trained with supervision using back-propagation, with video action labels at the output. The novel contribution of this work is the representation modeled for the motion-vector and residual data. For this, the authors simply accumulate the motion and residual data back to the latest I-frame in the video and use this accumulated data as input to a shallower resnet-18 model. They show visually in Figure 2 that accumulated motion vectors and residuals capture longer-term differences and show clearer patterns than the raw per-frame data. The final prediction is a simple weighted sum of the predictions of all three models. Experiments Main claims: compressed video is a better representation — experiments in section 4.1; this representation gives higher accuracy — experiments in section 4.3; faster training speed — experiments in section 4.2. Train and test setup — section 4. Ablation study — section 4.1. Overall results — tables 1 and 6. My opinions Pro: Really easy idea to understand. Pro: Video preprocessing into I-frames and P-frames is all that’s needed; the CNN and prediction aspects are not the novel part. Pro: The paper is really well written and the authors justify via experiments the claims made. 
Pro: The authors open-source the code for others to play around with and replicate the results. Con: The whole system is not end-to-end; the three models’ outputs need to be stitched together to get the final prediction. I wonder if this could be trained as a multi-task model with a common backbone CNN and three task-specific heads — one for I-frames, a second for motion vectors, and a third for residuals. Would that lead to even better performance, since joint training is involved and the backbone CNN could perhaps learn from both the I-frames and the P-frames? References Code https://github.com/chaoyuaw/pytorch-coviar Paper https://arxiv.org/pdf/1712.00636.pdf
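The fusion step described in the review, a weighted sum of the three models' predictions, can be sketched in a few lines. This is an illustration only: the weights and class scores below are made-up assumptions, not the values used in the paper.

```python
import numpy as np

# Sketch of the late-fusion step: the I-frame, motion-vector, and residual
# models each emit per-class scores, and the final prediction is their
# weighted sum. Weights here are illustrative assumptions, not the paper's.
def fuse_predictions(p_iframe, p_motion, p_residual, weights=(0.5, 0.3, 0.2)):
    stacked = np.stack([p_iframe, p_motion, p_residual])  # shape (3, num_classes)
    w = np.asarray(weights)[:, None]                      # shape (3, 1)
    return (w * stacked).sum(axis=0)                      # shape (num_classes,)

# Toy per-class scores over 4 action classes, one vector per stream
p_i = np.array([0.7, 0.1, 0.1, 0.1])
p_m = np.array([0.4, 0.3, 0.2, 0.1])
p_r = np.array([0.5, 0.2, 0.2, 0.1])

fused = fuse_predictions(p_i, p_m, p_r)
predicted_class = int(np.argmax(fused))
```

Since the weights sum to 1 and each stream's scores sum to 1, the fused scores also form a distribution, so the argmax can be read directly as the predicted action.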
Paper Review 4 — Compressed Video Action Recognition
0
paper-review-4-compressed-video-action-recognition-1b74d1224ec8
2018-08-11
2018-08-11 04:42:43
https://medium.com/s/story/paper-review-4-compressed-video-action-recognition-1b74d1224ec8
false
743
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Srikar Appalaraju
null
67a8276079
srikar.appal
4
5
20,181,104
null
null
null
null
null
null
0
null
0
f77989346bda
2018-01-19
2018-01-19 20:01:10
2018-01-22
2018-01-22 21:45:28
3
false
en
2018-07-20
2018-07-20 01:29:13
20
1b7529398cc2
9.293396
11
0
0
or, How our first approach to conversation management caused as many problems as it solved
2
Communication is hard (part 2) or, How our first approach to conversation management caused as many problems as it solved “Here I am, brain the size of a planet, and all you let me do is follow these silly rules.” In my last post, I said that the current generation of voice assistant platforms (Alexa, Google Home, etc.) is optimized for “conversations” that are extremely basic. For example, “Order my favorite pizza from Domino’s” works great, while, “Help me find a great gift for Mom” does not. The promise of voice assistants — and the expectation of consumers — is that more difficult requests will be resolved by conversational agents. At Pylon, we’re building the technology that can enable these assistants to address more robust queries from consumers. In this post, I want to walk you through our first attempt at Pylon to make a longer, more nuanced conversation work and some of the pitfalls we encountered with our initial architecture. This will get a little technical, but I’ll explain the concepts as I go, so don’t worry about getting lost if you’re not an AI researcher or software engineer. It’s something of a long read, but the organizational approach I’m talking about here is a fairly popular one. If I’m going to try to poke holes in it (and believe me, I am), it deserves a full treatment. By the end, I’ll have set the stage for introducing Pylon’s new approach in a future post. Inspiration + Instruction: How hard can it be? When we started, research showed that 51% of Echos were in kitchens, so we assumed a conversational agent that helps you get dinner on the table might be a good way to reach customers. Our assistant works with you to figure out what to cook and then walks you all the way through preparation. It’s called “Tasted”, and it’s currently available on Amazon’s Alexa Skill Store, Google Assistant, Facebook Messenger, and Slack. 
Here’s a video overview of the concept: A conversation with Tasted consists of two main tasks: choosing a recipe and preparing it. When you’re actually building the assistant, though, these tasks have to be broken down into individual interactions: Greet the user Let them search for a recipe (or make a suggestion to start the conversation) Present search results Handle the user’s selection from those results Give more detailed information about a recipe if the user requests it (ingredients, an overview of preparation, and special equipment required) Walk them through each step in preparing the dish In addition, the user may want a recipe’s ingredients sent via text message, or to save the recipe so they can make it later. After adding in support for some common phrases like ‘yes’, ‘no’, ‘next’, ‘go back’, etc., we’re somewhere around 15–20 different actions the user might want to tell us to perform (or, to use the industry term, 15–20 “intents”). Given the various information we need to collect from the user during the course of a conversation, there’s somewhere on the order of 10–15 distinct states that the system can be in at any given time (about to search for a recipe, presenting the results of a search, cooking a recipe, etc.). Currently, NLU platforms, like the one provided by Amazon as part of Alexa, or Google’s subsidiary Dialogflow, do not provide much help managing these different states. Amazon does have the concept of “session attributes” that can help you manage state, but you’re required to set them and read them entirely in code; the developer tools won’t help you with them. Dialogflow has “contexts” and “follow-up intents” that you can set in their UI, but you’re still forced to manage everything as if intents were the main component in the conversation, and keeping all the input/output contexts straight in your head can become its own problem as your conversation gets more complex. 
Again, “intent” is just a fancy word for “what the user just said”, which is going to be dependent on both what’s happened so far in the conversation and what the user wants to accomplish next. It makes sense why NLU platforms wouldn’t dictate how a conversational developer might manage a conversation’s state: It’s a hard problem for complex conversations, and it’s not much of an issue for simpler ones, so just let the developer deal with it. To recap: We have states, and we have intents that move us from state to state (saying “search for a chicken recipe” takes the user from the “about to search” state to the “here are your search results” state). This is starting to sound like a job for a finite state machine, right? As it turns out, maybe it’s not. The reasons for that are a bit technical, though, so you might want to grab a whiskey (or a coffee) while I tell you why organizing our system this way was a costly mistake. You *say* the states are finite, but they don’t feel like it Just in case you were following right up until I said “finite state machine”, here’s a quick crash course. Finite state machines (FSMs) are common, relatively simple computational models consisting of states and transitions that connect those states — all of these states might be interconnected, but not necessarily. For example, a standard vending machine can be thought of as an FSM. It starts out in the “waiting for money” state; when a user inserts money, it transitions to the “waiting for a selection” state, and so on. If the user makes a selection before inserting money, no transition happens. Those are the basics — user actions create transitions, but not all transitions are valid, depending on the current state of the system. A quick terminology note: When I say “FSM” in this post, I’m actually talking about nondeterministic finite automata (NFA). 
Strictly speaking, the “nondeterministic” part means a state can have more than one possible transition for the same input; just as important here, states in NFAs can have transitions that point back at themselves, which means you can’t determine ahead of time the maximum number of transitions that can take place between the start and end states. We need these repetitive sorts of transitions, though, because we need to support things like the user saying, “Could you repeat that?”. Here’s a picture of a relatively small NFA: Image credit: Wikipedia Managing a dialogue with one of these should be easy, right? Plenty of people have thought so; a quick Google search will lead you to a small army of tutorials and even a couple of FSM libraries integrated with Alexa boilerplate to help you directly hook the FSM up to requests coming in from Amazon. In fact, if you hit upon just the right combination of search keywords, you’ll end up at this slide deck from a CS course taught by Dan Jurafsky, co-author of Speech and Language Processing, which is essentially the Bible of introductory NLP (and then some). The deck is a great overview of some popular dialogue agents throughout history and the basic concepts in play, but I mention it here because slide 13 nails the problem with FSMs in far fewer words than I’m using: “too limited”. Let’s elaborate a bit (…more). There are different ways to deal with things like knowledge of the outside world, user input processing, etc., but the basic analogy between an FSM and a conversation is: States => Things the system says Transitions => Things the user says You probably have a natural “flow” in mind for your conversation — a way that makes the most sense for users to interact with your agent. Do this, do that, do a third thing … DONE! It wouldn’t make sense for a user to say “let’s cook it” right after your system welcomes them, for example (“let’s cook what?”). 
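To make the states-and-intents analogy concrete, here is a minimal FSM sketch in Python. The state names and intents are hypothetical stand-ins, not Tasted's actual configuration.

```python
# Minimal conversational FSM: states are things the system says,
# transitions are user intents. Names are hypothetical examples.
TRANSITIONS = {
    ("welcome", "search_recipe"): "search_results",
    ("search_results", "select_recipe"): "recipe_info",
    ("recipe_info", "start_cooking"): "cooking_step",
    ("cooking_step", "next_step"): "cooking_step",  # self-loop: "next", "repeat that"
}

def advance(state, intent):
    """Follow a transition if it's valid at this state; otherwise stay put."""
    return TRANSITIONS.get((state, intent), state)

state = "welcome"
for intent in ["search_recipe", "select_recipe", "start_cooking", "next_step"]:
    state = advance(state, intent)
```

Note that saying "select_recipe" from the "welcome" state does nothing here, which is exactly the kind of rigidity discussed in the rest of this post.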
So you only need to handle certain utterances at certain states, which saves you the work of making all those transitions… Here’s the problem with that: You know what state the conversation’s in, but does your user? They don’t have access to the road map that is your state diagram, so it’s entirely possible they’ll say something that doesn’t “make sense”, perhaps because their brain is still processing what you said at the last state they were on, not this state. Also, people change their minds. A user could get all the way to cooking a linguini recipe and say “You know what? Find me something with penne instead.” It’s looking like we’re going to have one of those almost-fully-connected state machines after all; if we don’t, our conversation is going to feel rigid, and our agent’s going to seem dumb — or worse, unfriendly. Before you know it, your conversation has gone from something like the neat little graph above to something more like this: This is (part of) an early Tasted FSM Notice how that’s messy enough that the visualization software gave up even trying to draw smooth lines for some of those transitions. It’s less “finite state machine” and more “flying spaghetti monster”. Of course, we’ve fixed this mess by now, but that’s a story for another post. Wait … you want that intent to do something different? Sure, you have clean, perfectly normalized database models backing all your prompts, transitions, conditions, and the back-end actions resulting from each intent. Inevitably, though, you’re going to have to change something about the conversation, and that means editing your user interactions via SQL queries. You could put a simple CMS interface in front of your database, but I wouldn’t recommend it. That’s likely to turn your visually informative state diagram into a spreadsheet, and you’re going to get a fresh tension headache from your eyes darting up and down trying to trace an imagined conversation flow. 
The system we started out on had a CMS by default, and we only used it once or twice for these very reasons. Our primary editing interface ended up being several people making conversation suggestions to an incredibly patient, competent, handsome engineer (who’s totally not the one writing this post). That engineer then edited some bespoke YAML files that had been lovingly hand-crafted from colons, dashes, and whitespace while questioning all his life decisions. The YAML represented system speech (several variations for each state of the FSM, based on the conversation context/user profile/user device at runtime) and the results of each user intent for each state. The database models were extracted from it by a simple(ish) import process. By the time we finally decided we’d had enough of all this, the YAML had accreted into over 4,500 lines of unmaintainable horror, and we started thinking there had to be a better way. Enough already! Before we go into the solution (or, I should say, pitch to the post that will talk about the solution), let’s distill the problems we encountered using an FSM for conversation management: Configuration gets redundant Part of our configuration maintenance problem came from not realizing up front that almost all user intents need to be “allowed” at every state to accommodate the way human conversation actually works. We could have made a separate YAML file that listed such “global” intents, but then we’d have had to duplicate the state names all over the place as we discovered, for example, that we’d need to handle the phrase “cook it” differently at the search results state than at the recipe info state. Such a refactor might have helped maintainability, but it wouldn’t have done anything for the next problem. Conversation isn’t linear “Neither are FSMs”, you say. 
“Just look at that first picture.” That’s true, but FSMs are a better fit when you only want to support a limited number of transitions in your interaction and actively forbid the user from taking certain paths (to push them down a mostly one-way path), not when you’re trying to cooperate with a fickle human to accomplish an evolving task, and trying to let that human more or less define that task (or think they are, at least). Subtasks are unnatural I haven’t really touched on this at all so far, but sometimes as a conversational developer you end up with interactions that involve more than one question and answer (a conversational “turn” in NLP parlance) but aren’t really part of the “main” conversation. Maybe these interactions are optional, maybe they’re required; either way, you want to end up on the same state in the “main” flow when you’re done with the tangential one. You’ll likely end up modeling this as some kind of stack, with the subtask being its own FSM. Voilá: mitosis for your maintenance problems. It’s all in your head Admittedly, a large part of the problem here is mental modeling — starting with an FSM as your architecture can steer you toward certain ways of thinking about your interaction and how it “should” work. You can preempt a lot of these issues if you know about them ahead of time and adjust your FSM to fit your conversation rather than the other way around. But we don’t think you should have to. At Pylon, we opted for another approach: Start over. Read and re-read some of the academic work out there on dialogue management (surprise: There’s a lot of it, and it’s kind of popular right now). Reframe the conversation model, starting from the concept that the user can take any action at any time. In the next post, we’ll spend a little more time talking about this new way of modeling a conversational agent. If you got all this way and were expecting the last section to solve all your problems, my apologies. 
As consolation, I offer you the sympathy of someone else in the same position and a commitment to be more constructive the next time around. Of course, I’d be remiss if I didn’t mention (again) that this is a tough problem. If you’re a company looking to enter the space but don’t want to get bogged down in configuration management, shoot Pylon an email; we’d be happy to help. Next: Communication is hard (part 3)
Communication is hard (part 2)
50
communication-is-hard-part-2-1b7529398cc2
2018-07-20
2018-07-20 01:29:13
https://medium.com/s/story/communication-is-hard-part-2-1b7529398cc2
false
2,317
Thoughts on conversational technologies, best practices and learnings from Pylon and our friends who are building the next wave of conversational services.
null
null
null
Navigating the Conversation
mike@pylon.com
navigating-the-conversation
ARTIFICIAL INTELLIGENCE,CHATBOTS,ALEXA,CONVERSATIONAL UI
pylonai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Josh Ziegler
null
29e5016207e6
josh_z
13
1
20,181,104
null
null
null
null
null
null
0
null
0
476053a5f77
2018-03-14
2018-03-14 14:59:51
2018-03-14
2018-03-14 15:13:16
1
false
en
2018-03-18
2018-03-18 22:46:48
0
1b7570038dd0
2.550943
0
0
0
2018 has begun to shape how business and customers interact. Technology is replacing old business models and offering new opportunities
5
Technology Trends in 2018 2018 has begun to shape how businesses and customers interact. Technology is replacing old business models and offering new opportunities. What does this mean for small businesses? What does this mean for workers? While we can’t predict which technological disruption will have the most impact, there are some trends that will shape future business patterns and processes. It’s important to be aware of macro trends that could affect you. What are some trends to watch out for? Niche Markets & Personalization As consumer interests evolve and information is easily accessible, there’s a trend away from mainstream brands and markets. As every customer is unique, they expect unique offerings and customization. This provides a lot of opportunities for small businesses or entrepreneurs to cater to specific audiences or markets. Niche or targeted marketing becomes more important; it allows businesses to tap into the preferences of individuals. Mobile to Artificial Intelligence We have seen tech behemoths like Amazon, Google and Microsoft invest heavily in an AI-first strategy. As a result, other corporations have followed suit, making AI easily accessible to even the smallest businesses through third-party tools. With Artificial Intelligence tools, small businesses can unlock a competitive edge in the ever-changing landscape; mundane tasks can be automated, virtual assistants can be employed, and Artificial Intelligence can provide better insights into customers. Netflix and Spotify use Artificial Intelligence to tailor recommendations to your preferences. Fluid workforce It is estimated that “the majority of U.S. workers will be freelancing by 2027”. 
According to Stephane Kasriel, the CEO of Upwork, “the growth of the freelance workforce is three times faster than the traditional workforce.” This is an opportunity for businesses to scale quickly without the increasing cost of full-time employees and to tailor their offerings to a different type of workforce. With this rise, businesses can scale up and down, quickly adapting to seasonalities and changes in markets. Privacy value platforms Last year, a lot of customers had their personal data compromised and established companies experienced security breaches. The US elections highlighted the importance of security and privacy. Businesses that prioritize privacy protection for their customers will have a selling advantage. How to stay ahead? Technology disruption is not an enemy but a friend; only those who don’t adapt get left behind. Here are some tips: Flexibility Businesses will continue to experience changes, sometimes sudden ones, to their processes and strategies. As an owner or employee, you’ll have to adapt as well. Leasing equipment instead of buying is a way to control cost without losing capabilities. Outsourcing is another way to stay flexible if you need to pivot quickly to stay ahead of the competition. With sites like Upwork and PeoplePerHour, businesses can easily hire someone to work on a project. Refinement of skills Disruption not only affects your processes or products; it also impacts skill sets and how you interact with customers. To stay up to date with changes, you’ll need a culture of continuous learning. Websites like Udemy, Coursera, and Khan Academy offer free or affordable classes that teach skills to keep you current with technological disruptions. Continuous Networking With the dynamic business environment, relationships with customers and other businesses become valuable. Having personalized offerings requires a better understanding of your customer. 
Sometimes you won’t be able to provide all the offerings needed to optimize a customer’s experience; you could partner with others to complement your products. Looking forward… Be comfortable being uncomfortable; technological disruption is your friend. 2018 is an exciting year with boundless opportunities. You can start developing your own unique strategies to stay relevant in this dynamic environment. Feel free to share this article and follow my blog for more weekly insights.
Technology Trends in 2018
0
be-aware-of-these-trends-in-2018-1b7570038dd0
2018-04-21
2018-04-21 16:01:05
https://medium.com/s/story/be-aware-of-these-trends-in-2018-1b7570038dd0
false
623
A place to get engaging insights
null
null
null
Insights Hub
null
insights-hub
GROWTH,INSIGHTS,TECHNOLOGY,RESEARCH,PRODUCT
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tobe
null
7deda60451f
Tobz
1
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-02
2018-09-02 18:15:54
2018-09-02
2018-09-02 18:21:02
2
false
en
2018-09-02
2018-09-02 18:21:02
8
1b75a8106695
1.972013
0
0
0
Saving Data
3
Decision Trees — Day 10 #100DaysOfMLCode “landscape photography of splitted road surrounded with trees” by Oliver Roos on Unsplash Saving Data You have a phone with a limited data plan that only sends messages in binary. Every night you want to tell your friend where to meet you, which could be your house, the park, the movies, or your favorite restaurant. You could send 00 for your house, 01 for the park, 10 for the restaurant, or 11 for the movies. This is a waste of your data plan. You see movies only 10% of the times you go out. There is a 40% chance you will meet at your house, 20% chance you will go to the park, and a 30% chance of going out to eat. Since you are most likely to go to your house, this can be sent as a 0. If you aren’t staying in, the next decision to make is whether or not you will get food. If yes, then you can send a 10. If not, you need to pick if you are going to the park. A yes is a 110 and a no is a 111 (to the movies!). Most of the time, you will only be sending a 0, one bit instead of the original two for 00. Rarely will you send the three-bit 111 for the movies. Over time this will save you data. Making Choices This graph is not linearly separable, but we can draw some decision lines that will isolate the green points from the red. The goal is to get the most information out of every decision. We could place the first split at y = 1. This will ensure that everything below 1 is green. We will have to make additional decisions about the points above that line. We could also use y = 6. This split would tell us that everything above y = 6 is red. There are more data points above y = 6, which means this is the better first split. Then we can split at x = 4. If a point is less than y = 6 and less than x = 4, then it is green. If it is not, then we have another choice to make. Our last split can be at x = 1. Now we have a decision tree for splitting red and green data. 
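The savings from the meeting-place code above can be verified in a few lines of Python. The codewords and probabilities come straight from the example; the entropy calculation is an added comparison against the theoretical lower bound.

```python
import math

# Codewords and probabilities from the meeting-place example above
codes = {
    "house":      ("0",   0.4),
    "restaurant": ("10",  0.3),
    "park":       ("110", 0.2),
    "movies":     ("111", 0.1),
}

# Average bits per message under the variable-length code
expected_bits = sum(p * len(code) for code, p in codes.values())

# Shannon entropy: the theoretical lower bound on average bits per message
entropy = -sum(p * math.log2(p) for _, p in codes.values())

print(expected_bits)  # 1.9 bits on average, beating the flat 2-bit scheme
```

The variable-length code averages 1.9 bits per message versus 2 bits for the fixed code, and the entropy (about 1.85 bits) shows there is little room left to improve.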
Udacity — Intro to Machine Learning, Sebastian and Katie Coursera — Machine Learning, Andrew Ng Machine Learning, Stephen Marsland (2015) Claude Shannon — Father of the Information Age [https://www.youtube.com/watch?v=z2Whj_nL-x8&t=203s] Information entropy | Journey into information theory | Computer Science | Khan Academy
Decision Trees — Day 10 #100DaysOfMLCode
0
decision-trees-day-10-100daysofmlcode-1b75a8106695
2018-09-02
2018-09-02 18:21:03
https://medium.com/s/story/decision-trees-day-10-100daysofmlcode-1b75a8106695
false
421
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
SciJoy
null
b65620178c67
SciJoy
3
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-27
2018-07-27 19:28:53
2018-07-27
2018-07-27 19:37:45
3
false
en
2018-07-27
2018-07-27 19:47:19
1
1b7a20a91306
1.870755
3
0
0
I have started doing the new Foundations to Machine Learning course by Bloomberg(https://bloomberg.github.io/foml/#about) and am loving it…
3
Excess Risk Decomposition I have started doing the new Foundations of Machine Learning course by Bloomberg (https://bloomberg.github.io/foml/#about) and am loving it so far. Here are some notes on a very interesting lecture. I highly recommend this course. Lecture 5: Excess Risk Decomposition The main goal of ML is to find a decision function of the inputs that minimises the loss function. When we search over all possible functions, the minimiser is called the Bayes decision function. Usually we restrict ourselves to a hypothesis space. This prevents us from overfitting and makes training much easier. Most famous ML methods rely on some hypothesis space. The difference between the perfect function that could exist and the perfect function within the hypothesis space is called the Approximation Error. Excess Risk = Approximation Error However, we cannot get this perfect function within this space as we are limited to our data and not all possible data. This creates another error: Estimation Error (Diagram 1). Generally, as our hypothesis space size increases, we expect estimation error to increase as there are now more ways in which we could be far away from the perfect function in the space. Excess Risk = Approximation Error + Estimation Error Notice there is some trade-off. Generally, as our hypothesis space F gets larger, our approximation error will decrease as there is probably a better function that can model the data, and estimation error will increase for the reasons given above. Now as we are training our model in batches, we start at some mildly optimised function and slowly converge to the perfect function represented by our data (Diagram 2). The difference between our current function and the most optimised function is called the Optimisation Error. 
We accept this error because even though we could in principle reach the most optimised version with our data, the last 1% or 0.01% of accuracy requires massive amounts of training and utilising things like second-order optimisers to get the best out of our network. So we forgo some accuracy for practical purposes. So finally our total error is: Excess Risk = Approximation Error + Estimation Error + Optimisation Error #100daysofML
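Written out, the decomposition the lecture builds up is the following (using f* for the Bayes decision function, f_F for the best function in the hypothesis space F, f̂_n for the best function we can find from our n data points, and f̃ for the function the optimiser actually returns; the symbols are my shorthand, not the course's):

```latex
R(\tilde{f}) - R(f^{*}) =
\underbrace{R(f_{\mathcal{F}}) - R(f^{*})}_{\text{approximation error}}
+ \underbrace{R(\hat{f}_{n}) - R(f_{\mathcal{F}})}_{\text{estimation error}}
+ \underbrace{R(\tilde{f}) - R(\hat{f}_{n})}_{\text{optimisation error}}
```

Each bracketed term matches one of the three errors named above, and the telescoping sum on the right collapses to the total excess risk on the left.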
Excess Risk Decomposition
5
excess-risk-decomposition-1b7a20a91306
2018-07-27
2018-07-27 19:47:19
https://medium.com/s/story/excess-risk-decomposition-1b7a20a91306
false
350
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Rahul Deora
null
c0b8e0e6e9b0
rahuld3eora
7
27
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2017-12-05
2017-12-05 23:11:58
2017-12-05
2017-12-05 23:15:40
4
false
en
2017-12-05
2017-12-05 23:15:40
0
1b7a36ca5da7
2.126415
5
0
0
Unless you know precisely how Shanghai’s 364 subway stations align with the metropolis above them, navigating the subterranean tangle of…
5
Shanghai’s Subway Smartens Up With AI Unless you know precisely how Shanghai’s 364 subway stations align with the metropolis above them, navigating the subterranean tangle of China’s largest city can leave you frustrated and hopelessly lost. On December 5th Alibaba, Ant Financial, and Shanghai Shentong Metro Group jointly launched a new voice interaction system for purchasing subway tickets. Riders can now tell a machine their destination — for example, Zhongshan Park — and the system will use the AutoNavi (高德地图) cloud map to issue a ticket for the nearest station. Alibaba CEO Jack Ma was among the first to try out the new system, which is expected to be available on all of Shanghai’s ticketing machines next year as part of China’s project to build smart cities’ IoT infrastructure. Current voice dialogue systems such as smart speakers and voice assistants require “trigger words.” The iPhone’s voice assistant, for example, activates when it hears “Hey Siri.” Alibaba’s new system uses multi-modal interaction. Zhijie Yan, head of the Alibaba voice team, says the goal is to eliminate trigger words altogether. “You just need to approach the machine and it will interact with you naturally.” Says Yan, “Real life environments are most likely noisy, and that remains to be the biggest technical challenge.” Voice recognition is difficult in open, noisy environments, which is exactly what the Shanghai subway is, especially during rush hours. Alibaba’s new ticketing system uses computer vision to identify a speaker’s lip movements and measure the distance between the speaker and the machine before finalizing its voice input. Visual signals are combined with audio signals captured by a large microphone array, with a supporting software signal processor suppressing noise and interference. 
Passenger: “Two tickets to Oriental Pearl Tower” Voice Ticketing Machine: “Recommended stop is Lujiazui, 285 meters from your destination.” Passenger: “One ticket only.” Voice Ticketing Machine: “Order changed to one ticket.” Last summer Yan led a five-person team on the subway project, identifying stability and rapid learning ability as further challenges to meet, as public service facilities like the subway need to function smoothly 24/7. The Shanghai subway is also introducing Alibaba’s facial recognition and Alipay for digital payment at subway entrances. This is just the first step: airports, train stations, event spaces, restaurants and shopping malls will soon be able to use multi-modal technology to open new human-machine interaction possibilities in information inquiry, interactive advertising, and direction navigation applications. Journalist: Yi Wang, Meghan Han | Editor: Michael Sarazen
Shanghai’s Subway Smartens Up With AI
7
shanghais-subway-smartens-up-with-ai-1b7a36ca5da7
2018-05-01
2018-05-01 17:15:16
https://medium.com/s/story/shanghais-subway-smartens-up-with-ai-1b7a36ca5da7
false
378
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
China
china
China
27,999
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-14
2018-07-14 17:24:57
2018-07-14
2018-07-14 18:19:09
3
false
en
2018-07-14
2018-07-14 18:24:13
21
1b7b8a7f2ba4
2.063208
0
0
0
Though I never had the opportunity to be a part of, you either just finished your Data Analytics/Science bootcamp or class…
3
Finding Data Professional Mentors or Meetups in San Francisco Bay Area Though I never had the opportunity to be part of one, you may have just finished your Data Analytics/Science bootcamp or class. #BelatedCongrats! Or maybe you are just part of the following: Career change into the Data profession Recent college graduate Forgotten what places to go to network Well, don’t worry. The following are some of the networking opportunities I found in the Bay Area to help get you started. Note: If you have some additional resources, let me know via LinkedIn! I am happy to add and credit your contribution. Data Science Photo by Mika Baumeister on Unsplash A university group under the name of San Francisco Data Science DataKind, SF A community of top data scientists and social sector leaders working to tackle the world’s toughest problems with data science. AI & Deep Learning Enthusiasts (organized by the same individual[s]) Group of individuals remotely or in person diving into the AI & Deep Learning education and research space. Accel.Ai I have not attended, but a friend has referred me to it. Organized by Laura Montoya (also organizer of the following) 2. Latinx In AI New group discussing AI ethics and other sessions. Great sessions based on what a friend has told me. I actually have not attended, but give it a try! Programming or Python Photo by Max Nelson on Unsplash SF Python Led by Grace Law and other members, SF Python hosts biweekly Wednesday sessions where you can grow your technical career by learning and sharing what you know with other local Python developers. Also organizing PyBay PyLadies, SF Chapter International mentorship group with a focus on helping more women and people who identify as women in a way significant to them become active participants and leaders in the Python open-source community. Code for San Francisco - Civic Hack Night Group of volunteers focused on civic tech and making government services better in San Francisco. 
Other Networking Opportunities Photo by Christian Fregnan on Unsplash Toastmasters (SF) Group organized to help you improve your public speaking skills Kaggler Noob Slack Channel Want to partner with a group of n0obs that aim to compete in ML/AI related Kaggle Competitions? Well here is the place to be! Techqueria Slack Channel A networking community for Latinx professionals in tech. I don’t participate in it, but it looks great! Recommendations from Others To be edited Resources Meetup.com Facebook Events Galvanize Events Metis Word of Mouth You can reach me through LinkedIn for any questions or suggestions. Cheers!
Finding Data Professional Mentors or Meetups in San Francisco Bay Area
0
finding-data-professional-mentors-or-meetups-in-san-francisco-bay-area-1b7b8a7f2ba4
2018-07-14
2018-07-14 18:24:13
https://medium.com/s/story/finding-data-professional-mentors-or-meetups-in-san-francisco-bay-area-1b7b8a7f2ba4
false
401
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Raul Maldonado
Data Analyst, Data Bootcamp Teaching Assistant, Salesforce Certified Administrator
16018de16db1
CloudChaoszero
17
25
20,181,104
null
null
null
null
null
null
0
null
0
8054bf9f4c47
2018-09-06
2018-09-06 07:49:18
2018-09-10
2018-09-10 08:59:40
5
false
en
2018-09-11
2018-09-11 08:05:46
4
1b7d3178d8eb
3.467296
16
0
0
It is an easy mistake to make.
4
Chatbots — how do we talk to them? Photo by Janni Kalafatis It is an easy mistake to make. In this day and age, it can be difficult to figure out whether you are talking to a human or a robot in a chat. For us at Convertelligence, it is very important to make sure that our clients know how crucial it is to let their customers know they are talking to an automated service. Not only because of privacy issues and the fact that some customers might feel “tricked”, but also because of how they interact and talk to the chatbot. Humans are able to decipher and understand long passages of text and will, in most cases, have no trouble answering the problem in question. A chatbot, however, will be programmed to only understand short, to-the-point sentences. If a chatbot receives a short story containing a complex issue, it will most likely produce an answer looking something like this: “I’m sorry, I didn’t quite understand. Would you like to try again?” (to which the customer will probably reply something like “f**k off”). To quote my colleague Petter Hohle’s article: “In order for the chatbot to answer questions for you, the chatbot needs to understand what your customer is saying when they send a message your way. This type of understanding is called Natural Language Understanding (NLU for short), […] Humans understand natural language effortlessly (their mother tongue, at least), but with machines, it’s a different story. They lack a very important piece of the puzzle: the human brain.” How do we solve it? So how do we tackle this issue? It is a tough one because as chatbots become more “human” and more widespread, customers will expect more and more from them. It is expected that chatbots should be able to answer complex issues. One way is to subtly tell the customers what issues the chatbot is programmed to answer and hope that the instructions serve the purpose of guidance rather than being a hindrance. 
However, it is easy enough telling people what to do; the hard bit is actually making them do it. One way we are trying to solve the issue is by gently pushing the customers in the right direction. By adding a “remember that I respond best to short sentences” or something similar in the bot’s welcome message, we have already hinted to the customer that there is no human being at the receiving end of the conversation. Our hope is that the customer will then abandon the novella they are preparing to write and instead go for a simple “can you help me?”. Keep it short and sweet As humans, we mimic each other, and so our aim is to make the customer mimic the chatbot’s short and precise language. This way the customer will most likely receive the answer and help they are looking for. This will please them and they will return to use the chatbot at another time (and tell all their friends about it). The company to which the chatbot belongs will also be pleased and will tell all their friends about how we (Convertelligence) make chatbots that work the way they are supposed to. All in all, by making the human users utilise the chatbots correctly by talking to them in the way that is intended, everyone is happy. The use of chatbots is undoubtedly increasing, and even though robots and AI are on the rise, we are a long way away from making them clever enough to understand extremely long texts. However, the more used we get to chatbots and the more common they become, our hope is that talking to them in short, precise sentences will become the norm. Until then, the next time you come across a chatbot, maybe try to steer clear of the long rant about how your electricity bill was high even though you have been on holiday in Spain and not used your washing machine for the past 3 weeks, and instead go for a simple “why is my bill so high?”. Who knows, you might get the answer you are looking for. Frenchie’s icon was made by Freepik from www.flaticon.com.
Chatbots — how do we talk to them?
271
chatbots-how-do-we-talk-to-them-1b7d3178d8eb
2018-09-11
2018-09-11 08:05:46
https://medium.com/s/story/chatbots-how-do-we-talk-to-them-1b7d3178d8eb
false
698
Software company specializing in chatbots and artificial intelligence
null
convertelligence
null
Convertelligence
contact@convertelligence.no
convertelligence
CHATBOT DESIGN,CHATBOT PLATFORMS,NATURAL LANGUAGE PROCESS,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE
null
Chatbots
chatbots
Chatbots
15,820
Anette Marie Berge
Copywriter @ Convertelligence (We're hiring 🚀 convertelligence.com/career)
d0069be55050
anette.berge
6
4
20,181,104
null
null
null
null
null
null
0
export API_KEY=<YOUR_API_KEY> { "document":{ "type":"PLAIN_TEXT", "content":"Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series." }, "encodingType":"UTF8" } { "entities": [ { "name": "Robert Galbraith", "type": "PERSON", "metadata": { "mid": "/m/042xh", "wikipedia_url": "https://en.wikipedia.org/wiki/J._K._Rowling" }, "salience": 0.7980405, "mentions": [ { "text": { "content": "Joanne Rowling", "beginOffset": 0 }, "type": "PROPER" }, { "text": { "content": "Rowling", "beginOffset": 53 }, "type": "PROPER" }, { "text": { "content": "novelist", "beginOffset": 96 }, "type": "COMMON" }, { "text": { "content": "Robert Galbraith", "beginOffset": 65 }, "type": "PROPER" } ] } ] }
94
e52cf94d98af
2018-04-02
2018-04-02 13:58:19
2018-04-02
2018-04-02 14:03:23
12
false
en
2018-04-08
2018-04-08 20:23:05
11
1b7d8f3302bd
6.082075
12
1
1
In this tutorial, we will learn how to do Entity Analysis using Google Cloud Natural Language API. This tutorial is a continuation of our…
5
Entity Analysis using Google Cloud Natural Language API In this tutorial, we will learn how to do Entity Analysis using the Google Cloud Natural Language API. This tutorial is a continuation of our previous tutorial where we learned how to perform Sentiment Analysis using the Google Cloud Natural Language API. Entity Analysis using Google Cloud Natural Language API The Cloud Natural Language API lets you extract entities from text, perform sentiment and syntactic analysis, and classify text into categories. So, we will explore all of these in this quick tutorial. Requirements: A Google Cloud Platform Project A browser such as Chrome or Firefox Setup: First of all, make sure you already have a Google Account. Sign in to the Google Cloud Platform Console. Create a new project by going to the Manage Resources Page and clicking on CREATE PROJECT. Remember your project ID as shown in the screenshot below. Also, make sure to add your unique project name. Bonus Tip: If this is your first time on Google Cloud Platform, new users are eligible for a $300 credit. Also, you will have to enable billing for your account. Configuration: Since account creation and project setup are complete, let us proceed to configure our project to enable the Google Cloud Natural Language API. Go to the Dashboard menu and click on APIs & Services. Then click on ENABLE APIS AND SERVICES. Search for Google Cloud Natural Language API and click on it. Click ENABLE to enable the Cloud Natural Language API. After a few seconds, the API should be enabled. Activate Cloud Shell: Google Cloud Shell is a shell environment for managing resources hosted on Google Cloud Platform. It is a command line environment running in the cloud. We’ll use Cloud Shell to create our request to the Natural Language API. First of all, click on the Activate Cloud Shell icon in the top right corner of the header bar as shown below. 
A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. Wait until the user@project:~$ prompt appears. Create an API Key: We need to generate an API Key because we will be using curl to send a request to the Natural Language API. To create an API key, navigate to the Credentials section of APIs & services in your Cloud console: Click the Create credentials dropdown and choose API Key. You will see a pop-up window with your generated API Key. Copy the API key and save it safely within your system. Let’s go back to the Google Cloud Shell command line and enter the following command. Replace <YOUR_API_KEY> with the API Key that we copied in the previous step. Executing the above line in the terminal makes sure that the API_KEY is added to the environment variables and is not required to be passed with each request. Entity Analysis using Google Cloud Natural Language API: The Natural Language API method we’ll use is analyzeEntities. With this method, the API can extract entities (like people, places, and events) from the text that we pass into it. Finally, we have come to the part where all the machine learning and magic happens. The Natural Language API lets you perform Entity Analysis on a block of text. First of all, let us create a JSON request with the text on which we would like to perform Entity Analysis. In your Cloud Shell environment, create the file request.json with the code below. You can either create the file using one of your preferred command line editors (nano, vim, emacs) or use the built-in Orion editor in Cloud Shell as shown below. AnalyzeEntities Request: To try out the API’s entity analysis, we’ll use the following sentence: Joanne Rowling, who writes under the pen names J. K. 
Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series. This will launch the editor in Cloud Shell. Click on File->New->File and enter the file name as request.json. Copy the contents from below and paste them into your request.json file. In the request, we tell the Natural Language API about the text we’ll be sending. Supported type values are PLAIN_TEXT or HTML. In content, we pass the text to send to the Natural Language API for analysis. The Natural Language API also supports sending files stored in Cloud Storage for text processing. If we wanted to send a file from Cloud Storage, we would replace content with gcsContentUri and give it a value of our text file’s URI in Cloud Storage. encodingType tells the API which type of text encoding to use when processing our text. Call the Natural Language API to perform Entity Analysis: You can now pass the request along with the API Key environment variable that we saved earlier to the Natural Language API through a curl command as shown below. We will send the request to the API’s analyzeEntities endpoint. Copy the curl command from here and run it in your cloud shell command line. Your response should look like this: From the response, you can see how we do Entity Analysis using the Google Cloud Natural Language API. Let’s deep dive into the response and understand what is going on. For each entity in the response, we get the following: entity type associated Wikipedia URL if there is one salience indices of where this entity appeared in the text Salience is a number in the [0,1] range that refers to the centrality of the entity to the text as a whole. Entity Analysis using the Google Cloud Natural Language API can also recognize the same entity mentioned in different ways. Take a look at the mentions list in the response: the API is able to tell that “Joanne Rowling”, “Rowling”, “novelist” and “Robert Galbraith” all point to the same thing. 
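As a quick sanity check, the entities, salience, and mentions can be pulled out of the response with a few lines of Python (a sketch; the JSON literal below is the analyzeEntities response shown earlier, pasted inline instead of read from a file):

```python
import json

# Parse the analyzeEntities response returned by the curl call above.
response = json.loads("""
{
  "entities": [
    {
      "name": "Robert Galbraith",
      "type": "PERSON",
      "metadata": {
        "mid": "/m/042xh",
        "wikipedia_url": "https://en.wikipedia.org/wiki/J._K._Rowling"
      },
      "salience": 0.7980405,
      "mentions": [
        {"text": {"content": "Joanne Rowling", "beginOffset": 0}, "type": "PROPER"},
        {"text": {"content": "Rowling", "beginOffset": 53}, "type": "PROPER"},
        {"text": {"content": "novelist", "beginOffset": 96}, "type": "COMMON"},
        {"text": {"content": "Robert Galbraith", "beginOffset": 65}, "type": "PROPER"}
      ]
    }
  ]
}
""")

# Print each entity with its type, salience, and the surface forms that mention it.
for entity in response["entities"]:
    mentions = [m["text"]["content"] for m in entity["mentions"]]
    print(entity["name"], entity["type"], entity["salience"], mentions)
```

Running this shows all four surface forms grouped under the single PERSON entity, which is exactly the coreference behaviour described above.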
Conclusion: That’s a wrap on how to perform Entity Analysis using the Google Cloud Natural Language API. Let me know your thoughts on this powerful API. Entity Sentiment and Multilingual Natural Language Processing will be covered in upcoming tutorials. See you all in the next tutorial. Keep an eye on my blog section for more interesting tutorials. Also, check out the latest tutorial on Getting Started with Flutter #machinelearning #techwithsach #entityanalysis #googlecloud #naturallanguageapi #tutorials
Entity Analysis using Google Cloud Natural Language API
268
entity-analysis-using-google-cloud-natural-language-api-1b7d8f3302bd
2018-06-13
2018-06-13 02:15:12
https://medium.com/s/story/entity-analysis-using-google-cloud-natural-language-api-1b7d8f3302bd
false
1,254
A collection of technical articles published or curated by Google Cloud Platform Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.
null
googlecloud
null
Google Cloud Platform - Community
null
google-cloud
GOOGLE CLOUD PLATFORM,DEVELOPERS,CLOUD COMPUTING,DEVOPS,TECHNOLOGY
gcpcloud
Google Cloud Platform
google-cloud-platform
Google Cloud Platform
4,042
SACHIN KUMAR
CTO — TupeloLife | Head of Google Developers Group Doha
b4fe80997fc1
sachindroid8
114
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-31
2018-07-31 16:15:28
2018-08-02
2018-08-02 23:31:47
12
false
en
2018-08-03
2018-08-03 16:35:07
0
1b7ed0c030f4
9.482075
0
0
0
Look at this equation:
4
Math Behind Reinforcement Learning, the Easy Way Look at this equation: Value function of Reinforcement Learning If it does not intimidate you, then you are mathematically savvy and there is no point in reading this article :) This article is not about teaching Reinforcement Learning (RL) but about explaining the math behind it. So it assumes that you already know what RL is but have some difficulty grasping the mathematical equations. If you don’t know RL, it is better that you read about it before returning to this article. We will go step by step into how and why the above equation came into being. States & Rewards Let’s consider a sequence of states S1, S2, …, Sn, each of which has some kind of reward R1, R2, …, Rn. We know that an agent (e.g. a robot) has the job of maximising its total reward. Meaning that it will pass by the states that provide the maximum rewards. Suppose the agent is at state S1; there should be a way for it to know what is the best path that maximises its reward. Take into account that the agent does not see beyond its immediate neighbouring states. For this reason, in addition to the reward of each state s, we are going to store another value V that represents the rewards of the other states to which each state is connected. For example, V1 represents the total rewards of all the states connected to S1. The reward R1 is not part of V1. But the reward R2 at S2 is part of V1 at S1. This way, by simply looking at the next state, the agent will have an idea what lies behind. The value V(s) stored at state s is computed from a function called the “Value function”. The Value function computes the future rewards. Notice that the final states, also called terminal states, do not have value V (V = 0) since there are no future states and no future rewards. The Value function also uses depreciation when computing future rewards. This is similar to what is done in finance, where $1000 that you will receive in 2 years is worth less than $1000 that you receive today. 
To express this idea, we multiply the $1000 by a certain discount factor 𝛄 (0 ≤ 𝛄 ≤ 1) raised to the power t, where t is the number of time steps until you receive the payment. For example, if you expect $1000 in two years and the discount factor is 0.9, then today’s value of those $1000 is 1000 * 0.9² = $810. Why is this important in RL? Suppose the final state S(n) has R(n) = 1 and all intermediary states S(i) have R(i) = 0; multiplying R(n) by 𝛄 to the power t on each state will give a lower V on each previous state. This will create a sequence of increasing V from the origin till the end, which constitutes a hint to the agent on which direction maximises its reward. So, until now we have established that each state has a reward R (possibly zero) and a value V that represents future rewards. In other words, V(s) is a function of the future rewards coming from other states. Mathematically it is written as V(S) = R(S1) + 𝛄*R(S2) + 𝛄²*R(S3) + … where St are all the states that are connected directly or indirectly to S. However, it is easier to express V(s) in terms of the next states instead of solely in terms of rewards R; this has the advantage of computing the V of the current state when we know the V of the next states, instead of summing all the rewards of all future states. The formula becomes V(s) = r + 𝛄 * V(s’) So far we have assumed that all states are connected in sequence; however this is rarely the case: each state can be connected to multiple other states to which the agent can potentially move. Suppose S1 is connected to S2, S3, S4, and S5; the value V at S1 should reflect this situation. V(S1) = (R2 + R3 + R4 + R5 + 𝛄*(V2 + V3 + V4 + V5))/4. What we are saying here is that V(S1) is the average of all values of the states to which it is connected. Remember that each V contains the value of future rewards, so by averaging the neighbours we also get an idea of what comes beyond them. 
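Before moving on, the earlier chain example (all intermediary rewards zero, a reward of 1 at the terminal state) can be checked with a short Python sketch built on V(s) = r + 𝛄 * V(s’); the five-state chain and 𝛄 = 0.9 are illustrative choices, not numbers from the article:

```python
# Values along a chain S1 -> S2 -> ... -> S5, computed backwards with
# V(s) = r + gamma * V(s'), where r is the reward of the next state s'.
gamma = 0.9
rewards = {"S1": 0, "S2": 0, "S3": 0, "S4": 0, "S5": 1}
chain = ["S1", "S2", "S3", "S4", "S5"]

values = {"S5": 0.0}  # terminal state: no future states, no future rewards
for state, nxt in zip(reversed(chain[:-1]), reversed(chain[1:])):
    values[state] = rewards[nxt] + gamma * values[nxt]

for s in chain:
    print(s, round(values[s], 4))
```

V increases from S1 (0.729) towards the goal (S4 = 1.0), which is exactly the directional hint the agent follows.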
This formula suggests that from S1 we can go to S2, S3, S4, and S5 without any preference for any particular state; however this is not accurate, because we know that there is a certain probability of going to each neighbour state and these probabilities might not be the same. For example, there might be a 50% chance to go to S2, a 30% chance to go to S3, and a 10% chance each to go to S4 and S5. Let’s call these probabilities p2, p3, p4, p5 respectively. So V(S1) becomes V(S1) = p2*R2 + p3*R3 + p4*R4 + p5*R5 + 𝛄*(p2*V2 + p3*V3 + p4*V4 + p5*V5). Unsurprisingly, these probabilities are called transition probabilities because they express the likelihood of transiting from one state to another. We can also express them as a matrix P where Pij is the probability of transition from a state i to a state j. When no transition is possible, Pij will be zero. Let’s arrange the formula to become more appealing: V(S1) = p2*(R2 + 𝛄*V2) + p3*(R3 + 𝛄*V3) + p4*(R4 + 𝛄*V4) + p5*(R5 + 𝛄*V5). Or in the following form: V(S1) = ∑ P(Sk|S1) * (R(k) + 𝛄*V(Sk)) where P(Sk|S1) is the probability of reaching state Sk knowing that we are at S1. In a general form, we can write it as: V(S) = ∑ P(Sk|S) * (R(k) + 𝛄*V(Sk)) Notice that k goes through all the states, which means we are summing over all the states! If you are surprised, don’t be. As we previously said, the transition probability is a matrix that gives the individual probabilities of transition from one state to the other. Since not all states are connected to all others, this matrix is sparse and contains lots of zeroes. For example, let’s consider the previous example where S1 is connected to S2, S3, S4, and S5, and let’s also suppose that the total number of states is 100, so we have S1 till S100. Doing ∑(P(Sk|S) * (R(k) + 𝛄*V(Sk))) where k goes from 2 to 100 is the same as summing from k = 2 to 5, since for all k ≥ 6, P(Sk|S) = 0. Stochastic Rewards Now take a deep breath, because we will add a new layer of complexity! The reward itself is not deterministic, which means you can’t assume that R is precisely known at every state. 
In fact, it is probabilistic and can take different values. To illustrate this fact, consider that you are an archer and you are aiming at a target; we will suppose there are only three states, S1 (aiming), S2 (hit), S3 (miss). As you surely know, the target is a bunch of concentric circles; the more inner circles you hit, the more reward you get. Suppose you are a rather good archer and you have an 80% chance to hit the target, so a 20% risk that you miss it. The reward for hitting the center is 100, the outer circle is 50, and the outermost circle reward is 10. The probability of hitting the center is 10%, hitting the outer circle is 30%, and hitting the outermost is 60%. Finally, we will also assume that completely missing the target will result in -50. So the value at state S1 will be: V(S1) = .8*(.1*100 + .3*50 + .6*10 + 𝛄*V(S2)) + .2*(1*-50 + 𝛄*V(S3)) Since S2 and S3 are terminal states, V(S2) and V(S3) are zero, but they are mentioned above as a reminder of the general formula. You can clearly see that the rewards in every state are multiplied by their respective probability then summed together, e.g. (.1*100 + .3*50 + .6*10) and (1*-50), and each state is multiplied by the transition probability that leads to it, e.g. .8*(.1*100 + .3*50 + .6*10 + 𝛄*V(S2)) and .2*(1*-50 + 𝛄*V(S3)). From the above we can deduce the general formula: V(s) = ∑ p(si, rj|s) * (rj + 𝛄*V(si)) It is worth clarifying the expression p(si, rj|s), which is read as the probability of transiting from state s to si with a reward rj. For example, p(S2, 100|S1) is read as the probability of going to S2 (the hit state) with a reward of 100 (hitting the center) given that we are at state S1. The answer is .8 * .1 = .08 (8%). If you haven’t noticed yet, this constitutes the second part of the initial formula (at the top of the page). Simply replace si, rj by s’ from S and r from R and you will get: V(s) = ∑ p(s’, r|s) * (r + 𝛄*V(s’)) where S is the set of all states and R is the set of all rewards. We will discuss action a and policy 𝜋 next. 
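The archer's value can be checked numerically. This sketch uses the probabilities from the formula above (.8/.2 for hit/miss, and .1/.3/.6 for the circles given a hit) and an arbitrary 𝛄, which drops out because S2 and S3 are terminal:

```python
gamma = 0.9  # any value works here, since the terminal values V(S2) = V(S3) = 0

# p(s', r | S1): joint probabilities of landing state and reward from state S1
outcomes = [
    (0.8 * 0.1, 100),  # hit the center
    (0.8 * 0.3, 50),   # hit the outer circle
    (0.8 * 0.6, 10),   # hit the outermost circle
    (0.2 * 1.0, -50),  # miss the target entirely
]
v_terminal = 0.0

# V(S1) = sum over (s', r) of p(s', r | S1) * (r + gamma * V(s'))
v_s1 = sum(p * (r + gamma * v_terminal) for p, r in outcomes)
print(round(v_s1, 2))  # 14.8
```

The four probabilities sum to 1, and each reward contributes in proportion to how likely it is, which is all the general formula is saying.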
Actions and Policy So far we have said that we transit between states by random chance. For example, we have a 20% chance to move from S1 to S2 and an 80% chance to move from S1 to S3. But we didn’t say how this is done! What triggers this transition? The answer is simply a certain action that the agent does when it is in a certain state. This action might not be unique, and there might be several available at a given state. So far, we have assumed that there is one implicit action called “move forward” or “do something” which will take us from one state to the other with a certain probability. As if the stochastic reward were not enough, the action is not deterministic either! You have no guarantee that if you perform an action you will have 100% success. Let’s take the example of the archer: your aim is to hit the center and get 100 points, so you aim at the center and shoot the arrow. However, there are plenty of factors that might cause you to miss. Perhaps you are not focusing enough, or your hand shook when releasing the arrow, or it was windy. All of these factors will affect the trajectory of the arrow. So you might hit the center, or one of the outer circles, or you might completely miss the target! Let’s consider another example where you are controlling a robot on a grid. Suppose that using a remote control you ordered the robot to move forward. The action is “move forward” and the expected state is the square that is in front of the robot. However, the remote control might be broken, or there might be some interference, or the robot’s wheels were badly positioned, and instead of moving forward the robot went right, or left, or backward. Bottom line: the probability of going from state s, after performing action a, to the state s’ and getting reward r is not 100%. That’s why we write p(s’,r|s, a), which is the probability of transiting to state s’ with reward r given a state s and an action a. As said earlier, every state might have several actions available to it. 
For example a robot might have "go forward", "go left", "go right", "go backward" in every state (or square). A hunter hunting prey might have different actions such as "fire a gun", "shoot an arrow" or "throw a spear", and each of these actions comes with a different reward. The strategy that dictates which action to use at a certain state is called the policy 𝜋. Guess what! It is probabilistic too! I know, life is hard :) So a hunter has some probability of using his gun, another probability of using his bow, and a third probability of using his spear. The same goes for a robot, which has certain probabilities to "go forward", "go left", "go right", or "go backward". As usual we quantify this in V(s) by averaging over all these possibilities, which gives the following: 𝜋(a|s) is the probability of using action a, following the policy 𝜋, given that we are at state s. V𝜋(s) is the value at state s when applying policy 𝜋. f(a, s, r) is used here as a shorthand for the value function V(s) that we established in the Stochastic Rewards section, with the addition of p(s',r|s, a) to reflect the dependency on the action a. The use of f(a, s, r) is simply meant to reduce the visual complexity and emphasise the role of 𝜋(a|s). So finally, after expanding f(a, s, r), we obtain the initial formula that is the subject of this lengthy article: V𝜋(s) = Σ_a 𝜋(a|s) Σ_{s'∈S} Σ_{r∈R} p(s', r|s, a) * (r + 𝛄 *V𝜋(s')) Value function of Reinforcement Learning Conclusion Hopefully, this article was able to demystify the math behind the value function in Reinforcement Learning. As a takeaway, you can understand the value function V(s) at state s as the average of the rewards offered by the actions available at this state s (due to a certain policy 𝜋), in which each action has a certain probability of moving the agent to a state s' with an immediate reward r and future rewards 𝛄 *V(s').
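The full policy-weighted value function can be sketched for a tiny hunter-style example. Everything here is invented for illustration: the action names, the policy probabilities, and the reward numbers are assumptions, not values from the article.

```python
# Sketch of V_pi(s) = sum_a pi(a|s) * sum_{s',r} p(s',r|s,a) * (r + gamma*V(s'))
# for a made-up hunter MDP with one non-terminal state "stalking".
gamma = 0.9

V = {"prey_down": 0.0, "prey_fled": 0.0}  # terminal state values

# pi(a|s): the hunter's stochastic policy in state "stalking"
pi = {"gun": 0.5, "bow": 0.3, "spear": 0.2}

# p(s', r | s="stalking", a): joint next-state/reward distributions per action
p = {
    "gun":   {("prey_down", 10): 0.9, ("prey_fled", -1): 0.1},
    "bow":   {("prey_down", 20): 0.5, ("prey_fled", -1): 0.5},
    "spear": {("prey_down", 30): 0.2, ("prey_fled", -1): 0.8},
}

# Average over actions (weighted by the policy), then over outcomes
V_stalking = sum(
    pi[a] * sum(prob * (r + gamma * V[s2]) for (s2, r), prob in p[a].items())
    for a in pi
)
print(round(V_stalking, 3))  # 0.5*8.9 + 0.3*9.5 + 0.2*5.2 = 8.34
```

Note how the two sums mirror the formula exactly: the outer sum is the policy average over actions, the inner sum is the expectation over next states and rewards.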
Math Behind Reinforcement Learning, the Easy Way
0
math-behind-reinforcement-learning-the-easy-way-1b7ed0c030f4
2018-08-05
2018-08-05 09:36:02
https://medium.com/s/story/math-behind-reinforcement-learning-the-easy-way-1b7ed0c030f4
false
2,155
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ziad SALLOUM
https://www.linkedin.com/in/ziad-salloum/
1f2b933522e2
zsalloum
20
41
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-13
2018-02-13 04:21:40
2018-02-13
2018-02-13 04:29:37
0
false
en
2018-02-13
2018-02-13 04:31:12
0
1b7efd224077
0.826415
1
1
0
I’m running out of creative titles for each post, so I think they’ll just be titled “Week #” from now on. Anyways, here’s what I did this…
1
Week 6 I’m running out of creative titles for each post, so I think they’ll just be titled “Week #” from now on. Anyways, here’s what I did this past week: Data science: did some exploratory data analysis on a dataset that contains information about all significant earthquakes (magnitude 5.5+) from 1965–2016. Used Python, Pandas, and Matplotlib to create some pretty basic graphs to show simple trends. Most earthquakes have been of magnitude 6.5 or less. This was our first assignment for the class, so other than the simple analysis, there wasn’t much else. Data communications/computer networks: I’m really starting to enjoy this class. Lots of hands-on projects/labs. We’ve learned how to use PuTTY to SSH into certain addresses, how to make an application that uses the Twitter API, the basic structure of the Internet, and a lot more. Looking forward to what else I’ll be able to learn! I also did some exploratory analysis on a diamonds dataset for my R programming class. Simple graphs, but I also did some things with vectors and found the minimum values through summary. All in all, I have been keeping busy with coursework, and am really enjoying everything so far. It’s great to take classes that are relevant to my interests and to use my current skills while learning new ones. Thanks for reading!
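The Pandas/Matplotlib analysis described above might look something like this sketch. The file name and the "Magnitude" column are assumptions about the earthquake dataset's layout; a tiny inline DataFrame stands in for the real CSV.

```python
# Sketch of the earthquake EDA: share of quakes at magnitude 6.5 or less,
# plus a basic histogram. Column name and file name are assumed.
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render without a display

quakes = pd.DataFrame({"Magnitude": [5.5, 5.6, 6.0, 6.5, 6.6, 7.2, 5.8]})
# In practice: quakes = pd.read_csv("earthquakes_1965_2016.csv")

share_65_or_less = (quakes["Magnitude"] <= 6.5).mean()
print(f"{share_65_or_less:.0%} of quakes are magnitude 6.5 or less")

# One of the "pretty basic graphs": a magnitude histogram
ax = quakes["Magnitude"].plot(kind="hist", bins=5, title="Earthquake magnitudes")
```

With the full dataset, the same boolean-mean trick gives the "most earthquakes are 6.5 or less" figure in one line.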
Week 6
1
week-7-1b7efd224077
2018-02-13
2018-02-13 13:23:55
https://medium.com/s/story/week-7-1b7efd224077
false
219
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Everett Yee
Studying Data Analytics at Chapman University in Orange, California.
40f412da4a4
eyee19
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-02
2018-04-02 18:33:40
2018-03-30
2018-03-30 15:59:00
0
false
en
2018-04-02
2018-04-02 18:34:28
1
1b8068aab092
2.528302
0
0
0
Digital signage is now a familiar site in the modern world. From displaying menus at restaurants to offering directions in a building…
4
4 ways AI is transforming digital signage Digital signage is now a familiar sight in the modern world. From displaying menus at restaurants to offering directions in a building, digital signage feeds audiences information in a dynamic way. But how much more valuable could this channel be? Can digital signage take on a role to personalize messages or respond to a situation in real-time? Yes, with help from artificial intelligence (AI) and deep learning. Deep learning With deep learning, AI-driven platforms evaluate large data sets, typically in real-time, leading to specific reactions. AI engines have access to huge amounts of data. Data, of course, isn’t any good unless it’s analyzed and delivers an actionable response. AI is all about automation. It doesn’t necessarily think for you. What it can do is draw conclusions, find patterns and react to situations. The platform can learn over time, making it an even more valuable tool. So, what does the future of digital signage look like with a boost from AI and deep learning? Personalized experiences Every customer wants to feel important and have a personalized experience. AI and deep learning are the tools to make it happen. Soon, digital signage platforms, powered by AI and deep learning, could actually recognize customers. Just like local stores once knew all their customers’ names, digital signage could act as a greeter. The digital signage could recognize the customer, say hello, and offer them useful information like what’s on sale, based on the customer’s buying history. While an amazing feat of technology, organizations should present this as a way to personalize what you see, making it a benefit rather than a privacy concern. Relevant content In-store shopping continues to decline in favor of online shopping. That means retailers need to create experiences for shoppers who visit their brick-and-mortar stores. 
Many have already been using digital signage to promote sales or offer customers an in-depth look at products. AI can take it to the next level with personalization. A business already has historical data on its customers and their behaviors. Specific content is already created that plays at certain times or days. That’s the baseline that informs what content these consumers would most want to see. With AI and deep learning, there are two ways to improve content: either by putting the data in context or by creating personalized ads. With context, the system starts with known behaviors, like an increase in purchases of sunglasses after sunny days. But that won’t always be true. Deep learning adds context to this rule by capturing and integrating content that informs the situation. Maybe it’s a rainy day, which the system could detect with weather data. Or the store knows, via sensors, that no one is shopping for sunglasses. This learning allows the signage to overrule the sunglasses promo, switching it to items shoppers were currently looking at, or to umbrellas. Deep learning by an AI platform enables targeting down to the individual. If a male shopper enters a clothing store, digital signage could detect that the shopper is in his 20s and wearing hiking boots. The system takes this information and then reviews what items are in stock or on sale that men who purchased hiking boots also bought. What it finds could then be communicated to the shopper in almost real-time. Not only does the customer see personalized information, it also prompts them to look at these items and make more purchases. How will AI evolve your digital signage? The investment in digital signage and AI will continue to grow. The global digital signage market is expected to grow to $31.71 billion by 2025, while the AI market is predicted to rise to nearly $60 billion by 2025. 
These sectors are seeing phenomenal growth, which means organizations all over the world are investing in them to deliver better results. Intelligent digital communications are changing the world. Are you ready to be a part of it? Originally published at www.digitalsignagetoday.com on March 30, 2018.
4 ways AI is transforming digital signage
0
4-ways-ai-is-transforming-digital-signage-1b8068aab092
2018-04-02
2018-04-02 18:34:29
https://medium.com/s/story/4-ways-ai-is-transforming-digital-signage-1b8068aab092
false
670
null
null
null
null
null
null
null
null
null
Retail
retail
Retail
16,358
Digital Signage Today
The leading source for news and information about #digitalsignage and #DOOH media and advertising. Get news to your inbox: http://ow.ly/7B7g30j6vDJ
2e9ae3157f35
DigSignageToday
6
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-25
2018-05-25 19:46:26
2018-05-25
2018-05-25 19:48:57
2
false
tr
2018-05-25
2018-05-25 19:48:57
10
1b830dea4516
1.575786
1
0
0
In total, 750 million PData tokens will be created. One PData token is worth 0.10 US dollars. The Ethereum-based price will follow the daily exchange rate…
5
2 — OPIRIA PDATA: TOKEN STRUCTURE In total, 750 million PData tokens will be created. One PData token is worth 0.10 US dollars. The Ethereum-based price will be announced according to the daily exchange rate. Pre-Sale Period The pre-sale period started on April 20 and will end on June 15. The minimum transaction amount has been set at $2,500. No soft cap has been announced for the pre-sale period; the cap is kept secret. If that level is reached, the sale will be halted and the main sale will begin within 24 hours. Should this happen, announcements will be made at https://opiria.io/, in the PData Telegram group, and on other social media platforms. Main Sale Period As noted above, the main sale will begin 24 hours after the pre-sale ends, which under normal circumstances means it will start on June 16. If the secret soft cap level is passed, the main sale may begin earlier. The main sale period will end on July 14. A 15% bonus will be given on the first day of the sale, decreasing by 1% each day until it reaches zero, meaning no bonus will be given during the last 14 days of the sale. A maximum transaction amount of 1 Ethereum applies for the first four hours of the sale. From the fourth hour onward, you can transact in whatever amounts you choose. A total sale of $30,000,000 is targeted. Accepted Cryptocurrencies During the private sale, Ethereum (ETH), Bitcoin (BTC), and Ripple (XRP) will be accepted. PData tokens will be purchased using an Etherwallet address. In the pre-sale and main-sale phases, only Ethereum (ETH) payments will be accepted, and these payments will be secured by smart contracts. Eligible Countries Citizens of most countries around the world can participate in the token distribution events, but some countries need to make legal arrangements on this matter. 
To list them: the United States, Canada, China, Afghanistan, Bosnia and Herzegovina, Guyana, Iraq, Laos, Syria, Libya, Uganda, Vanuatu, Yemen, Iran, Korea, Myanmar, and Ethiopia are the countries from which participation will not be accepted. The project requires KYC. Website: https://opiria.io/ Whitepaper: https://opiria.io/static/docs/Opiria-PDATA-Whitepaper.pdf Telegram: https://t.me/PDATAtoken BTC ANN: https://bitcointalk.org/index.php?topic=3076122.new#new BTC Bounty: https://bitcointalk.org/index.php?topic=3081090 Medium: https://medium.com/pdata-token Twitter: https://twitter.com/PDATA_Token Facebook: https://www.facebook.com/pdatatoken/ Reddit: https://www.reddit.com/r/PDATA/ My BitcoinTalk Profile: https://bitcointalk.org/index.php?action=profile;u=1780407
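The declining main-sale bonus schedule described in the text (15% on day one, minus one percentage point per day, zero for the final 14 days of the June 16 – July 14 sale) reduces to a one-line function. This is a sketch based on those stated terms.

```python
# Main-sale bonus: 15% on day 1, dropping 1 percentage point per day,
# reaching zero from day 16 onward (the last 14 days carry no bonus).
def bonus_percent(day):
    """Bonus percentage on the given 1-indexed day of the main sale."""
    return max(0, 16 - day)

print(bonus_percent(1))   # 15 — first day of the sale
print(bonus_percent(15))  # 1  — last day with any bonus
print(bonus_percent(20))  # 0  — within the final no-bonus stretch
```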
2 — OPIRIA PDATA: TOKEN STRUCTURE
5
2-opiria-pdata-token-yapisi-1b830dea4516
2018-05-26
2018-05-26 09:38:27
https://medium.com/s/story/2-opiria-pdata-token-yapisi-1b830dea4516
false
316
null
null
null
null
null
null
null
null
null
Data
data
Data
20,245
Burak Koçyiğit
Industrial Engineer / Cryptocurrency Enthusiast
34eedb2284dc
burakkocyigit1
927
1,203
20,181,104
null
null
null
null
null
null
0
null
0
8a546d9d472d
2018-02-12
2018-02-12 05:59:02
2018-02-12
2018-02-12 06:01:41
1
false
en
2018-02-12
2018-02-12 06:06:09
0
1b8360001f6d
1.132075
0
0
0
Known as “the most popular Chinese station in the world” ,KAZN AM900, is part of the Multicultural Group (MRBI). This station has millions…
5
[Interview] RealChain Partner Was Invited to “The Voice of the Blockchain in North America” Known as “the most popular Chinese station in the world”, KAZN AM900 is part of the Multicultural Group (MRBI). This station has millions of online listeners and is a significant influence in Chinese communities at home and abroad. 2018 is the year of Blockchain and digital currency. Mr. Wang Yi, the CEO of TaoDangPu (the first strategic partner of RealChain), was invited to The Voice of the Blockchain in North America. He was hosted by KAZN AM900 and interviewed this January. During the interview he interacted with online listeners all over the world regarding the role that RealChain plays in the market of high-end luxury appraisal. Through the interview, we learn that RealChain is aiming to solve the problems of fraud, inefficiency, and information asymmetry in the high-end consumer products industry. RealChain combines AI hardware and encrypted blockchain data, then cross-checks the information collected by the smart hardware with the data in the back-end. For example, the field exam of a product is completed by checking its spectral information against the back-end chain information. Then, through the combination of AI image recognition and information stored on the tamper-proof blockchain, an authenticated verification is completed quickly. This brings great convenience to high-end luxury transactions. TaoDangPu, RealChain’s appraisal data provider, will promote the development of RealChain. As more and more data institutions join, the vision of RealChain will be realized!
[Interview] RealChain Partner Was Invited to “The Voice of the Blockchain in North America”
0
interview-realchain-partner-was-invited-to-the-voice-of-the-blockchain-in-north-america-1b8360001f6d
2018-02-12
2018-02-12 06:06:10
https://medium.com/s/story/interview-realchain-partner-was-invited-to-the-voice-of-the-blockchain-in-north-america-1b8360001f6d
false
247
RealChain Official Blog
null
RealChainFoundation
null
RealChain
null
realchain
REALCHAIN,BLOCKCHAIN,APPRAISAL
RealChainFund
Blockchain
blockchain
Blockchain
265,164
Real Chain
null
c71a37037b1d
realchainfund
69
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-21
2017-09-21 18:33:36
2017-09-21
2017-09-21 18:34:38
0
false
en
2017-09-21
2017-09-21 18:34:38
1
1b836818426d
0.396226
0
0
0
Vanderbilt University has been working to create an artificial intelligence (AI) system that incorporates the cognitive processes of people…
5
AI Incorporates Cognitive Processes Typical of People with Autism Vanderbilt University has been working to create an artificial intelligence (AI) system that incorporates the cognitive processes of people with autism into the code. The research informs both the development of a robust AI and an enhanced understanding of the cognitive processes of people on the autism spectrum. The system replicates models of human cognition and is trained to solve many cognitive tests to improve its problem-solving abilities. Assistant Professor of Computer Science Maithilee Kunda said, “One of the big mysteries we have… Click to continue reading the article in the Wireless RERC’s Newsroom at Georgia Tech.
AI Incorporates Cognitive Processes Typical of People with Autism
0
ai-incorporates-cognitive-processes-typical-of-people-with-autism-1b836818426d
2018-04-20
2018-04-20 08:22:44
https://medium.com/s/story/ai-incorporates-cognitive-processes-typical-of-people-with-autism-1b836818426d
false
105
null
null
null
null
null
null
null
null
null
Autism
autism
Autism
6,934
Center for Advanced Communications at Georgia Tech
The Center for Advanced Communications Policy (CACP) and Wireless RERC are Georgia Tech research and policy development centers
deb28d7ae215
GT_CACP
4
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-05
2017-09-05 16:48:23
2017-09-05
2017-09-05 16:48:56
0
false
en
2017-09-05
2017-09-05 16:48:56
1
1b83e4fffcf0
0.845283
0
0
0
A great find — Kai-Fu Lee, only 4 days ago, giving an absolutely fabulous talk (for the first half) about AI, and specifically about deep…
4
A Great Find. A great find — Kai-Fu Lee, only 4 days ago, giving an absolutely fabulous talk (for the first half) about AI, and specifically about deep learning. He gives example after example about deep learning from his own portfolio (in China, of course, because in many ways it’s ALREADY superior to the ridiculous USA, and only getting better very quickly), and really scares the audience, and rightly so. Then he says “oh, we’re going to be fine, better than ever, even AI scientists completely agree with me”, and most of the audience is very relieved. He’s very good at the next ten years, but very bad at the following twenty years — although many of you disagree with me right now, which is funny. But he says that job loss will be real and vitally important even over the next ten years, and I agree with him. And military AI arms will be unbelievably terrible and critical, although he does not venture into that dangerous water, which he shouldn’t. But the world is much more complex and chaotic than even he believes, and it’s going to play out very differently in the following twenty years, to his amazement. That’s why Elon Musk is a visionary, a genius of the first order, while Kai-Fu Lee is just a very talented man. https://www.youtube.com/watch?v=SWMZ-sGx1Rk
A Great Find.
0
a-great-find-1b83e4fffcf0
2017-09-05
2017-09-05 16:48:57
https://medium.com/s/story/a-great-find-1b83e4fffcf0
false
224
null
null
null
null
null
null
null
null
null
Music
music
Music
174,961
Peter Marshall
I am extremely interested in AI, especially the not-so-good side of AI weapons and AI war, although the good parts are magnificent and wonderful too, naturally.
f6bab8ee3d29
ideasware
1,765
276
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-10
2018-08-10 18:25:37
2018-08-10
2018-08-10 18:31:50
6
false
en
2018-08-10
2018-08-10 18:31:50
8
1b862f56bcdd
4.014151
8
0
0
Every second, more than 65,000 search queries are processed by Google, 8,000 tweets are posted on Twitter, and just under a thousand photos…
4
Putting Data Visualization to Work for Your Analytics Projects Every second, more than 65,000 search queries are processed by Google, 8,000 tweets are posted on Twitter, and just under a thousand photos are shared on Instagram. These numbers shouldn’t be a surprise. We are being inundated by data. In fact, every day, 2.5 quintillion bytes of data are created. This avalanche of numbers and information is making the ability to analyze data increasingly important, and unsurprisingly, Data Scientist roles have increased by 650 percent. How will these data scientists share findings and trends? Data visualization. Image source: tableau.com This chart shows rising global temperatures; it takes average temperatures between 1961 and 2010 as the baseline, and then compares temperatures between 1850 and 2016 to this baseline. Even if you didn’t read the descriptions, you probably noticed the upward trend and could quickly make sense of this data. The thing is, there are over 100 points of information on this chart. That’s the power of data visualization; it allows us to make sense of a large amount of information quickly. Data Visualization in the Visual Analytics Process Contrary to popular belief, data visualization is not simply the last step of an analysis — there’s more to it than creating a quick chart for a presentation to management; it’s part of visual analytics. Imagine you were tasked with a comparative analysis of dietary trends across 150 countries. Looking at all this data in the form of a table would take a long time. Now take a look at the chart below. Image source: tableaupublic.com This chart shows dietary trends for 150 countries over 50 years in four categories: calories, protein, fat, and food weight. The world average is taken as our baseline, and each country is then compared to this average. Analyzing this data using visual analytics techniques makes the process much more interesting and less time-consuming. 
Let’s filter to the category ‘fat’, for example. Image source: tableaupublic.com It is easy to spot which countries have something interesting going on. Look at Canada: the consumption of fat is declining. Now look at Kuwait: here, the consumption of fat is increasing. It’s probably worth diving in and looking at what’s driving these trends. And that’s the power of visual analytics. It allows us to use data visualization not as the final step of the analysis, when we want to share our findings with the team, but as the means to making sense of large amounts of information, quickly. Why is Data Visualization so Powerful? A large portion of our brain is dedicated to visual processing. Image source: olgatsubiks.com Our brains can actually process 10 billion bits of visual information per second, which means that as soon as we see a chart, our brains start making sense of it. The large size of our visual cortex also allows us to handle visual information in what psychologist Daniel Kahneman calls “System 1,” which “is the brain’s fast, automatic, intuitive approach.” System 2, meanwhile, is “the mind’s slower, analytical mode, where reason dominates.” “System 1 is…more influential…guiding…[and]…steering System 2 to a very large extent,” Kahneman says. Being part of System 1, data visualization is more likely to result in actions and decisions than any other type of analysis. Data Visualization Is Part of Communication As a data scientist or a data analyst, you need to offer clear insights and trends, and then be able to communicate them effectively. In fact, the ability to communicate is what often determines the success of analysts and data scientists. Knowledge of math and statistics, experience in coding, and even domain knowledge are not sufficient to move the needle in any organization. Data scientists and analysts need to communicate their findings in a clear, concise, and easy-to-digest manner. 
And as you’ve probably realized by now, data visualization does this best. What Skills Do You Need for Data Visualization? A quick search on LinkedIn for “data visualization” returned 9,000 jobs worldwide. I then used Python to collect job descriptions. Here’s what I found in terms of necessary skills: Image source: olgatsubiks.com Data visualization overlaps with other roles in data, including software development, data engineering, and data science. Knowledge of Tableau, a data visualization tool, is in high demand, but the use of other data visualization tools is often required as well, including Microsoft PowerBI, Qlik, MicroStrategy, and Adobe Cloud. The ability to wrangle data using SQL, and understanding of core databases such as SQL Server and Oracle can also be essential. The rapid growth of data volumes won’t be slowing anytime soon. Data visualization removes barriers to data analysis. Those who can manipulate these tools will turn data visualization to their advantage and move ahead in their careers. Are you ready to harness the power of data visualization? Check out BrainStation’s Data Analytics course to get core skills and learn about data visualization. This article originally appeared on the BrainStation Blog.
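Tallying skill mentions across job descriptions, in the spirit of the LinkedIn analysis above, is a short script. The descriptions below are invented for illustration; in practice they would come from the collected postings.

```python
# Sketch: count how often each tool is mentioned across job descriptions.
# The sample descriptions are made up; the skill list echoes the article.
from collections import Counter

descriptions = [
    "Experience with Tableau and SQL required",
    "Build dashboards in Tableau or Power BI",
    "Strong SQL and Oracle background",
]
skills = ["Tableau", "SQL", "Power BI", "Qlik", "Oracle"]

counts = Counter()
for text in descriptions:
    for skill in skills:
        if skill.lower() in text.lower():
            counts[skill] += 1

print(counts.most_common())  # Tableau and SQL lead with two mentions each
```

The resulting counts are exactly what feeds a bar chart of in-demand visualization skills.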
Putting Data Visualization to Work for Your Analytics Projects
14
putting-data-visualization-to-work-for-your-analytics-projects-1b862f56bcdd
2018-08-13
2018-08-13 16:49:24
https://medium.com/s/story/putting-data-visualization-to-work-for-your-analytics-projects-1b862f56bcdd
false
812
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
BrainStation
Build your digital skills in business, design and technology. Offering courses in New York | Toronto | Vancouver | Costa Rica
14749378970b
BrainStation
2,401
1,165
20,181,104
null
null
null
null
null
null
0
null
0
700bae695f4b
2017-11-15
2017-11-15 15:26:35
2017-11-15
2017-11-15 18:56:08
12
false
en
2018-03-15
2018-03-15 17:41:44
8
1b865721430a
4.583962
31
1
0
IBM Watson Studio offers the only collaborative ecosystem of analytics tools, cognitive services, and data discovery, management, and…
5
Data Science, U and I: 10 powerful features on Watson Studio, no coding necessary IBM Watson Studio offers the only collaborative ecosystem of analytics tools, cognitive services, and data discovery, management, and governance capabilities designed for teams to minimize the time it takes to solve problems. To succeed in providing this unified and valuable experience to diverse data-driven teams, Watson Studio takes advantage of many graphical, UI-based tools alongside familiar, lower-level features such as API development, data science notebooks, and GitHub integration. In this post, we’ll cover 10 powerful features on Watson Studio you can use without writing a line of code. We’ll show how seamless integration with IBM Cloud services leverages the power of IBM and open-source technologies combined. 1. Find and explore rich assets in the Community The Community provides access to a variety of assets and educational material on data science, other data topics, and Watson Studio from myriad IBM sources and from across the web and the public domain. We’ll cover a few in this post — first among them is Data Sets. Searching for “transactions” data in the Community Use the Community to find data and incorporate the data sets into your Projects. Importantly, we maintain a large and growing repository of Notebooks, Tutorials, and Articles to easily get new users started and empower experienced data scientists with new tools and techniques. 2. Manage and discover data in Data Catalog Data Catalog helps you reclaim lost time with intelligent, automated, and simplified data discovery and governance. Home screen for the Great Outdoors Sandbox catalog With Data Catalog, you can manage secure connections to databases, govern permissions, and share and discover data. 
This data control panel integrates with projects, model development, and visualization tools so that you can quickly gain insight and develop solutions without forcing your whole team to navigate through a complex journey of discovery, permission, and extrapolation. 3. Refine and shape data with IBM Data Refinery IBM Data Refinery contains everything you need to refine, shape, and inspect your data. The UI provides an interactive environment where anyone on your team can connect to data wherever it resides and then quickly analyze and visualize. Read more about it. IBM Data Refinery for the Kidney Disease data On the Operation pane, you can define transformations and aggregations for your data graphically or with code. Summary after refinement 4. Create visualizations in a flash IBM open sourced the Brunel data visualization language, the backbone of the advanced charting and data visualization capabilities on Watson Studio. Use a combination of drag-and-drop intuition and code-level customization. Histogram of “Age” in the Kidney Disease data set Generate a histogram instantly, if you’d like. Configure the Columns, Chart types and Brunel syntax to develop more complex visualizations. 5. Rapidly build models with the Watson Machine Learning model builder The automated model builder uses the power of Watson Machine Learning to automatically prepare data and build models. In the video below, I upload a data set of customer churn, train three models, and consider the performance metrics when deciding which to save and deploy. Build a powerful customer churn model in a few easy steps You can drag and drop a data set, or use refined and connected data. Then, manually choose your models or let WDP do it for you. 6. 
Save, version, and deploy models with one click After creating models with Watson Machine Learning tools in the platform, you can automatically integrate version control, save and manage the model, and even generate an API endpoint to consume in a variety of applications. Model overview page 7. Monitor performance, enable continuous learning After you’ve built and saved models, you can automatically monitor the performance of your model over time. Then, select the metrics you’d like to track, the number of records, or whatever feedback you’d like to use as a trigger. Retrain and deploy. Configuration for continuous learning for ML models 8. Test your API and make predictions After deploying a model, create an API endpoint for that model version. You can test the API from the model’s page on the platform. This should give you an indication of how the model will behave given a particular observation. Testing the API for a multiclass classification problem You can also grab the code used to make the API request and the response from the icon on the top left of the chart. Simply copy/paste into your application to embed your machine learning model. 9. Define your flow Build complex Flows in Watson Studio. We’ve got another post detailing how to build expressive and powerful machine learning models in this interface. Flexibly leverage SPSS or Spark runtimes. The Flow interface for a customer churn model Above, we build an SPSS Modeler flow for the customer churn data. On the top left, notice how easy it is to merge data sets. Follow the data through transformations until we build a model and evaluate the model on unseen data. Create visualizations, write data to tables and files, and persist models with Watson Machine Learning from a graphical, intuitive interface. 10. Unite your team Watson Studio does more than integrate a suite of powerful tools and services — it unites your team. 
The concepts of Projects and Collaborators incorporate team members of various levels and roles into a managed environment. All you need to do is look them up. Add collaborators to your project You can control different permissions and access from the project level and from the Data Catalog, expanding your collaboration and governance capabilities without delay.
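The scoring request described in step 8 might be assembled along these lines. This is a hedged sketch only: the field names, values, URL placeholder, and payload layout are assumptions for illustration; the exact request format is the one shown on the deployment's own page in the platform.

```python
# Sketch of a scoring payload for a deployed churn model's REST endpoint.
# Field names and values are hypothetical; copy the real request from the
# model's page as the article describes.
import json

payload = {
    "fields": ["tenure", "monthly_charges", "contract"],
    "values": [[12, 70.35, "Month-to-month"]],
}

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <access-token>",  # placeholder, not a real token
}

body = json.dumps(payload)
print(body)

# Sending it would look like the following (requires the `requests` package
# and a real scoring URL, so it is left commented out here):
# response = requests.post(scoring_url, data=body, headers=headers)
# print(response.json())
```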
Data Science, U and I: 10 powerful features on Watson Studio, no coding necessary
132
data-science-u-and-i-10-powerful-features-on-watson-data-platform-no-coding-necessary-1b865721430a
2018-06-04
2018-06-04 14:10:12
https://medium.com/s/story/data-science-u-and-i-10-powerful-features-on-watson-data-platform-no-coding-necessary-1b865721430a
false
857
Build smarter applications and quickly visualize, share, and gain insights
null
null
null
IBM Watson Data
dsx@us.ibm.com
ibm-data-science-experience
IBM,DATA SCIENCE,MACHINE LEARNING,DEEP LEARNING
IBMDataScience
Machine Learning
machine-learning
Machine Learning
51,320
Adam Massachi
Watson Studio @ IBM
fe353759d0c
adammassachi
154
254
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-09
2018-05-09 12:58:08
2018-05-14
2018-05-14 02:13:23
0
false
en
2018-05-14
2018-05-14 14:14:45
10
1b872891e53e
6.943396
14
0
0
To paraphrase Socrates, all I know is that my opinions are my own, and even that is debatable.
5
Good Intentions and Goodhart’s Law To paraphrase Socrates, all I know is that my opinions are my own, and even that is debatable. I waited to post this, and am glad I did, because there are now so many other journalists and experts who are able to point out the serious ethical issues around consent and power dynamics with the Google Duplex demo. I am glad not to be alone in feeling queasy about a conversational assistant that can impersonate a human, that takes away the discomfort and inconvenience of having to interact with another human who is a non-native English speaker, and that can be the gentle robot that demands children say pretty please. Don’t get me wrong, I clapped like everyone else at the sheer technical excellence of a machine passing the Turing Test effortlessly, and I can see so much value in the feature itself. To help people with hearing loss avoid the social isolation of being unable to answer the phone, for instance, something I’m acutely familiar with. To help people communicate in a language they don’t speak. So many amazing things that could be done. The polite digital assistant has tremendous potential to manage communities at scale, where moderators are scarce and struggling. An automated defuser of tensions would be a great thing to have on 4chan, or on spiraling Reddit threads. Let’s go further. An automated, welcoming, gender-blind reviewer for code commits to open source may not be far away, which could go a long way toward reducing the gender bias that is known to exist when contributors are identifiable as women. And I’d probably feel a lot more comfortable practicing new languages with a conversational assistant than with a taxi driver when I need to get to my flight on time. But as my soul-sister Zeynep Tufekci points out, the issue isn’t about the demo. A robot that avoids deceptive slang and admits to being a robot only solves a part of the problem. 
Sadly, the news-cycle around this issue has died down, because the vast majority of people (technologists and luddites) are unable to articulate exactly what the problem is here. And if we can’t make this clear (and possibly even if we can), the feature will launch, possibly stripped of some um’s and ahh’s, and nobody inside or outside Google will be able to prevent it. For the rest of this essay, I’m going to talk about a hypothetical executive named Sridhar Pillai, who has nothing to do with Google, and a hypothetical senior engineer by the name of Jake Dent. Part #1: Good Intentions SP and JD are fundamentally optimists. Their fatal flaw, or hamartia in the Greek tragedy sense of the word, is their commitment to be good, do good and only see good in others. They are not products of deep and daily trauma, and their success has involved a sudden catapulting to the top. When people raise issues that threaten their rosy view of the world, these leaders are uncomfortable with “the negativity,” or they dismiss it as an aberration. “By and large, people are good,” they might say. “We have a few bad apples, but there’s no systemic issue.” They say this because to both SP and JD, a “systemic” issue is one that involves problems that are widespread and people who are rotten at the core, like a building built for cheap with faulty wiring leading to a terrible fire. They don’t get that a systemic issue is not just a case of fundamental incompetence or malice. A systemic issue arises when you fail to protect your system against attack. “For the most part, people are good,” may be true, at least at first. But if you have no plan to deal effectively with the few who are not, sooner or later, evil sneaks under the gate. Both SP and JD are not just better than the random assortment of terrible tech leaders infecting Silicon Valley, they are deeply, fundamentally good people who want to do the right thing. 
They are servant leaders, humble and easygoing, the kind of leaders who introduce themselves to you so you’ll open the door for them, or who make you a coffee if they’re making one for themselves. Since SP and JD are hypothetical, they have never really had technology used against them, nor have they ever desired to use it that way against others. They would be horrified by the satires of Google Duplex where people outsource conversations with their parents or delegate breakups to a digital assistant. They would never understand or believe people could do such things. But SP and JD aren’t going to read those satires. They aren’t going to hear Zeynep’s voice. Because hypothetical leaders like SP and JD will have a hypothetical communications team whose job it is to read the news for them and let them know, “Yeah there’s some controversy, but we knew it would happen and it’s managed. We made a statement.” SP and JD will likely also have hypothetical (non-digital) executive assistants who triage their email for them, and Jane Admin knows that SP doesn’t have time for ragey rants from random people before the next board meeting. Even on the off-chance that SP and JD read Twitter, they will have a team of lawyers telling them that under no circumstances should they respond. And let’s say there’s someone close enough to SP and JD who is trusted to give constructive feedback, such a person likely has a priority list a mile long, and there just might be other things on the agenda. Technology that is built by greedy and unethical people is, in some ways, simpler to dismantle. The fault-lines are clear. You can, with some investigation, find out where they cut corners, who they bullied, what lies they told to get the job done faster or cheaper. You can burn down the house if you need to. Technology built by fundamentally good people is a harder problem, because you can’t justify burning down the house.
Such technology institutionalizes the rapid execution of good intentions, but consistently fails at preventing malicious use. For example, a smart pacemaker, connected to the cloud, provides for software upgrades that don’t require surgery, better data analytics and more responsive care. But your pacemaker can also testify against you in court. Who thought that would happen? With fundamentally good people, railing against their lack of moral compass doesn’t work well. Activists lose the moral high-ground the minute they fault ignorance as equivalent to malice. And pointing out someone else’s lack of moral intuition can backfire if there’s even the slightest chink in your own moral authority. But there may be an alternative, one that scales to a leadership team instead of placing the burdens of humanity’s future on the shoulders of a single good person with the power to change the course of history. Part #2: Goodhart’s Law This article is the best layperson’s explanation of Goodhart’s Law, which states roughly that when you aim your efforts towards a proxy metric for the thing you actually want to optimize, over time the proxy metric and the real metric you’re aiming for start to diverge, making your proxy metric useless in achieving your real goal. Let’s say that you’re developing a cool new app that classifies people’s gender based on appearance. I have no idea why anyone would do this, it’s a terrible and dangerous idea, except of course this is happening already with all the usual lack of understanding of the gender-spectrum. Let us say that you want to optimize this system to be able to classify anyone. If your success metric is the number of successful classifications, and let’s say you can’t get further funding unless your success rate is > 95%, you’re not going to test out your classifier on any society that has a population of >5% of transgender, third-gender, or non-binary people. 
It’s not in your interest to prioritize this work until you get the next round of funding. If your success metric is the number of daily active users, then you succeed as that number goes up. If that means building a chat community for people to discuss a particular gender-classification and argue “I don’t think she’s a woman, there’s got to be a bug with how we classify Japanese jawbones” sure, you’ll do that without thinking about it, because it’s in your interest to do so. Tech companies are deeply, deeply metrics driven, and they very often have the wrong metrics. The Board (if the company is publicly held) looks at these metrics and expects them to get better. Changing the metrics that get looked at isn’t just the most effective way to drive culture change, it’s often the only way. So how might we change the numbers that get looked at by our hypothetical leaders SP and JD and by the Board to which they are beholden? Part #3: Beyond Good and Evil I have spent too much of my life studying (and teaching) ethics to ever make a moral claim. The reality of our situation is that a business justification beats a moral argument, almost every time. There are factors that go into that beyond anyone’s control, for instance that busy leaders (even hypothetical ones) don’t respond well to anger. They are trained to de-escalate and delegate, and so their response to any raised issue is fundamentally one of “How fast can I calm this person down and empower them to go fix their own problems?” Expecting that these leaders build enough moral intuition to avoid the daily dose of crises is unrealistic. No single human being is going to be able to see every blind spot, to imagine all the ways technology can be abused for evil. Moreover, it is unrealistic to expect anyone to see something when their job, their well-being, their happiness or even their sense of self and humanity are all dependent on their not seeing it.
When working with such hypothetical leaders, what we might do instead is present a virtuous cycle. Better proxy metrics (for user happiness, for trust and long-term brand value) that are intuitive enough that they make sense to the Board, the media and the world. A path to reduce frictions in optimizing those metrics. Incentive structures that make the old metrics harder to achieve. And a change management plan that allows for the press to follow along so that leaders are more focused on making the change rather than on crafting “We take this very seriously” statements on a weekly basis. None of this work will be as fundamentally satisfying as having a leader with the moral intuition and fiber to stand up in front of a crowd and take a stand. But we have been talking about hypothetical leaders. Maybe no real leader, in this day and age, dares to do that. We all have our hands a little dirty.
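Goodhart’s Law lends itself to a toy demonstration. The sketch below (every coefficient and name is invented purely for illustration) shows an agent with a fixed effort budget that greedily maximizes a proxy metric — engagement — while the true goal, user value, collapses:

```python
# Toy illustration of Goodhart's Law: optimizing a proxy metric
# (engagement) that only partially overlaps with the true goal (user value).

def proxy(quality_effort: float, gaming_effort: float) -> float:
    # Engagement rewards both real quality and attention-grabbing tricks,
    # and rewards the tricks slightly more.
    return 1.0 * quality_effort + 1.5 * gaming_effort

def true_value(quality_effort: float, gaming_effort: float) -> float:
    # Real user value comes only from quality; gaming actively erodes it.
    return 2.0 * quality_effort - 1.0 * gaming_effort

def optimize_proxy(budget: float = 10.0) -> tuple[float, float]:
    # Greedily split a fixed effort budget to maximize the proxy alone.
    return max(
        ((q, budget - q) for q in [i * 0.5 for i in range(21)]),
        key=lambda split: proxy(*split),
    )

q, g = optimize_proxy()
print("quality effort:", q, "gaming effort:", g)
print("proxy metric:", proxy(q, g))     # looks great on the dashboard
print("true value:", true_value(q, g))  # the thing we actually wanted
```

Because the proxy pays more per unit of gaming than per unit of quality, the optimizer pours the entire budget into gaming: the dashboard number is maximized exactly as the true objective goes negative.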
Good Intentions and Goodhart’s Law
84
good-intentions-and-goodharts-law-1b872891e53e
2018-06-02
2018-06-02 13:15:00
https://medium.com/s/story/good-intentions-and-goodharts-law-1b872891e53e
false
1,840
null
null
null
null
null
null
null
null
null
Silicon Valley
silicon-valley
Silicon Valley
7,540
Anat Deracine
Author of "Driving by Starlight" https://goo.gl/N35aS1 I'm on Twitter at @anat_deracine
731a4ae60536
aderacine
424
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-16
2018-06-16 14:56:55
2018-06-16
2018-06-16 15:05:43
1
true
en
2018-06-17
2018-06-17 14:28:14
0
1b875e939452
0.860377
0
0
0
With sites like Toutian and Douyin, the situation has progressed beyond just “AI making us dumb”. Yes it traps us in our echo chamber, we…
5
How AI is changing China’s value system With sites like Toutiao and Douyin, the situation has progressed beyond just “AI making us dumb”. Yes it traps us in our echo chamber, we read more of what we like to read already, but the far more dire consequence is its impact on the value system of a whole generation of (mostly young) Chinese. If having cosmetic surgery, dressing scantily, and doing “finger dance” in front of your smartphone will turn you into an internet sensation that earns you a five-figure sum every month, why bother “getting a real job”? Why put in the effort and study to become doctors, architects, engineers, when you can make a fast buck being as outrageous and attention-grabbing as you can? What gets thrown in is the immediate gratification of having millions of views and followers. Who needs real-world friends? Narcissism is becoming a national pastime, and Douyin is helping to spread it fast in China. This has not gone unnoticed by the government. Bans will come. Stay tuned.
How AI is changing China’s value system
0
ai-making-us-dumb-1b875e939452
2018-06-18
2018-06-18 15:05:05
https://medium.com/s/story/ai-making-us-dumb-1b875e939452
false
175
null
null
null
null
null
null
null
null
null
China
china
China
27,999
China Startup CEO
Joy and misery of working inside the whirlwind
7d88285922a5
chinastartupceo
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-24
2018-01-24 16:40:44
2018-01-24
2018-01-24 17:35:47
1
false
en
2018-01-24
2018-01-24 17:35:47
6
1b8778edbf47
6.041509
7
0
0
There’s quite a lot of chatter these days about imposter syndrome, particularly in the data science community. Caitlin Hudon’s quick…
3
When The Imposter is Found Out There’s quite a lot of chatter these days about imposter syndrome, particularly in the data science community. Caitlin Hudon’s quick thoughts on why that might be are on point — data science is a new field, most of us are coming from something else, and it’s rapidly evolving, which all adds up to a lot of anxiety. Social media of course took off at the same time as data science, and I’d offer up that social media amplifies the signals that everyone else knows more than you. This goes double if you’re an introvert like me living in the extroverts’ social media world. In my experience, imposter syndrome also has had a lot to do with identity but I’ll get to that later. It was only a couple of years ago that I first heard that term used and it immediately resonated with me. It seems to me it’s a chronic condition, this imposter syndrome (hence the term “syndrome”), and I suspect FOMO is a cousin of this condition. I also doubt I’m the only one to have experienced some acute, triggering trauma that induced long spells of depression and utter depletion of self-worth. There was therapy, there were strained relationships. Here’s what I experienced and how I found my way out. I truly hope this helps some folks!* Seriously. https://errantscience.com/blog/2016/10/26/imposter-syndrome/ An Imposter’s Platform: Sneaking into a PhD Fellowship The beginning of imposter syndrome for me probably was being in an Ivy League institution at all. I’m from small town Ohio, my dad’s a car technician and my mom worked for the county, I studied traditional mechanical engineering, and the idea that I’d be at a premier academic institution like Columbia University for a masters degree in engineering was just silly. But once I completed my masters, my academic advisor asked if I’d want to stay and try for a PhD. I couldn’t say no, and inside my head my career was off to the races. 
I previously hadn’t seen myself as “ambitious,” but now I saw a path to becoming a renowned academic expert in the energy sector (a major leap from getting a PhD, I know). My department was mechanical engineering, and I was studying energy systems quite intensely. Columbia was just founding their own data science institute (this was 2011–2012ish) and it was becoming increasingly relevant to my research. So I jumped head first into that rabbit hole too. Dropping Out of ML I was lucky enough to be in graduate school at the right place and right time for data science — Columbia University in 2012. As part of my research, we had installed a host of electricity, heat and hot water, and temperature sensors in my advisor’s house. We had a couple of research questions — how long of a data capture/training period do you need to be able to predict a house’s energy use? can we detect certain energy end-uses (called “NILM”)? — that were clearly going to need machine learning. With only the basics of statistical inference jumbled somewhere in my brain from undergrad five or six years prior, I made the mistake of enrolling in graduate level machine learning. A lot of it made sense. I was totally on board with perceptrons and neural networks. Even the practical application of support vector machines was intuitive. But it was the theory behind support vector machines and the VC dimension where I ran off the rails. I ended up dropping out because at the same time I was trying to study for my qualifying exam and could only fight so many battles. Sounds practical, but if you’re a doctoral candidate chances are you’ve done pretty well in school up until then and have never had to do so. So that was one failure. (For aspiring data scientists out there, this shouldn’t be news, but you really, really need to start with a sound foundation in statistical inference.) 
Failing Qualifying Exam and Leaving Academia Here’s a side lesson not wholly unrelated to imposter syndrome: I can’t recommend leaving Facebook and reading Cal Newport’s Deep Work enough. I didn’t until it was too late. I can recall spending days on end in the library at Columbia “studying” for my qualifying exam by doing practice problem after practice problem, but intermittently and with half my attention on Facebook or elsewhere. I had zero concentration and I paid the price by failing my qualifying exam. Could I hunker down and take it again in a year and probably succeed? Sure. But I took that failure to heart and didn’t have the grit or right support system within academia (who does, really?) to see that. I sought counseling and therapy to try and help figure out where I had gone wrong and what would be my path forward. It helped. I dropped out and moved to Baltimore. Loss of Identity & Confirmation of Imposter Status I mentioned where I came from and the sudden acquisition of ambition because identity has everything to do with imposter syndrome. Arriving at Columbia and getting my masters, I had stuffed the imposter syndrome away for a bit and was riding high on being a doctoral candidate at a prestigious institution. Having a PhD, being an expert, and shaping energy policy someday all became wrapped up in my identity. But dropping out of a class (something I had never had to do before) and then failing my qualifying exam were clear signals that I was, indeed, an imposter. I didn’t belong in academia, I didn’t know the first thing about machine learning (and therefore couldn’t be a data scientist), and I’d therefore have a worthless, middling career. In fact, I convinced myself of the exact opposite: I now had hard evidence that I would not be good at anything, and that I really had no business trying to make contributions to society. The human mind is a magical, beautiful, and often dark place. 
The Road to Redemption I was depressed for probably a year thereafter. After I moved to Baltimore I accepted a position as a consultant in DC to the U.S. Department of Energy. This should have been a sign that I wasn’t an imposter, that I had talents that were needed, but the mind didn’t latch on to that at the time. Imposter syndrome can be deafening. Here are some things that have helped me. Take a good look at that paycheck. The fact that you are being paid to do a job means you are capable of doing the job. Human resources and your team’s supervisors selected you because you had skills and experience and could learn whatever else you need on the fly. Unless you flat-out lied on your resume, you’re qualified. Period. I know, I know — you’re a cynic like me and immediately point to the hordes that eke by on a paycheck and have no skills. We’re not talking about them. This is you. Right now. This is a fact you can lean on when number 2 below rears its ugly head. Shut up, head. The explanations in your head when you’re going through some crap are probably not right. If you fail at something, it is not a sign that you aren’t good enough or aren’t smart enough or doggone it people don’t like you, and I know it’s tough to see that at times. Your mind will convince yourself of anything; in fact, that’s how we got to feeling like an imposter to begin with. Write it down, set it aside, read it a few days later, burn it, repeat. Remind yourself that everyone is faking it somehow. Get out of the hole by digging deeper (on a problem). Seriously, quit Facebook, lay off Twitter for a bit (or just use it to put stuff out and don’t ingest from it). Read Cal Newport’s Deep Work and get to work. Imposter syndrome vanishes as you become more experienced and more expert, and you’ve got to drown out the noise and contribute to get there. Work on important problems.
This is a bit of conjecture on my part, but there aren’t many ills that I don’t think can be relieved by working on problems that actually matter. Malaise, cynicism, lack of career direction, anxiety, FOMO, and yes even imposter syndrome — what better cure than showing up to work every day and instead of selling a product to someone that doesn’t need it or can’t afford it, how about connecting citizens to social services or each other? How about helping get us to the moon or to Mars? Seek professional help. Psychotherapy can help. I admit I’ve had mixed results but I suspect it’s part of my personality and lack of effort in finding the right therapist every time. But imposter syndrome can slide you into full-blown depression and if you haven’t been into that particular cave before I don’t recommend trying it. *I recently took the Myers-Briggs Type Indicator test and found I’m an INTJ. If you know what this means, hopefully you’ll recognize that this type of vulnerability and openness about what’s going on in my own head (to say nothing of social media in general) does not come naturally. I’m working on it.
When The Imposter is Found Out
88
when-the-imposter-is-found-out-1b8778edbf47
2018-04-12
2018-04-12 13:08:08
https://medium.com/s/story/when-the-imposter-is-found-out-1b8778edbf47
false
1,548
null
null
null
null
null
null
null
null
null
Imposter Syndrome
imposter-syndrome
Imposter Syndrome
1,516
Justin Elszasz
Data scientist for Mayor’s Office of Innovation in Baltimore, Maryland.
d2497ed867b
justinelszasz
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-09
2018-05-09 23:20:09
2018-05-09
2018-05-09 23:27:10
1
false
fr
2018-05-09
2018-05-09 23:43:15
4
1b894cc7e8cc
4.701887
0
0
0
This week Google announced Duplex, an addition to Google Assistant that lets it perform certain actions for us, such as…
4
The ethical problem behind certain new technologies This week Google announced Duplex, an addition to Google Assistant that lets it perform certain actions for us, such as booking appointments or making reservations. The catch is that if the task can’t be done online, with Duplex the digital assistant can make a phone call on your behalf, passing itself off as a human personal assistant. Forget the choppy sentences of automated answering systems or GPS units. Google made sure the assistant’s way of speaking sounds convincingly human. They even added “mmm”s and “um”s and other verbal tics. Impressive? Certainly! Worrying? Oh yes! There is a middle ground between sounding like an elderly person ranting against all new technology and applauding everything without thinking. I think this is a good moment to step back and take stock. It is important to understand that Google isn’t the first to do this kind of thing. It is already possible, online, to see people put words in other people’s mouths, complete with video, or to graft a celebrity’s face onto a pornographic video. All of these possibilities are worrying in themselves. In a world where “fake news” does so much damage, this kind of technology doesn’t help. But for Google to get into it is a step too far! Encouraging this kind of thing raises an enormous ethical problem. Unlike the laws of robotics, there are no ground rules for technology. The repercussions of the buzz chased by certain programmers and creators have no limits, and we are already at the point where there are real victims. Last March, a 49-year-old woman was killed by one of Uber’s self-driving cars.
These technologies aren’t ready yet, and yet they sometimes find themselves on real roads, surrounded by real people who never consented to potentially putting their lives in danger. Yet that doesn’t stop any of these companies. Compensation is paid to the victim’s family and life goes on… except for the person who died, but hey, the technology must live on! The worst part is that the car saw the woman. It just decided she was a false positive! In a similar vein, for a few years now there have been cars connected to the Internet (you know, the cars where, in the ads, someone browses Facebook on a screen built right into the dashboard? Because Facebook can’t wait!). A basic rule in life is to assume that absolutely everything can be hacked. If it’s connected to the Internet, then it’s an open door for those idiots who enjoy stirring up trouble without thinking about the damage they cause. Wired ran a test in 2015 in which journalist Andy Greenberg was behind the wheel of a 2014 Jeep Cherokee connected to the Internet through one of those consoles. Nobody had modified the vehicle, and yet two hackers (hired by the journalist) managed to toy with harmless gadgets like the air conditioning and the horn before attacking far more dangerous functions like the steering wheel, the accelerator, and the brakes! Of course, the company released a patch to close the vulnerability. But let’s be honest, many people don’t even update their operating system! Imagine if they also have to update their car. And if they don’t, and die in a crash, who is to blame? The company for shipping vehicles with security holes in the software, or the driver who didn’t install the update? Now, imagine this.
The current “fashion,” so to speak, among terrorists seems to be renting a van and driving it into crowds. But once every vehicle is connected to the Web, what if one of these madmen decided to mount a mass attack on every vehicle of a certain brand across the world? A terrorist act, carried out from his basement, that strikes globally, without having to bother leaving his hole or being exposed to law enforcement. Quite an advantage, no? It sounds alarmist, I know. But these are real possibilities. We mustn’t close our eyes on the pretext that technology makes our lives easier. We have to look at every aspect before letting something into our homes. The same logic applies to things with less impact but which are still potentially costly. Imagine someone hacking your smart refrigerator and making you lose everything inside it, or someone attacking your heating and lighting systems and turning absolutely everything on while you’re away. When you see your electricity bill, you’ll want to rip out all those gadgets and go back to the days when you pressed a button for the heat and flipped a switch for the lights. So what can be done? So far, the industry regulates itself… and therefore makes dubious decisions when they work in its favor. That Uber car should never have been on a real road. Neither should that Jeep. People should be able to know when they are talking to a machine, and you should be certain that your Facebook profile photo won’t end up on a porn star’s body, giving the illusion that it’s you. Governments are too slow to respond, and regulations are still too fragmented.
You might, for example, see the United States ban software that imitates someone’s voice, but if that software sits on a server on Canadian soil, Canadian law applies. Yes, it is that ridiculous. All countries need to come together and create a global set of rules for governing the Internet. Frankly, it is absurd that this has never been done. It can surely be explained by the fact that the people who govern are often completely out of their depth with anything technological. The repeated failures of Quebec’s still-nonexistent electronic patient record are a good example. So is Mark Zuckerberg’s recent appearance before the U.S. Senate, where many senators had no idea what was being discussed. But until that changes, technology companies will keep building. That’s their job. Without regulation, we will see more and more unfortunate events, deaths, and false news passed off as fact. Society should benefit from technology, not suffer from it. Right now we are approaching a precipice, and nobody seems able to take the helm and turn the ship before the catastrophe. The point here is not to be alarmist. But we must recognize that we are already starting to see the consequences, sometimes fatal, of the lack of leadership and government action around the world. Someone needs to act before something worse happens.
The ethical problem behind certain new technologies
0
le-problème-déthique-derrière-certaines-nouvelles-technologies-1b894cc7e8cc
2018-05-09
2018-05-09 23:43:16
https://medium.com/s/story/le-problème-déthique-derrière-certaines-nouvelles-technologies-1b894cc7e8cc
false
1,193
null
null
null
null
null
null
null
null
null
Facebook
facebook
Facebook
50,113
Daniel Lalonde
null
291c0db35190
dlalonde
4
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-05
2018-04-05 07:55:31
2018-04-05
2018-04-05 07:56:12
1
false
en
2018-04-05
2018-04-05 07:56:12
0
1b8b03b2b7ab
0.811321
0
0
0
null
1
Advanced technologies like artificial intelligence provide a reasoning mechanism for dealing with raster and spatial data. In combination with GIS technology, Artificial Neural Networks (ANNs) provide an appropriate decision-making system for dynamic spatial data and can thereby model real-world situations. Neural artificial intelligence is used for predictive analysis, which is very important for making decisions about real geospatial phenomena. One study, published in the journal Geophysical Research Letters, identified a hidden signal leading up to earthquakes and used this ‘fingerprint’ to train a machine learning algorithm to predict future earthquakes. The machine learning techniques are employed to analyze the acoustic signals coming from faults as they move as a result of seismic interactions and to search for patterns. The characteristics of this sound pattern can be used to give a precise estimate of the stress on the fault and of the time remaining before failure, an estimate that gets more and more precise as failure approaches.
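The pipeline described — extract statistical features from windows of the acoustic signal, then regress time-to-failure on them — can be sketched on synthetic data. Everything below is invented for illustration: the real study used laboratory quake recordings and tree-ensemble models, not this toy one-feature linear fit.

```python
import random

random.seed(0)

# Synthetic stand-in for the acoustic signal: windows get louder
# (larger amplitude) as the fault approaches failure.
def make_window(time_to_failure: float) -> list[float]:
    amplitude = 1.0 / (time_to_failure + 0.1)
    return [random.gauss(0.0, amplitude) for _ in range(400)]

def variance(xs: list[float]) -> float:
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Feature: inverse RMS amplitude, which in this toy setup grows
# roughly linearly with time-to-failure.
def feature(window: list[float]) -> float:
    return variance(window) ** -0.5

times = [0.5 * t for t in range(1, 20)]
feats = [feature(make_window(t)) for t in times]

# Ordinary least squares for a one-feature model: t ~ a*f + b.
n = len(times)
mf, mt = sum(feats) / n, sum(times) / n
a = sum((f - mf) * (t - mt) for f, t in zip(feats, times)) \
    / sum((f - mf) ** 2 for f in feats)
b = mt - a * mf

# Estimate time-to-failure for a fresh window recorded 1.0 time units out.
predicted = a * feature(make_window(1.0)) + b
print(f"predicted time to failure: {predicted:.2f}")
```

With the seeded synthetic data the fitted slope comes out positive and the prediction lands near the true value of 1.0; the point is only the shape of the pipeline (feature extraction, then regression), not the numbers.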
The advance technology like artificial intelligence provides a reasoning mechanism to deal with the…
0
the-advance-technology-like-artificial-intelligence-provides-a-reasoning-mechanism-to-deal-with-the-1b8b03b2b7ab
2018-04-05
2018-04-05 07:56:12
https://medium.com/s/story/the-advance-technology-like-artificial-intelligence-provides-a-reasoning-mechanism-to-deal-with-the-1b8b03b2b7ab
false
162
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
SATPALDA
SATPALDA is a privately owned company and a provider of satellite imagery and GeoSpatial services to the user community.
d0a28a37b4df
SATPALDA
11
102
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-03
2018-02-03 01:16:40
2018-02-03
2018-02-03 01:26:59
0
false
en
2018-02-03
2018-02-03 16:02:12
2
1b8b947c6ef6
0.909434
3
0
0
I think that almost every university publishes its grants on its web page. like this. https://volgenau.gmu.edu/research/grants . I am a…
5
Hottest Research Topic in the US I think that almost every university publishes its grants on its web page, like this: https://volgenau.gmu.edu/research/grants . I am a George Mason Data Science graduate student. One of the courses I took at GMU in my masters in Data Science was Data Visualization. The first project assignment was to take a database like the one in that link and re-visualize it. Design a visualization that follows the rules taught in class, one that ensures maximum visual cognition. In simple words, you see it once and you get the answer to your question. You ponder it, and you don’t feel dizzy. My question was “Which is the hottest research topic in GMU?!”. I defined hottest to be the one getting the most research funding. I had the data staring at me, so I web-scraped it and reformed it into this: https://public.tableau.com/profile/emad.mohamed#!/vizhome/Grants_6/Dashboard1 . Hover over the lines on Tableau to see grant details. The visualization was clear-cut. You can see the grants sorted by amount, and hence you can see the outliers. Obviously I am not interested in the bulk of grants that did not win much money, and I am more into the ones that won so much. This visual had my question answered. Now I wonder if I can repeat the operation for all universities in Virginia around me, if not all of the USA, if I can find a big grants database.
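The “hottest topic = most funding” definition above boils down to a group-by and a descending sort. A minimal sketch, using hypothetical grant records in place of the scraped GMU data:

```python
from collections import defaultdict

# Hypothetical grant records (topic, award amount in USD) standing in
# for data scraped from a university's public grants page.
grants = [
    ("Cybersecurity", 2_400_000),
    ("Data Visualization", 150_000),
    ("Cybersecurity", 900_000),
    ("Energy Systems", 300_000),
    ("Machine Learning", 1_100_000),
]

# Aggregate funding by topic, then sort descending to surface the
# "hottest" (best-funded) research topics and spot the outlier grants.
totals: dict[str, int] = defaultdict(int)
for topic, amount in grants:
    totals[topic] += amount

ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for topic, total in ranked:
    print(f"{topic}: ${total:,}")
```

The same two steps (aggregate, sort) are what the Tableau dashboard does visually; doing it in code makes it easy to rerun against other universities’ grant pages.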
Hottest Research Topic in the US
13
hottest-research-topic-in-the-us-1b8b947c6ef6
2018-05-10
2018-05-10 18:47:51
https://medium.com/s/story/hottest-research-topic-in-the-us-1b8b947c6ef6
false
241
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Emad Ezzeldin
null
33d6a2645b66
emad.ezzeldin4
2
21
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-26
2018-07-26 19:17:52
2018-07-26
2018-07-26 19:19:50
1
false
en
2018-08-22
2018-08-22 17:46:51
3
1b8e3fe557e0
1.686792
0
0
0
The industry of fintech is a rapidly growing one in which dozens of organizations invest in and implement every day. Along with the growing…
5
FinTech Trends 2018 The fintech industry is a rapidly growing one in which dozens of organizations invest every day. Along with the growing interest come new technologies that could change the entire industry. It can be difficult to stay on top of all the new releases, so to save you time, here are 3 of the top trends of 2018. Blockchain Technology - This decade-old technology has changed, and will continue to change, the way finances are handled. Through this technology, banks are able to create digital ledgers, accessible to anyone in the community, which record transactions in real time, making all financial processes more efficient and secure. Currently, IBM, the world’s 9th-largest information technology company by revenue, is creating a blockchain-based trade platform called Batavia, which already has six financial institutions on board; the project is due to be completed in 2018. Given the increasing interest, blockchain could become an integral part of the financial industry in the near future. According to a recent report, 20% of trade finance will incorporate blockchain by 2020. Power of AI - The impact of AI can be seen in almost every market. Recently, AI has become increasingly popular as more companies implement NLP (Natural Language Processing) virtual assistants, like Siri and Alexa. As the public embraced these new technologies, banks took notice of their value and began creating virtual assistants to help with bill pay or balance checks. AI will play a large role in financial institutions’ personalization of customer accounts and understanding of their spending habits. Chatbots powered by AI are expected to cut business costs by $8 billion by the year 2022. Within the financial industry, digital transformation has already begun, with a majority of institutions embracing the technology. For example, Capital One created an AI-driven SMS chatbot named ENO to offer guidance and help to customers.
Mobile Trading - As new mobile fintech apps hit the market, it becomes more likely that larger firms and start-ups alike will begin developing their own apps and trading software. For example, the recently released app Matador allows individuals to buy and sell stocks with friends in a simple, secure, and commission-free way. The app lets users invest without high broker fees, making it easier for those with less money to invest.
FinTech Trends 2018
0
fintech-trends-2018-1b8e3fe557e0
2018-08-22
2018-08-22 17:46:51
https://medium.com/s/story/fintech-trends-2018-1b8e3fe557e0
false
394
null
null
null
null
null
null
null
null
null
Fintech
fintech
Fintech
38,568
Ajay Nagpal
Ajay Nagpal is the Chief Operating Officer of Millennium Management and a Board member of nonprofit organization Echoing Green.
3fa45f3e5da2
ajaynagpal
28
355
20,181,104
null
null
null
null
null
null