Dataset columns (type, and observed range or number of distinct values):

audioVersionDurationSec: float64, 0 to 3.27k
codeBlock: string, lengths 3 to 77.5k
codeBlockCount: float64, 0 to 389
collectionId: string, lengths 9 to 12
createdDate: string, 741 distinct values
createdDatetime: string, lengths 19 to 19
firstPublishedDate: string, 610 distinct values
firstPublishedDatetime: string, lengths 19 to 19
imageCount: float64, 0 to 263
isSubscriptionLocked: bool, 2 classes
language: string, 52 distinct values
latestPublishedDate: string, 577 distinct values
latestPublishedDatetime: string, lengths 19 to 19
linksCount: float64, 0 to 1.18k
postId: string, lengths 8 to 12
readingTime: float64, 0 to 99.6
recommends: float64, 0 to 42.3k
responsesCreatedCount: float64, 0 to 3.08k
socialRecommendsCount: float64, 0 to 3
subTitle: string, lengths 1 to 141
tagsCount: float64, 1 to 6
text: string, lengths 1 to 145k
title: string, lengths 1 to 200
totalClapCount: float64, 0 to 292k
uniqueSlug: string, lengths 12 to 119
updatedDate: string, 431 distinct values
updatedDatetime: string, lengths 19 to 19
url: string, lengths 32 to 829
vote: bool, 2 classes
wordCount: float64, 0 to 25k
publicationdescription: string, lengths 1 to 280
publicationdomain: string, lengths 6 to 35
publicationfacebookPageName: string, lengths 2 to 46
publicationfollowerCount: float64 (range not reported)
publicationname: string, lengths 4 to 139
publicationpublicEmail: string, lengths 8 to 47
publicationslug: string, lengths 3 to 50
publicationtags: string, lengths 2 to 116
publicationtwitterUsername: string, lengths 1 to 15
tag_name: string, lengths 1 to 25
slug: string, lengths 1 to 25
name: string, lengths 1 to 25
postCount: float64, 0 to 332k
author: string, lengths 1 to 50
bio: string, lengths 1 to 185
userId: string, lengths 8 to 12
userName: string, lengths 2 to 30
usersFollowedByCount: float64, 0 to 334k
usersFollowedCount: float64, 0 to 85.9k
scrappedDate: float64, 20.2M to 20.2M
claps: string, 163 distinct values
reading_time: float64, 2 to 31
link: string, 230 distinct values
authors: string, lengths 2 to 392
timestamp: string, lengths 19 to 32
tags: string, lengths 6 to 263
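A minimal sketch of how a per-column summary like the one above could be reproduced with pandas, assuming the records live in a single flat CSV (the filename is a placeholder, not part of the dataset):

```python
# Summarize each column: length range and distinct-value count for strings,
# min/max for numeric columns.
import pandas as pd

df = pd.read_csv("medium_posts.csv")  # hypothetical export of this dataset

for col in df.columns:
    s = df[col]
    if s.dtype == object:
        lengths = s.dropna().astype(str).str.len()
        print(f"{col}: string, lengths {lengths.min()} to {lengths.max()}, "
              f"{s.nunique()} distinct values")
    else:
        print(f"{col}: {s.dtype}, min {s.min()}, max {s.max()}")
```

The lines that follow appear to be sample records flattened in this same column order, one value per line, with null for missing fields.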
0
null
0
null
2017-09-08
2017-09-08 13:51:49
2017-09-08
2017-09-08 13:46:40
1
false
pt
2018-02-27
2018-02-27 14:42:47
2
15a178571043
0.49434
0
0
0
There are many business questions that can be solved efficiently through an Intelligence process. That is why, in this…
5
Learn the main business questions and how to use CI to solve them There are many business questions that can be solved efficiently through an Intelligence process. That is why, in this guide, you will learn how Competitive Intelligence (CI) can make a difference in your company. Here you will find the 12 main business questions and learn the best way to solve them! Click here to download! Originally published at www.plugar.com.br on September 8, 2017.
Learn the main business questions and how to use CI to solve them
0
guia-conheça-as-principais-questões-de-negócios-e-saiba-como-usar-a-ic-para-resolvê-las-15a178571043
2018-06-06
2018-06-06 04:16:46
https://medium.com/s/story/guia-conheça-as-principais-questões-de-negócios-e-saiba-como-usar-a-ic-para-resolvê-las-15a178571043
false
78
null
null
null
null
null
null
null
null
null
Negocios
negocios
Negocios
3,439
Plugar Inteligência
We transform data into information that creates value and supports companies' decision-making.
8b0b87e36263
plugar
16
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-16
2017-10-16 06:56:29
2017-10-16
2017-10-16 07:19:12
0
false
en
2017-10-16
2017-10-16 07:24:47
15
15a1b9ec8b3c
1.290566
1
0
0
Our Presentation
1
People Analytics: Creating and Deploying in Minutes No-Code HR Predictive Analytics
Our Presentation:
https://github.com/Abuamany/Our-Presentation/blob/master/PA-IKA-UNPAD%20Morning.pdf
https://github.com/Abuamany/Our-Presentation/blob/master/PA-IKA-UNPAD%202.pdf
Developing Machine Learning Strategy for Business in 7 Steps (www.altexsoft.com): If you've succumbed to the hype around machine learning, you've likely heard hundreds of ML evangelists claim that data…
7 steps to master Machine Learning with python - Coding Security (codingsec.net): In this article I am going to give you the 7…
R Learning Path: From beginner to expert in R in 7 steps (www.kdnuggets.com): Learning R can be tricky, especially if you have no programming experience or are more familiar working with point-and…
Learning R in Seven Simple Steps (www.datasciencecentral.com): Guest blog post by Martijn Theuwissen, co-founder at DataCamp. Other R resources can be found here, and R source code…
LeaRning Path on R - Step by Step Guide to Learn Data Science on R (www.analyticsvidhya.com): One of the common problems people face in learning R is lack of a structured path. They don't know from where to start…
Our Rpubs: https://rpubs.com/heruwiryanto
Application R: The R Project for Statistical Computing (www.r-project.org): R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX…
RStudio (www.rstudio.com): RStudio makes R easier to use. It includes a code editor, debugging & visualization tools.
MLJAR: Platform for building Machine Learning models (mljar.com): MLJAR is a platform for rapid prototyping, development and deploying pattern recognition algorithms.
Download JASP - Free Statistical Software (jasp-stats.org)
BigML is Machine Learning made easy (bigml.com): BigML.com is a consumable, programmable, and scalable Machine Learning platform that makes it easy to solve and…
Assessment Online: Apply Magic Sauce - Prediction API (applymagicsauce.com): Apply Magic Sauce translates individuals' digital footprints into psychological profiles. It generates a Big Five…
pymetrics | play games to find your ideal job and optimal career path (www.pymetrics.com): Discover your strengths through neuroscience games and get recruited by your best fit companies
People Analytics : Creating and Deploying in Minutes No-Code HR Predictive Analytics
1
people-analytics-creating-and-deploying-in-minutes-no-code-hr-predictive-analytics-15a1b9ec8b3c
2018-04-27
2018-04-27 20:32:24
https://medium.com/s/story/people-analytics-creating-and-deploying-in-minutes-no-code-hr-predictive-analytics-15a1b9ec8b3c
false
342
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Heru wiryanto
A psychologist who enjoys psychometrics, mathematics, computational technology, and data science.
507770480f4f
heruwiryanto
129
67
20,181,104
null
null
null
null
null
null
0
null
0
33391d4d5793
2018-09-05
2018-09-05 22:18:22
2018-09-05
2018-09-05 22:18:42
1
false
en
2018-09-09
2018-09-09 15:47:54
0
15a25790944a
1.611321
0
0
0
ELIZA was a system designed by Joseph Weizenbaum that allowed "human correspondents" (Weizenbaum, 1966) to…
2
Eliza ELIZA was a system designed by Joseph Weizenbaum that allowed "human correspondents" (Weizenbaum, 1966) to communicate through a typewriter with a simulated psychologist. "This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the … [participants in the psychiatric interview] is free to assume the pose of knowing almost nothing of the real world" (Weizenbaum, 1966) and allows "the speaker to maintain his sense of being heard and understood." (Weizenbaum, 1966) ELIZA ultimately led its creator, Joseph Weizenbaum, to be "revolt[ed] that the doctor's patients actually believed the robot really understood their problems…[and that] the robot therapist could help them in a constructive way." (Wallace) Regardless, ELIZA demonstrates how influential the establishment of an environment in which a user is comfortable is on the outcome of a conversation. ELIZA was successful in establishing a common "environment and mindset" by establishing the context of a psychiatric appointment. Users were able to immediately recognize the limits of the interface, allowing them to concentrate on the successful "interchang[ing] … of thoughts and words." (OED Online, 2017) ELIZA was also successful in establishing a "shared language," in that it employs the language of the user to construct a dialogue between that user and the interface. Both examples show how the establishment of a shared understanding can allow for a more effective exchange. By engaging in "mutually beneficial, peer-to-peer exchange[s]," (Dubberly & Pangaro, 2009) a conversational interface provides the climate for the successful exchange of "thoughts and words." (OED Online, 2017) ELIZA was particularly effective in creating "[an] engagement in mutually beneficial, peer-to-peer exchange." (Dubberly & Pangaro, 2009) Implementations of "categorized dyadic natural language communication" (Weizenbaum, 1966) like ELIZA or similar instruments, especially when users are committed to engaging in a conversation, would allow for improved interactions on conversational interfaces and could improve these interfaces' "naturality." (Lopez, Quesada, & Guerrero, 2017) In doing so, interfaces would provide environments for improved "interchanges" (OED Online, 2017), and the systems powering those interfaces would be able to provide improved responses, because a greater willingness from users to interact with CIs results in improved "exchange[s]" (Dubberly & Pangaro, 2009) with users.
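The mechanism behind that "shared language" is simple to sketch: ELIZA-style systems match the user's utterance against patterns and echo fragments of it back with pronouns reflected. The rules below are illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# Minimal ELIZA-style responder (illustrative rules, not the 1966 script).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    # Swap first and second person so the reply mirrors the speaker's words.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more about {0}."),  # catch-all
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))

print(respond("I feel that my work is ignored"))
# -> Why do you feel that your work is ignored?
```

Because the reply is built from the user's own words, the "shared language" is guaranteed by construction, which is exactly the property the passage above credits for ELIZA's effectiveness.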
Eliza
0
eliza-15a25790944a
2018-09-09
2018-09-09 15:47:54
https://medium.com/s/story/eliza-15a25790944a
false
374
Updates, findings and other things from my thesis, Conversational Symbiosis Amongst Humans and AI in the Context of Plateaus in Romantic Relationships
null
null
null
Men Are from Kepler-438b, Women Are from Kepler-442b
null
men-are-from-kepler-438b-women-are-from-kepler
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Scott Dombkowski
null
f6d1a37ec0ee
sdombkow
24
30
20,181,104
null
null
null
null
null
null
0
null
0
7f60cf5620c9
2017-09-30
2017-09-30 13:46:12
2017-09-30
2017-09-30 13:59:47
2
false
en
2018-08-04
2018-08-04 21:50:37
2
15a2d59be264
2.968239
0
0
0
Waiting for the killer app
5
AR, AI, Ay Yi Yi: 2 Takeaways from NYC Media Lab 2017 Waiting for the killer app I was a journalist, so my heart and intellect are more in content creation than product development. As far as getting media out there, I work for non-profits and artists, so I want to try everything that comes out of the pipe, but to maximize what we can afford beyond all expectations. So, my biggest takeaway from NYC Media Lab 2017, September 28 at The New School, was a broader understanding of why augmented reality (AR) is so freaking hot. It engages more senses to enhance media, and should eventually let us use media hands-free. I knew it! I knew the natural advancement would be a levitating, invisible map, or a game, or a copy of the Goblet of Fire. The AR among us, like Snapchat selfie filters and Pokémon Go, augments the reality of our physical, real-world environment, our face and location. That’s what’s augmented. The vision, though, is to free AR from the phone, and house it in something potentially wearable (and liked, unlike Google Glass). AR should shake up consumer convenience, and myriad other sectors. Cardiologists now leave a sterile environment to work with existing programs that overlay a 3D image of a heart over 2D scans enhanced by radioactive substances. Columbia University’s vasAR team presented a 3D heart model that virtually suspends in front of a surgeon as he or she guides instruments through valves during a minimally invasive surgery. A dentist toggling his attention between a screen and a patient’s mouth would appreciate this technology, said Jeannette M. Wing, Avanessians Director of the Data Science Institute and Professor of Computer Science, Columbia University. But back to how AR may or may not affect the general public: Google Glass never caught on, noted Amy Webb, Founder and CEO of the Future Today Institute, an adjunct professor at NYU Stern School of Business and author. She moderated the speculative Media 2020 panel. It doesn’t change the fact that everyone already, to some degree, uses Google Glass’s functionality, said John Borthwick, Founder and CEO, betaworks. New York is a city of walkers, all reading their phones. “The technology will improve,” Jeannette Wing said. “What will be the killer app? For sure, the medical profession is looking for it.” Then, we make the app disappear, said Francis Shanahan, Senior Vice President of Technology at Audible. The industry is advancing to interact with technology through voice (Alexa and Echo), swipes and other methods beyond QWERTY. (Respectfully) Ellen Ullman battles the pink robots Digital is how we access information and, therefore, how we make decisions. It’s “moving into our environment,” for god’s sake, free from its screen form to enter our human dimension in some eventual, irresistible mode. We’re immersing into technology, and it’s immersing into us. So, a conversation on ethics is understandable. What’s an unreasonable limit on free speech and enterprise, and what’s responsible? Facebook is singularly influential and may be cordoning two billion monthly users into opinion silos, but it’s also just a platform. Right? Except, it “thinks.” Artificial intelligence learns from experience apart from a human engineer, said Ellen Ullman, who has programmed since the 1970s, and authored Life in Code. She delivered the NYC Media Lab keynote conversation with author and podcast host Manoush Zomorodi. The machines are biased, limited because they draw on information from the past.
Humanity needs to give more guidance to software engineers creating algorithms. Engineers should provide more guidance to the machines. “Think about the human impact of the things you’re making,” Manoush Zomorodi said. The goal should be a more discerning consumer, said Francis Shanahan, SVP, Technology at Audible later during Media 2020. “I wouldn’t expect a person to be burdened with the entire responsibility,” Jeannette Wing said. “On both ends (engineering and consuming) there are human beings.” Photo: South Park. Second photo: from the NYC Media Lab presentation of “Calling Thunder,” immersive media gathered around Manhattan’s natural history using 360 soundscapes and illustration. “Interactive mobile and WebVR experiences coming soon,” promises the website, www.unsung.nyc.
AR, AI, Ay Yi Yi: 2 Takeaways from NYC Media Lab 2017
0
ar-ai-ay-yi-yi-2-takeaways-from-nyc-media-lab-2017-15a2d59be264
2018-08-04
2018-08-04 21:50:37
https://medium.com/s/story/ar-ai-ay-yi-yi-2-takeaways-from-nyc-media-lab-2017-15a2d59be264
false
685
Sharing concepts, ideas, and codes.
towardsdatascience.com
towardsdatascience
null
Towards Data Science
null
towards-data-science
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
TDataScience
Digital Media
digital-media
Digital Media
2,015
Sara Harvey
Comics, Brooklyn, Doberman pinschers http://saraharvey.xyz/
4157e45138b4
saraharveynyc
166
205
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 18:44:36
2018-02-18
2018-02-18 18:07:08
1
false
en
2018-02-20
2018-02-20 14:27:18
10
15a447f34f9
4.350943
4
0
0
So, this is my first post, so I should probably introduce myself briefly. I am currently studying at Loughborough University, UK under the…
5
Predicting building energy consumption with tensorflow So, this is my first post, so I should probably introduce myself briefly. I am currently studying at Loughborough University, UK under the London-Loughborough Centre for Doctoral Training in Energy Demand. Before that I did stuff in different industries, mostly related to energy, in short. If you really are so interested, see my profile on LinkedIn or Twitter. Skip the next paragraphs if you want to jump straight to the technical coding part. One of the things that has struck me during these past years is the widening gap between computer science and its application to other engineering disciplines. The simple reason seems to be that computer science has progressed so rapidly that engineers haven’t been able to keep up. We do not really seem to know how to use the opportunities properly, while computer science engineers do not know what to use their skills for. And we end up with this situation where our brightest minds and cutting-edge technologies are used to maximize the impact of internet ads… But let’s not go there now. What I want to do is to bring computational techniques a bit closer to “real life” and see how to realize their value, and what could be a better application than energy use, with all its complexities? So, more or less as a hobby, I have been developing my hands-on skills in utilising different digital technologies and wish to share what I have been up to, and hopefully share ideas with like-minded people. First of all, I am a mechanical engineer by background without extensive experience in coding, so I apologize for all the messiness and errors which I will be making. As you will notice, the approach is to get stuff working instead of focusing on the nitty-gritty. I hope that this approach provides some value, especially for those who are (like me) not so into the “under-the-hood” things. Applications first! Let’s get started. What I tried to do was to use a Python package called tensorflow to create models of the energy consumption of a building. I wanted to have a look at whether I could use tensorflow to create a simple trainable model for predicting the energy consumption of a building from outdoor temperatures. I basically followed the tensorflow tutorial structure but tweaked the code to better suit the energy domain. So really nothing revolutionary or advanced, but definitely a start. A big thanks also to Arthur Juliani for his tutorials; they helped me a lot. The full code and the files are available on GitHub. My next plan is to make the code more modifiable and versatile. I also might try to create a learning controller by using a Raspberry Pi 3 along with tensorflow. I want to see how easy it would be to create an IoT device embedded with machine learning capabilities for the energy-specific domain. Feel free to contact me if you have any inputs, critique, ideas or comments — I would be happy to discuss further! Below is the main programme, which calls the functions in pred_func.py during various stages. The workflow is pretty intuitive: first the data is handled and turned into inputs that tensorflow can use. The out-of-box estimators provided by tensorflow are then used to train two models, a linear regression model and a deep neural network with two hidden layers. After training, the model is evaluated. Finally, outputs of the evaluations are printed and plotted. The main programme. The pred_func.py defines the functions needed in the main programme.
The read_data function reads the data from the CSV files and returns the testing and training data sets. The training and testing data are created from our original data by randomizing the data and then splitting it. Functions train_tf and evaluate_tf are more interesting; they basically define the training and evaluation methods used. In tensorflow, “features” is the input to our model, i.e. air temperature in this case. “Labels” are the outputs, gas consumption. In training, data is shuffled, batched and used to map the input to the outputs. Evaluation works similarly to training, except that only features are input and the model is then used to predict the labels. For now the basic tensorflow methods are used to keep this straightforward. A cool feature of tensorflow is tensorboard, which can be used to visualize the models created in tensorflow. Just modify the following code to point to your directory with the python files, run it and open the page https://your_hostname:6006 in your browser to check it out. It provides interesting visualizations regarding the structure and training of the models. So, how did the models do? The DNN model was run with a learning rate of 0.3, the batch size was five for both models, and regularization was set to zero since I wanted to demonstrate the difference to traditional linear regression. Interestingly, the DNN model predicts a significant step change in gas consumption when temperatures are low, and can thus provide such features for a data-driven modeller. If the underlying data were more interesting we might see more features in our model, but since it was pretty straightforward we only see this one major difference from the linear model. However, as is apparent from the figure, not much data existed for those very low temperatures, and since regularization was set small, outliers can sway the model quite a lot; the models really are only as good as the data used to train them. Thus the results should always be interpreted with some skepticism, especially when data can be extremely noisy or hard to acquire, as is typically the case with buildings. We have built a very simple gas consumption predictor using tensorflow, so what? The point was to show that applications of AI and machine learning are not as far away as some might think. I believe these applications will also break through to the energy sector, where quantities of gathered data increase rapidly. This applies especially to the demand side as it becomes an active component of the system. This development is already driven forward by regulation which is making aggregation of flexible demand, real-time trading of energy and deeper integration of small-scale renewables increasingly common. I hope that these developments are used for the common good of us and the environment, by creating a healthy energy system characterized by words like efficient, democratic and reliable. But equally, machine learning will probably have a role in any future scenario, including the more ominous ones. From a practitioner to practitioners, -eramismus
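Since the embedded gists are not included in this dump, here is a minimal sketch of the workflow the author describes, using the TF 1.x estimator API of that era. The file name, the column names (air_temp, gas), the hidden-layer sizes and the epoch count are assumptions; the batch size of five and the 0.3 learning rate come from the text:

```python
# Sketch of the described pipeline: load and split the data, train two
# out-of-the-box estimators, then evaluate both on the held-out set.
import pandas as pd
import tensorflow as tf  # TF 1.x

# read_data: load the CSV, randomize the rows, and split into train/test.
data = pd.read_csv("building_energy.csv").sample(frac=1.0, random_state=0)
split = int(0.8 * len(data))
train, test = data.iloc[:split], data.iloc[split:]

feature_columns = [tf.feature_column.numeric_column("air_temp")]

def make_input_fn(df, shuffle, epochs):
    # train_tf / evaluate_tf: map the feature (outdoor temperature)
    # to the label (gas consumption), shuffled and batched in fives.
    return tf.estimator.inputs.numpy_input_fn(
        x={"air_temp": df["air_temp"].values},
        y=df["gas"].values,
        batch_size=5, num_epochs=epochs, shuffle=shuffle)

linear = tf.estimator.LinearRegressor(feature_columns=feature_columns)
dnn = tf.estimator.DNNRegressor(
    hidden_units=[8, 8],  # "two hidden layers"; sizes are an assumption
    feature_columns=feature_columns,
    optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.3))

for name, model in [("linear", linear), ("dnn", dnn)]:
    model.train(input_fn=make_input_fn(train, shuffle=True, epochs=50))
    print(name, model.evaluate(input_fn=make_input_fn(test, shuffle=False, epochs=1)))
```

For the TensorBoard step, estimators write their summaries to model_dir when one is passed (e.g. model_dir="./logs/dnn"), and running tensorboard --logdir ./logs serves the visualizations on port 6006.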
Predicting building energy consumption with tensorflow
45
predicting-building-energy-consumption-with-tensorflow-15a447f34f9
2018-05-23
2018-05-23 09:06:21
https://medium.com/s/story/predicting-building-energy-consumption-with-tensorflow-15a447f34f9
false
1,100
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
eramismus
PhD student in energy demand looking for the wider perspective.
92b0a829daaf
eramismus
2
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-06
2018-03-06 15:11:31
2018-03-06
2018-03-06 17:52:05
1
false
en
2018-03-06
2018-03-06 17:52:05
0
15a5499256c3
2.275472
2
1
0
Civil Engineering
5
The Evolution’s Revolution Futuristic Approach — Part 1 Civil Engineering Revolution has always shown miracles in every industry, and when it comes to civil engineering, it has witnessed massive changes these days. The development of technology and the application of expert systems have led to a lot of achievements, mainly in structural design, project management, construction techniques, decision making and prediction, road and bridge health detection, and many such fields. The most talked-about and worked-upon area, ‘Artificial Intelligence’ (AI), has also waved its magic wand in the field of Civil Engineering, and so have Machine Learning (ML), Artificial Neural Networks (ANN), Genetic Algorithms and 3D printing; these may sound miles away from Civil, but they have enabled complex structures to be constructed in no time. Digging high-tech software out of the ground has given geotechnical engineering a new approach. You would be surprised, but these techniques are used right from ‘Surveying’ till completing the ‘Execution and Quality Check’. The word surveying only brings to your mind GPS, theodolites, staffs, plane tables etc.; with all due respect to our old evergreen methods, when we say futuristic approach, it’s now time to embrace the new ones. Modern-day technology has an all-new ‘avatar’ for surveying. Unmanned Aerial Vehicle surveys like drones with inbuilt surveying software, multimedia and web GIS, vision LiDAR technology, SAR, INSAR, hyperspectral remote sensing, and digital and 3D mapping are the new technologies now used in the industry. Planning, too, is nowhere behind. A ‘Genetic Algorithm’ based multi-objective optimization model for scheduling of linear construction projects is used, which minimizes both project time and cost. In the field of Structural Engineering, Artificial Intelligence (AI) based techniques are found particularly suitable for modeling complex structures and the complex behavior of different materials, including damage factors and prediction of slope failures. An Artificial Neural Network (ANN) has been developed to predict the 28-day strength of a normal or high-strength self-compacting concrete in far less time than the conventional method. Prediction of Maximum Dry Density and Confined Compressive Strength of cement-stabilized soil can also be done by AI techniques. Redefining accuracy and precision, these modern technologies and techniques are more result-oriented, deliver within hours, and are hassle-free. But the question is: how much awareness do we have of where the world is heading? We often come across engineers who state with swag that they are so busy with engineering that they do not even know who their neighbor is. But the point is, are you investing your time in the right direction when you say you’re “busy because of engineering”? Apart from what we study from our textbooks, do we really know the latest trends and technologies the industry uses? Well, we all know what our answers would be! But now, instead of talking about where we are going wrong, I believe we should talk about how we can overcome those incapabilities, and the prime step towards it is to channel your thinking in that direction. Try, initiate and inculcate within yourself the ability to think differently. Think out of the box! Read! There are so many, literally so many new things coming up in the industry every day that you’ll be astonished! To be continued…
Futuristic Approach — Part 1
15
futuristic-approach-part-1-15a5499256c3
2018-06-19
2018-06-19 13:49:39
https://medium.com/s/story/futuristic-approach-part-1-15a5499256c3
false
550
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Shreyas Tannu
Founder of Finite Elements, endeavoured in Civil engineering, philanthropist at heart and a passionate writer ...
141a0b6c9c45
shreyas.tannu
11
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-10
2018-09-10 18:47:52
2018-09-10
2018-09-10 20:12:39
0
false
en
2018-09-11
2018-09-11 02:21:37
0
15a69aabb915
1.30566
1
0
0
My name’s Rahul, and I’m a second-year computer science student at the University of Toronto. My interests lie in machine learning and data science.
2
Hello My name’s Rahul, and I’m a second-year computer science student at the University of Toronto. My interests lie in machine learning and data science. In my free time, I’m probably on freecodecamp.org, playing tennis, or watching Impractical Jokers. I didn’t have much interest in computer science as a kid, but that all changed in grade 11. I took an introductory CS course to fill my schedule, and didn’t expect much from it. However, as soon as I got my first project I was hooked. The project was to make any game we wanted using GameMaker, and what I came up with was a platformer. The freedom that the assignment allowed felt unlike other classes I had taken prior (besides Art); I could make whatever I wanted, more or less with no constraint. Then the class moved on to more open-ended assignments, but this time in a better language, C# (sorry GameMaker). At home, I’d learn HTML and CSS, as making websites seemed like another fun creative endeavor. The year went on, and I had made my first site. When the time came to pick grade 12 courses, I knew I’d be taking CS again. Gr12 CS was more of the same course-wise, but on the side I had switched my gaze to a field I found more interesting: AI. I tried (and failed) to grasp machine learning, and felt like this was the field I’d like to work in. Then I found out it pays well also, so that doesn’t hurt. Fast forward to now: I chose to specialize in CS, and am making sure to take electives related to statistics, as I want to build a strong mathematical foundation for the stuff I’ve been learning on the side (easy-ish to use sklearn, harder to understand what goes on under the hood). I started this blog for a course requirement (CSC290), but to be honest it’s enjoyable to write. My goal is to better articulate my thoughts, and I’m excited for this course to aid me in that aspect. ily
Hello
1
hello-15a69aabb915
2018-09-11
2018-09-11 02:21:37
https://medium.com/s/story/hello-15a69aabb915
false
346
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
rahul n
null
eaa198d94a3f
rahulnakre
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-23
2017-10-23 22:52:47
2017-10-23
2017-10-23 23:36:38
1
false
en
2017-10-23
2017-10-23 23:36:38
0
15a7b6e25c77
2.833962
1
0
0
AI and education have been linked since the first formative experiments in the AI revival of the 1970s. Those first efforts yielded some…
4
An AI Tutor for LifeLong Learning One of AI’s longstanding goals has been to create a Socratic tutor for lifelong learning. AI and education have been linked since the first formative experiments in the AI revival of the 1970s. Those first efforts yielded some remarkable innovations, such as the idea of misconceptions as inappropriate frameworks of understanding built naturally out of the knowledge-construction efforts of the human mind. Knowledge representation and curriculum sequencing became essential building blocks of AI tutors. Another powerful idea was that of qualitative simulations, which introduced students gradually to a more detailed version of reality, making it easier to understand and building on existing knowledge even when the details were somewhat inaccurate. The most powerful insight was that it was necessary to have a detailed description of a student’s knowledge to leverage it forward in something taken from Vygotsky: the zone of proximal development (ZPD). Within this ZPD new knowledge could be added easily without the creation of new misconceptions. Out of this understanding there arose a powerful new idea: a Socratic AI tutor that accompanied a student from his or her earliest days throughout life, with a complete record of their knowledge and its advances. Everyone thinks they are competent to teach, and also that they can have a powerful effect on growing minds. The truth is far from this. Kids learn in remarkably diverse ways. Think about learning vocabulary. A kid learns about 2 or 3 words a day but encounters less than one new word a day. How does that work? Every word she hears, or sees, or reads connects to every other word she already partly knows and increments its meaning. So, the more words you know, the more words each new word affects, even if only in small ways. These tiny increments add up quickly, like compound interest, to become an incredible force. If someone starts off slow, they are forever behind. This is an insight that neural networks gave us years ago, and we still don’t know what to do with it. What other insights are lurking in AI that human teachers have never guessed? Lots. How could any individual, no matter how much of a helicopter parent, know their child thoroughly, especially after the child begins to have friends and a life outside the family? But a Socratic tutor built to listen carefully and with a huge fund of knowledge, accompanying the child throughout its life like a pair of intelligent glasses, could realistically provide such a Socratic guide. AI has given us a powerful new understanding of knowledge and its organization. Words can be arranged into semantic hierarchies and dictionaries of meaning on the basis of similarity and generality. These semantic networks can be computed from libraries of text, automatically. Images can be composed and decomposed in a similar way. Ditto for sounds and tastes and touch. We can even add supernormal senses, such as radar and telescopes and microscopes, to the mix. And of course simulations and theories of every kind. There is so much that is possible from our current understanding. Imagine how much more will be possible as we advance these new tools. AI can form patterns of learning and sequences of presentation that we have never experienced. AI is all about learning from experience. With a powerful AI we could theoretically simulate millions or billions of different learning sequences in history or physics or psychology, and see which of them prepares a student best for adulthood.
The standard curriculum, even in as clear-cut a domain as arithmetic, has never been studiously explored to determine a best path that avoids all the standard misconceptions. An AI that provides this path can check for misconceptions and provide guidance that a human teacher could never dream of. An AI could become a Socratic tutor that follows a student throughout his development, providing guidance informed by a full history of his level of development and of where his next advances could most readily be made, in a deliberately contextualized way, just in time as he or she needs it. Yes, we are not there yet; but neither do we have the vision and funding to begin this fundamentally essential project, if humans are to help their AIs build a better world.
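The compound-interest claim above can be made concrete with a toy model; the daily growth rate below is an arbitrary illustrative assumption, not a measured figure:

```python
# Toy model: each day, the known vocabulary enriches itself by a small
# fraction r, so growth compounds and an early deficit keeps widening.
def vocabulary(start, r=0.002, days=5 * 365):
    v = start
    for _ in range(days):
        v *= 1 + r  # every encounter increments many partially known words
    return v

early, late = vocabulary(500), vocabulary(400)
print(round(early), round(late), round(early - late))
# The ratio stays 5:4, but the absolute gap grows roughly 38-fold
# over five years: the "forever behind" effect described above.
```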
An AI Tutor for LifeLong Learning
5
an-ai-tutor-for-lifelong-learning-15a7b6e25c77
2017-10-23
2017-10-23 23:36:39
https://medium.com/s/story/an-ai-tutor-for-lifelong-learning-15a7b6e25c77
false
698
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
Joe Psotka
Joe is a bricoleur, trying to understand the complexity of the place of values in a world of facts, using only common sense.
1f62ed7c4bf1
joepsotka
80
180
20,181,104
null
null
null
null
null
null
0
null
0
4ba8f1c8199f
2018-06-20
2018-06-20 11:16:19
2018-06-20
2018-06-20 11:25:47
2
false
it
2018-06-20
2018-06-20 12:19:50
1
15a8da73d12d
0.896541
0
0
0
The speed at which Artificial Intelligence (broadly speaking) is advancing is monstrous.
5
Machine Theory of Mind The speed at which Artificial Intelligence (broadly speaking) is advancing is monstrous. Many people have not the slightest idea (they even confuse automation, Machine Learning, AI, etc., all thrown into a single, highly questionable, steaming cauldron). When I talk about monstrous speed I mean, for example, this: there is now a Machine Theory of Mind, with the ability to abstract and represent the mental states of other agents, including desires, beliefs and intentions. An algorithm (called ToMnet, primarily based on Deep Reinforcement Learning) that autonomously learns to model other agents and their mental states. A machine, however relatively simple for now, that does this typically human thing. You may be thrilled or you may tremble: both are rational reactions. In any case, here is the paper (remarkable; I warmly suggest you read it): https://arxiv.org/pdf/1802.07740.pdf And, please, note the company all the authors come from.
Machine Theory of Mind
0
machine-theory-of-mind-15a8da73d12d
2018-06-20
2018-06-20 12:19:52
https://medium.com/s/story/machine-theory-of-mind-15a8da73d12d
false
136
We publish stories worth reading on quantitative investing and data science (especially in Finance), and stuff like this. And, we have beers.
null
null
null
Qwafafew-Italy
null
data-drinking-and-thinking-abnormally
DATA SCIENCE,FINANCE,FINTECH,BIG DATA,DATA ANALYSIS
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Raffaele Zenti
Born on dry land by mistake, I am better at sea but I run in the mountains. Data and Data Science for a living: creator and founder of Virtualb.it and AdviseOnly.com.
75d3483836ae
RockZen
268
255
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-28
2018-05-28 23:34:58
2018-05-29
2018-05-29 00:13:44
0
false
en
2018-06-04
2018-06-04 15:12:50
0
15aabd1f0554
8.449057
0
0
0
Disclaimer: The following views are my own and not associated with any company or business owner. Also, I don’t write like most people. I…
5
AI — You But Better Disclaimer: The following views are my own and not associated with any company or business owner. Also, I don’t write like most people. I try to hold on to that. PSA: So I’ve been absent for a bit; I needed a social media break from my own accounts, but let’s forget why, since if you really want to know we’ll chat about it at some point, and more importantly part of it was research. I’m relaunching my LLC soon. I finally have a purpose to launch a company that means a lot to me. This post is nothing about me or directly about what I do. It’s about you. There are a lot of changes to the way social media and international digital communication work, particularly with businesses, like with GDPR for instance, which just went into effect a couple days ago; but digital advertisers and people who do business online and/or advertise online, as well as their marketers (some of whom are obviously on this platform and are welcome to elaborate in the comments), have been preparing for the deadline for months now. That’s what the emails asking you to rejoin email lists are for (only rejoin the ones you want. This is a one-time gift: they all need to reauthorize permission, and they will swindle you into doing it for a fucking frisbee.) These changes happen slowly without your notice as you become entrenched in the new look and feel of FB etc. GDPR? That’s one good thing. Private citizens should be celebrating. It cost businesses a tiny amount to adjust, but don’t worry, they’ll live. You heard bitching and moaning, but GDPR was to protect EU citizens’ right to privacy. Something we don’t have and should push to have. Immediately. I lost the majority of any digital commerce that would ever work with me by saying that. Fuck ‘em. I like working for honest businesses, artists and professionals that are transparent with their marketing and what data they collect. This is not an ad. I can’t boost it as an ad considering it shits all over Facebook and Google, two service providers I plan to continue using. This is a public service announcement that I will make public and shareable if you’d like to share it, but this is for people on my friends list mostly. Please read it. I am going to be advertising the businesses I’ve been working hard at in the coming days. I mention that again because I left myself open to every marketing technique I know these value-less or unknowing “marketers” use. I differentiate because someone during my research called me out for calling them a sell-out on people of color, when they were just trying to get by and didn’t know the services they used gathered data given to other companies. Hell, I even used some without knowing it, because I didn’t always read the privacy policy through to the end. But I documented a lot of them to demonstrate how your data is being used in the simplest way I know how (which means someone else should write it). However, before I can do that I need everyone to remember Facebook is a business. It’s why all my clients have at least one or more SM accounts, and I push them to have the ones they need for their business. The platforms make *a lot* of money, as do your other social media accounts. Not just on ads. They do this with your data, which they sell via third parties. Not unidentifiable codes like they tell you. Your IP address linked to your name, and the word that pushed me into this research hell: your Avatar. It’s old news for digital marketing folks, something used for some time now but in less disastrous ways.
Social media, particularly Facebook, has the most identifiable info on you. Your IP is linked to it. No, your proxy server didn’t protect you, trust me. I’ll explain in the full data in another post. Anyway, they have the largest amount of personally identifiable data, and that is now the core spend for digital advertisers. Do you know how much money of my own I pour into services that help me advertise for my clients and manage their accounts? Or how much they (not all) spend on their ads, because I can niche down the audience that sees it to exactly who would want their product using FB and IG native ad tools? You spend based on views and clicks, so I save clients money by making sure those are the most likely buyer eyes to land on their ad. I’ve contributed to this indirectly and unknowingly. Those lucrative businesses and the social media platforms are not happy with privacy laws. They lose a lot of money the more privacy you gain. But they aren’t the worst of the bunch. You may be under the illusion these companies only possess what you choose to share (i.e. name, city, likes, email, life events, moves, alma mater, grade school, graduation date, age, login times, active hours, connections, etc.), but no, they pool this data to predict your behavior: how you behave online and off. That’s where shit gets extra heinous. What ads you click on, how you communicate, what groups you’re in or were randomly added to (more on this below; it is the reason for this post, prior to posting the full data I still need to organize. The info was so slimy I decided to give it to the ACLU before publishing it myself with my launch of an LLC unrelated to this personal post. It was so much I figured there has to be something they can find illegal there, but we have a barely existent consumer protection bureau and no rights to individual privacy over crony capitalism, so I doubt the below activities broke any laws.) See, if you read the privacy policy and let yourself be open to every tracker on every website, tracking how they behave, including what LLC owns them, like I did so I could inspect the site code to find the names of the trackers not listed under the SSL, you realize there are hundreds of LLCs tracking your data. Not just for analytics for better site performance, either. If you’re following before we get serious: your IP linked to your social media, identifying you > tracking your behavior across all your demographic sections > following your entire web browsing history. Some of you think I’m joking; the privacy policies for the LLCs (which I’ll post if need be) clearly state that by using the service, not signing or consenting, just using, you agree to let them track you off their website onto the next pages you go to > the clauses in nearly every privacy policy have you consent that by using a website or app you are clearing a company to share that data with third-party services or companies they don’t need to tell you about or give the names of, but hopefully some folks dig deeper. So here we are. Tons of companies tracking you to create a digital image of your person so they can “Make your online experience custom and special to your needs, likes and wants 🤗.” But honestly, most marketing folks, developers and business owners believe that’s as far as it goes. Not too terrible, but kinda. But don’t put yourself in a bubble. You’re basically living out what advertisers tell you to buy, read, and be interested in based on your digital… avatar. That’s a Sanskrit word being used so incorrectly it’s gross to say.
Appropriation. Not cool. The really ugly: GDPR is a great thing for the EU, and it has in turn protected EU citizens from some pretty heinous ways of trapping you in a data bubble. Transnational companies had to adjust their tracking policies across the board to still operate in the EU. Thanks, guys. But, unless you had all your cookies blocked, were using a VPN client, were using Tor (open-source heroes needing donations) with your VPN client, had no passwords saved or auto-logged-in (Chrome being the worst with Google login), and had your browser defaults changed to block cookies (they still don’t): you have tracking-ware on your comp and phone (even if you have those things, chances are there is still aggregate data on you somewhere. Actually, there definitely is.). Also, no, clearing out your temporary files does not get rid of them. Going back: why are there so many bots collecting your data and storing it? Well, it turns out that when you follow who owns the LLCs or who funded them as silent partners, the same way as if you followed a political candidate, it’s actually just a handful (literally, I found fewer than 5) of entities, about six degrees of ownership removed, funding or acquiring the hundreds of smaller companies and the data they gather, which links your identifiable information with the rest of it and your IP and social media accounts. So, yes, they know you by name, ethnicity, gender, what you buy, who you’re fucking, whatever. All somehow linked back to these few companies I’m not dumb or rich enough to name. But I’m sure anyone smart enough can figure it out or name them. Market Research. Used once as a nickname for me in an odd marketing group; only that’s what I really was doing, but in reverse, for consumers not businesses… All of us are consumers. Only a few are business owners. Fewer still are ethical business owners. Why is that worse than creating your own digital online “personal bubbles/echo chambers” to make you see the news you find interesting and agreeable with your worldview, and see ads for products you’re most likely to buy, completely different from what your neighbor sees? Because they don’t give a shit about any of that specifically. They’re using it to build AI (artificial intelligence) that you can’t tell apart from real humans. They proved on a stage that customer service jobs will be a thing of the past pretty soon, and shortly after, algorithms that have sufficiently learned consumer behavior will take mine. Like Sophie, or Google’s latest: Duplex. “Conversational AI for Better Customer Service”. That was built with years of aggregated data from multiple sources, from consumer behavior, social media demographics and usage, to speech recognition, and so much data you could build… well, a human with it. Only it won’t ever serve this much body, grace and face. How do you protect yourself? You can’t. You can send a letter to every company you touched online asking what data they collected about you, but that doesn’t matter; it’s the combined data from multiple sources that AI is built from. Not that AI is all bad, either. I think the current trend is. Read privacy policies. Demand your legislators 1) Protect human rights — please learn about the tearing apart of families, children from their parents, in deportation camps in the US — and 2) Protect your data privacy rights. There’s people out there fighting the good fight. But if they’re being tracked with zero privacy, then who needs a dictator?
Just put a clown in office to keep people fighting each other about issues we never seem to move one way or the other on. We have awesome robots now 👍🏾. Oh, and 3) personal plug: hire humans. Talk to humans (many don’t know they’re currently interacting with bots already in many places, as flawed as they are.) Listen to and read human stories, and fucking hold your right to privacy over crony capitalism and convenience. Part 2 coming soon, with the receipts that are already out there for you to see. You may read it written by me or by someone else just trying to get people to listen. No one is subverting the outspoken; they’re distracting the people we’re trying to have listen. Oh, and btw: Facebook lets you add anyone on your friends list to a group. They’re a member automatically, without ever being asked for their permission. It gives the smaller advertising companies making a buck off selling your data access to more info about you. Go scan what groups you’re in (most are shams) and leave all of them you don’t actively participate in, that have no ownership to confront, or that you don’t really want to be in. Also, unlike business pages you don’t recognize. Love, Probably the Dumbest, Poorest 2018 Digital Marketer Ever. PS: I once told someone I confronted that I’d hold my values over my profit. Told you I wasn’t being holier-than-thou. My new marketing services launch with my dad’s name, the way he communicated: person to person, human to human. Grassroots, community-level connections with direct access to people to talk to, and groups based on shared interest. Not ninja adds to groups or tracking software to target your… avatar. I hope other marketers out there want the same and are doing it already. For as long as there’s a living wage in it. Immigrants want freedom, a chance. Except they land on our shores or at our borders to be treated like animals, and those who survive realize they came looking for a job only to find out they’re being blamed for taking all the jobs that don’t exist anymore. Immigrants aren’t taking your jobs, you idiot. Unregulated capitalism is.
AI — You But Better
0
disclaimer-the-following-views-are-my-own-and-not-associated-with-any-company-or-business-owner-15aabd1f0554
2018-06-04
2018-06-04 15:12:51
https://medium.com/s/story/disclaimer-the-following-views-are-my-own-and-not-associated-with-any-company-or-business-owner-15aabd1f0554
false
2,239
null
null
null
null
null
null
null
null
null
Marketing
marketing
Marketing
170,910
Khanja MaDemons
Digital Marketer, Story-Teller. Grassroots Organizer. Artist. Hinduism Scholar. Writer. QPOC. Drag Queen. Activist. Human. Demon. patreon.com/hausmademons
f0db658b8a90
pritesh.pillay
25
68
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-29
2017-10-29 06:49:30
2017-11-12
2017-11-12 17:56:16
0
false
en
2017-11-13
2017-11-13 06:52:37
1
15abe8d82bbf
6.728302
5
0
0
The universe has been here for 14 billion years, with about 10²⁴ stars. It’s a number humans cannot comprehend, because we never deal in…
3
Alien dashboard for humanity The universe has been here for 14 billion years with about 10²⁴ stars. It’s a number humans can not comprehend because we never deal in such large numbers. So an easy way to understand it is to divide it by the number of humans we have on this planet. So, even that number is about 30 trillion stars per person on this planet. What that means is if we want to explore the universe in the future we would only have 1 person per 30 trillion stars to scan it or represent humanity as an ambassador. On the contrary, it would be difficult to imagine less than one alien civilisation per trillion stars out there. Assuming these exist, then the chances 50% of them running more than a million years ahead and the other a million year behind in technology is very likely. What would an advanced civilisation look like ? The best way to judge that is to see where our technology is headed in a million years and look at evolution of species in respect to core building blocks. Energy revolutions Humans started to control fire a 125,000 years ago. They first used it to transform the food that made human brain evolution possible and took humans to a different path as opposed to other animals. You will notice no other animal species cook their food and thus lack the energy to fuel the intelligence that humans have. Then, humans learnt to tap other animals for external energy like horses, elephants, bulls and eventually settle as an agricultural society. Then next breakthrough — was when humans learnt to channelize energy through steam engine that led to the industrial revolution. It led to massive substitution of human energy with industrial machinery. The fourth breakthrough — came in with the invention of electricity. Humans could now regulate and flow energy to long distances that were earlier not possible. It provided variable control on energy for the first time. We could bundle basic intelligence inside electricity flows. The fifth revolution — came in form of atomic energy. This revolution as such is incomplete due to the negative consequences it posed to the species. Otherwise the real promise of atomic power is unlimited portable energy at will. The sixth revolution — in energy is solar. This allows the species to truly decentralise. We do not have to carry the fuel or electricity to remote places. Now people and devices can be anywhere and stay unconnected. Homes, vehicles, or devices can stay remotely powered without limits of fuel capacity. The next revolution — wireless energy. Where energy can be remotely fed to devices. The future — quantum energy, where energy could be wirelessly transported to remote places without distance limitations. Communication revolutions Oral communication is the most native form of collaboration across all species. Humans were the first and the only ones to start written communication about a few thousand years ago. This allowed humans to communicate over time and location. A huge advantage that led to all the scientific progress. The ideas now became additive over time. One could record history and learnings for generations to come. Communicate over distance without degradation of the message. Accumulate a volume of related ideas. The next big revolution was mass media (1-to-many)— printing press allowed humans to communicate over masses and over time. It allowed ideas to spread more uniformly. It allowed the society to stay in sync every morning through the news paper. Students to have access to a common body of knowledge. 
Basically it allowed humanity to get in sync for the first time. The third revolution was real-time long distance communication (1-to-1) — telephone allowed 2 people over a long distance to sync up in real time which would other wise take months. The fourth revolution was wireless realtime mass media(1-to-many) — radio was again a mass-media revolution that allowed large populations to stay tuned in real time but over a limited area. The fifth revolution of visual mass media (1-to-many) — television transformed the society by long range visual communication that could be also real time and for mass consumption. Video was part of the same space but allowed communication over time with mass public. The sixth revolution was voluminous data exchange (machine-to-machine)— where large amounts of data could instantly be exchanged allowing real time commerce and markets to expand globally. This included cryptography as an added means of trust in this layer. The seventh revolution wireless — where one could talk to any one on the planet while roaming freely. It was further adapted to all other forms of communication be it mass media or machines or people. The eight revolution the many-to-many communication over internet — this allowed for the first time for public to engage in a many-to-many form of communication as apposed to 1-to-many or 1-to-1 in all other forms. It allowed instant collaborative knowledge creation across distance and time in a decentralised form. The ninth revolution many-to-one — block chain allows permanent decentralised record of consensus to emerge between unlimited remote stake holders. Causing species wide consensus and governance possibilities. The tenth revolution many-to-one over time — smart contracts is a consensus about a future event and its outcomes.This allows species wide agreement on future events to be hard coded. The interstellar communication — quantum communication will allow us to extend real time communication over inhuman distances measured in light years. This is necessary for humanity to be space faring. It will allow real-time collaboration during space missions and remote colonisation. Remote viewing — quantum communication one day would allow us to see anything from anywhere across space and perhaps across time. Communication over time — as we can see most communications have been across space or time but only allowing real-time and from past to future. One could read about the past and communicate to future readers. As our technology progresses, quantum communication would further allow future events to be made more certain by dictating the past. This is the holy grail for humanity to become future proof and timeless. This can be achieved by information leaks into the past creating a feedback loop. Intelligence revolutions To understand intelligence one needs to appreciate knowing reality as an intrinsic component. Without the correct facts even the world’s largest super computer would look like unintelligent. And also without the end goal even google maps can not tell you directions even if know all the facts and has all the cloud power. So intelligence = computing + reality + intention. It all started with humans discovering fire and achieving the ability to process complex tasks. It started recording facts and beliefs that was a personal reality. It used this knowledge to pursue the primary intention of maximising survival. 
Then came shared reality, where communities (enabled by advances in communication) started holding common facts and intentions. Scientific reality was the first breakthrough: a mass agreement on testing and assimilating facts and knowledge into human civilisation. Mass reality was a result of mass media and mass communication technologies; a shared state of human intelligence evolved, today mostly in the form of the internet. Market intelligence is the pricing of known facts and intentions, resulting in the delegation of autonomy and resources to drive future intentions using crowd intelligence. Broad computing milestones: Machine computing was invented during an intense world war, as a survival instinct, to break the communication codes of the warring parties, and there has been no looking back since. Auxiliary machine intelligence: think of smartphones and the cloud as means of having unlimited portable machine intelligence anytime, anywhere; they act as auxiliary interfaces to the singularity, the mass intelligence of humanity. Advanced auxiliary machine intelligence: machines that can accept wider input from reality and process broad bands of data, yet remain devoid of the ability to create their own intentions; autonomous cars are an example. Fact-deduction engines: machine learning allowed computers to quickly deduce facts, given an intention, using mathematics. The future of intelligence: Autonomous intelligence, where a machine has the option to choose its intention autonomously. Like humans, it will be subject to mass agreements and protocols as the bounds within which it can operate, so that we can co-exist. Fact markets, wherein every fact, piece of knowledge or news item can be bet on. The primary purpose of humanity is to increase intelligence by uncovering the true reality. After the intelligence revolution and 3D printing, every research project or news item will have a betting market; evidence and research will be rewarded based on quality and decisiveness. This may counter the fake news that today threatens human survival itself. Quantum intelligence is a possible phenomenon that is controversial today but much practised by great leaders: the power of intention influences the facts of reality. It is accepted that reality is built on quantum phenomena with no hard underlying past or facts; every fact is a probability and an outcome of intention and observation, and past events remain variable, determined by the future. Where are all the aliens? If there are 30 trillion stars per person on this planet, then there could well be at least one alien civilisation per person on this planet. And with our rate of growth over the last 100 years, we must really have caught their attention. With that many civilisations out there, and half of them a million or even a billion years ahead in technology, they could predict exactly how fast we would develop and perhaps how fragile we are today. It is very unlikely they would be hostile at all, because any species that is hostile and develops technology would end up self-destructing. For any species to survive decentralised technology, it would have to be very cooperative, compliant and caring towards others. The chances are that all these advanced civilisations already possess language translators far more advanced than ours. They already know everything about us, but they would not talk to us, share with us, or even inspire any technology, even by mistake, until we learn to co-exist and, especially, learn to care for species less powerful than us.
They can see how archaic and cruel we are to animals and what we can do to less fortunate species. We would hardly be called civilised by their standards, and we are probably quarantined as barbarians under some inter-galactic mandate. If we want first contact to happen, we need to stop cruelty to animals and become a non-violent species.
Alien dashboard for humanity
19
alien-dashboard-for-humanity-15abe8d82bbf
2018-02-12
2018-02-12 13:28:45
https://medium.com/s/story/alien-dashboard-for-humanity-15abe8d82bbf
false
1,783
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Vishal Gupta
Founder & CEO— Diro Labs
dd5004c3f265
vishal144
63
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-09
2017-10-09 12:08:01
2017-10-09
2017-10-09 12:23:07
1
false
tr
2017-10-09
2017-10-09 12:23:07
2
15ae9b7e628
0.898113
1
0
0
When we look at the areas where Machine Learning is used, we come across many projects and ideas. The best method for learning a topic…
5
Student Guidance Project with Machine Learning When we look at the areas where Machine Learning is used, we come across many projects and ideas. I believe the best way to learn a topic is to use it in real life, so I picked a project for myself. Electrical and Electronics Engineering, the field I study, has many sub-branches. One of the topics where both my own friends and the students who start this department after me need the most help is which of these sub-branches to choose. Although I wanted to find a solution to this, I could not come up with a sensible approach — until I met Machine Learning. The idea that came to mind: a project that takes students' interests, course grades, and choices in their social lives into account and uses Machine Learning techniques to find the most suitable sub-branch of Electrical and Electronics Engineering, in order to help students with their field selection. Even though I was sure I would struggle a lot at first, I did not want to let go of this subject. For example, the first difficulty is that I do not know where to find the data needed to train the system. As a solution, I decided to use "dummy" data. The project is still at the idea stage, but I will share the details with you as they become clear. If you liked this article, or just want to say "hello", you can reach me through the links below. Thanks. Twitter - Mail
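To make the idea above a bit more concrete, here is a minimal Python sketch of the kind of prototype the post describes: a classifier trained on dummy data that suggests a sub-branch. Everything here (the feature names, the sub-branch labels, and the choice of a decision tree) is my own illustrative assumption, not the author's design.

```python
# A minimal sketch of the post's idea: train a classifier on dummy data to
# suggest an EE sub-branch. Feature names, labels, and the decision tree are
# illustrative assumptions only.
import random
from sklearn.tree import DecisionTreeClassifier

SUBFIELDS = ["power", "electronics", "telecom", "control"]

def dummy_student():
    # [circuits grade, signals grade, programming interest, hardware interest]
    return [random.uniform(0, 4), random.uniform(0, 4),
            random.random(), random.random()]

X = [dummy_student() for _ in range(200)]
y = [random.choice(SUBFIELDS) for _ in X]        # placeholder labels

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[3.5, 2.0, 0.9, 0.2]]))     # suggestion for one student
```

With real survey and grade data, the same skeleton would only change in how X and y are built.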
Student Guidance Project with Machine Learning
1
machine-learning-ile-öğrenci-yönlendirme-projesi-15ae9b7e628
2018-06-29
2018-06-29 13:54:49
https://medium.com/s/story/machine-learning-ile-öğrenci-yönlendirme-projesi-15ae9b7e628
false
185
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Onur Genes
CEO of everything
325b32bfdf2e
onurgenes
77
94
20,181,104
null
null
null
null
null
null
0
null
0
855952a2eb7d
2018-08-08
2018-08-08 05:56:03
2018-08-17
2018-08-17 14:31:00
8
false
zh-Hant
2018-08-17
2018-08-17 14:31:00
2
15b0c45d3cae
1.431447
0
0
0
Structural Change and Anomaly Detection
4
R Self-Study Diary (16): Structural Change and Anomaly Detection “A man taking notes on a piece of paper while seated at a desk with an iMac” by andrew welch on Unsplash Preface Our data often goes through anomalies or structural changes. To take a recent example, Yageo's share price shot up last year when passive components rallied, and in recent weeks it has corrected downward as the market lost confidence in the passive-component outlook. If we forecast the series with a time-series model, the model might be reasonably accurate before 2017, when volatility was low and the trend drifted only slightly upward; the same model would do far worse on the second half of 2017, because the sudden jump means the mean and variance of the series went through unusually large swings. We usually call such points outliers, and today we discuss how to test for them and how to adjust the model structure accordingly. Testing for outliers If we classify outliers versus structural changes intuitively, the distinction is temporary versus permanent. A price gap after an earnings call or an ex-dividend date, say, is an outlier; whether it comes with a structural change has to be tested separately, since the price may keep climbing (investors accept the new level) or correct back down. The simplest way to detect outliers is a t-test on the mean and the variance. We can specify the probability distribution for the test, such as 'Normal', 'Gamma' or 'Poisson'; here we simply use the default Normal for both. The penalty parameter defaults to AMOC, meaning "at most one change", i.e. a single changepoint; if you want multiple outlier points you have to scan the data recursively, keeping each segment from getting too small in order to avoid overfitting. In practice we usually pick the time windows together with visual inspection; a single changepoint is enough here. The code is as follows: the red dashed lines mark where the changes in mean and variance were detected, and these serve as our outlier test points. The methods are described in Chen, J. and Gupta, A. K. (2000), Change in Normal mean and variance; I will not repeat the details, and interested readers can consult it for better ways to tune the parameters. Finding breakpoints We can find change points from a regression model. The idea is very simple: after building a generic regression model, we first assume there are m breakpoints at which the coefficients change, splitting the model into m + 1 segments. Source: R Documentation, breakpoints function By minimizing the residual sum of squares (RSS) of the expression above, we obtain the optimal breakpoints. Clearly this is not a job that can be done by hand; m has to be searched widely enough to estimate the breakpoints properly, so we let the program iterate. Here we choose the parameter h = 0.01, meaning 1% of the sample is used per trial segment, and we evaluate with the RSS and the BIC information criterion: from the plots, in terms of RSS we can choose just a single breakpoint; the idea is to look for the elbow point, which is quite intuitive. In summary(globtemp_brk) we can find the corresponding date, at observation 157 of the series, so we now know where the structural change starts. Testing for structural change: the Chow test Structural change in a time-series model can take many forms, including changes in the mean, the variance and the coefficients. Here we illustrate with an AR(1) model. Suppose the series experiences a one-off change; then, clearly, when Dt(τ) equals one the series undergoes a structural change, where τ is called the changepoint, and testing for structural change in effect means testing γ0 = γ1 = 0. We can use the Chow test here: split the AR(p) model into two, an unrestricted model with the change terms and a restricted model without them, and test with the F distribution and the Wald statistic (a Python companion sketch of this computation appears at the end of this post). Note that we assume τ is known; in general we can obtain this changepoint from the outlier-detection step, and if it is unknown we can search over τ, τ+1, …, τ+k for the maximum of the Chow statistic. We already know from the breakpoint step that the change occurs at observation 157, so we plug it in directly and test it in code: the code is a little more involved, so let's walk through it piece by piece. First, the idea is to build a one-lag regression (strictly speaking we should difference the series first, but I won't replay every preprocessing step here; readers interested in the preprocessing can refer to part 14 on building and testing ARIMA models). Since we know 157 is the changepoint, we split the regression into three parts: the full-sample regression, and the two sub-sample regressions on either side of observation 157. Next we compute the sums of squared errors, the F critical value and the Chow statistic. In this example the Chow statistic is far larger than the F critical value, so we should reject the null hypothesis of no structural change: a structural change exists. Closing I planned roughly 25 posts for this series, perhaps with a few extra notes on self-study experiences and resources, so without noticing we are already past the halfway mark, and more than a month has gone by; I hope to finish all the posts before the semester starts. The next post covers the vector autoregression (VAR) model, which is currently a very practical and powerful model for financial forecasting. If you like my article, please don't hesitate to hit Clap below!
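As flagged in the Chow-test section, here is a companion sketch of the same arithmetic. The post's own code is in R (the changepoint and strucchange packages); this Python version is only a rough illustration of the computation, with the break index treated as known and a simulated series standing in for the real data.

```python
import numpy as np
from scipy import stats

def chow_statistic(y, tau, k=1):
    """Chow F-statistic for a structural break at row tau of the lagged
    regression y_t ~ [1, y_{t-1}, ..., y_{t-k}] (tau counts rows of that
    regression, not of the raw series)."""
    X = np.column_stack([np.ones(len(y) - k)] +
                        [y[k - i - 1:len(y) - i - 1] for i in range(k)])
    Y = y[k:]

    def rss(Xs, Ys):
        beta, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)
        resid = Ys - Xs @ beta
        return resid @ resid

    p = X.shape[1]                               # coefficients per regime
    s_pooled = rss(X, Y)                         # restricted: no break
    s_split = rss(X[:tau], Y[:tau]) + rss(X[tau:], Y[tau:])
    chow = ((s_pooled - s_split) / p) / (s_split / (len(Y) - 2 * p))
    crit = stats.f.ppf(0.95, p, len(Y) - 2 * p)  # 5% critical value
    return chow, crit                            # reject "no break" if chow > crit

# Simulated series whose drift and volatility change after observation 157
y = np.cumsum(np.r_[np.random.normal(0.0, 1.0, 157),
                    np.random.normal(0.5, 2.0, 100)])
print(chow_statistic(y, tau=157))
```

The same assumed-known τ caveat from the post applies: in practice you would take τ from the breakpoint step first.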
R Self-Study Diary (16): Structural Change and Anomaly Detection
0
r語言自學日記-16-結構性變動與異常值檢驗-15b0c45d3cae
2018-08-17
2018-08-17 14:31:00
https://medium.com/s/story/r語言自學日記-16-結構性變動與異常值檢驗-15b0c45d3cae
false
79
About a self-taught diary on R Language programming and practical Time Series Analysis, made by a python user and BBA student. Hope you like it:)
null
null
null
R 語言自學系列
poiuy8568@gmail.com
r-語言自學系列
DATA SCIENCE,TIMESERIES,R LANGUAGE,SELF TAUGHT,DATA ANALYSIS
null
Data Science
data-science
Data Science
33,617
Edward Tung
Senior BBA Student, Data Science, Consulting Intern
b8b9ba7ac6eb
poiuy8568
69
21
20,181,104
null
null
null
null
null
null
0
null
0
b230ea2a6eb8
2017-11-12
2017-11-12 11:07:01
2017-11-12
2017-11-12 11:41:14
11
false
en
2017-11-13
2017-11-13 16:03:46
48
15b15420df21
15.779245
111
2
2
This is the SECOND in a series of posts on applying Tim Ferriss’ accelerated learning framework to Data Science. My goal is to become a…
5
Deconstructing Data Science: Breaking The Complex Craft Into Its Simplest Parts This is the SECOND in a series of posts on applying Tim Ferriss’ accelerated learning framework to Data Science. My goal is to become a world-class (top 5%) Data Scientist in < 6 months, while open-sourcing everything I find and learn along the way. The purpose of this post is to empower others to start accelerating their own learning by: deconstructing the complex craft of Data Science into its simple micro-skills, and identifying the 20% of skills that contribute to 80% of outcomes. And if you stick around until the end, you’re in for a special treat. Estimated reading time: 15 min (to save you hours of spinning in circles ;) The Problem A simple Google search of “how to learn Data Science” returns thousands of learning plans, degree programs, tutorials, and bootcamps. It’s never been more difficult for a beginner to find signal in the noise. Everyone seems to have a different opinion, and the only common approach appears to be dumping a long list of courses to take and books to read, all the while providing little to no context into how these concepts fit into the bigger picture. This post is my attempt to convert all the buzzwords & fluffy terminology into explicitly-learnable skills. To do this, I’ll be walking through my application of the first two steps of Tim Ferriss’ accelerated learning framework: Deconstruction & Selection. Rather than jump right into a roadmap of my own learning journey (that’ll be next post), I want to empower you to begin your own. And if you haven’t read my first post, I’d highly recommend starting there: www.ajgoldstein.com/learning-without-limits/ Deconstruction: The Data Science Process “The whole is greater than the sum of its parts.” — Aristotle I’ll be walking this infographic step-by-step below It’s true: Data Science is not a single discipline, but a craft at the intersection of many. So in order to appreciate how the seemingly disparate puzzle pieces fit together, I present to you a story. It’s called “The Data Science Process”, and it has six parts: Frame the problem: who are you helping? what do they need? Collect raw data: what data is available? which parts are useful? Process the data: what do the variables actually mean? what cleaning is required? Explore the data: what patterns exist? are they significant? Perform in-depth analysis: how can the past inform the future? to what degree? Communicate results: why do the numbers matter? what should be done differently? But before we begin, a couple quick caveats: 1) In large organizations, “The Data Science Process” is often carried out by an entire team, not a single individual. An individual can specialize in any one of the six steps, but for simplicity, we’ll be assuming a one-person team. 2) The insights that follow are a compilation of various expert interpretations, not my original ideas. I am not (yet) an expert Data Scientist, but over the past 6 weeks I’ve learned from many. Thus, I’m simply serving as the filter between hundreds of hours of research and the actionable insights you’ll find below.
In particular, I’ll be pulling from favorite online articles (linked throughout) and conversations with the following 10 experts: Chris Brooks — Director of Learning Analytics at the University of Michigan Andrew Cassidy — Freelance Data Scientist & Online Educator Jim Guszcza — US Chief Data Scientist at Deloitte Consulting Kirk Borne — Principal Data Scientist at Booz Allen Hamilton Michael Moliterno — Data Scientist + Design Lead at IDEO Chris Teplovs — Research Investigator at the University of Michigan Jonathan Stroud — Co-Founder of the Michigan Data Science Team (MDST) Josh Gardner — Data Science Research Associate, Team Leader on MDST Jared Webb — PhD Candidate in Applied Math, Data Manager at MDST Alex Chojnacki — Data Application Manager for Flint-Water-Crisis project And to bring each step of the process to life, I’ll be using my work at Calm.com, Inc. in San Francisco this summer as a real-world case study. While there, I leveraged analytics insights from Calm’s database of 11 million users to develop & launch Calm College — the first US platform geared toward using mindfulness to improve college student mental health. Alright, let’s get started! Step One: Frame The Problem The first step of The Data Science process involves asking a lot of questions. The exact manner in which you do this will depend on the context in which you’re working, but whether you’re in the private sector, public sector, or academia, the key idea is the same: before you can start to solve a problem, you have to deeply understand it. Your goal here is to get into the clients’ head to understand their view of the problem and desired solution. In the case of a corporation, this will first involve speaking with managers & supervisors to identify the business priorities and strategy decisions that’ll influence your work. It’s not uncommon for the first request that a Data Scientist receives to be entirely ambiguous (e.g. “we want to increase sales”). But it’ll be your job to translate the task into a concrete, well-defined data problem (e.g. “predict conversion rate & return-on-investment across customer segments.”) This is where domain knowledge and product intuition is crucial. Speaking with subject-matter-experts to cut through confusing acronyms & dense terminology can be incredibly helpful here. And familiarizing yourself with the product/service will be essential to understanding the intuition behind metrics. For example… With Calm College, the ambiguous request we started with was to establish partnerships with universities to offer the Calm app as a student wellness resource. To better understand our specific domain, we started by spending two weeks speaking on the phone with as many college administrators as possible. We asked questions like: How would you describe the mental health climate on your campus? How high of a priority is improving student mental health? What main resources do you currently offer students? What have been the greatest challenges? Is there precedence for offering 3rd party services? By the time we got to the final question, nearly every administrator had described their campus’ mental health climate as nothing short of “toxic”, and expressed improving it as their #1 priority. They explained that the greatest challenge to students seeking help has been overcoming logistical issues (e.g. wait-time, transportation, & money) with the counseling services they currently offer.
Finally, here’s where our ambiguous request became a data problem… Administrators told us that, before a 3rd party service can be adopted, precedence requires evidence supporting its use. In other words, showing that students on campus are already using the Calm app would be crucial to getting a deal done. Step Two: Collect Raw Data The second step of the Data Science Process is typically the most straightforward: collect raw data. This is where your first technical skill — querying structured databases with SQL — comes into play. But fret not; it’s not as complicated as it may sound. Here’s an awesome tutorial by Mode Analytics that’ll get you started with SQL in just a couple hours. More important than the querying itself, however, is your ability to identify all the relevant data sources available to you (e.g. web, internal/external databases) and extract that data into a usable format (e.g. .csv, .json, .xml). Oftentimes, an analysis requires more than one dataset, so you’ll likely need to speak with backend engineers in your organization who are more familiar with what data is being collected and where it currently resides. Communication is key. For example… With Calm College, this required me sitting down with Calm’s lead engineer and exploring ways to pull usage data for specific college campuses. Ultimately, I found out that we could simply query user activity by email address and school location. So for the University of Michigan, for example, I simply searched the database for emails ending in “umich.edu” or locations listed as “Ann Arbor, MI”. This approach wasn’t foolproof (turns out not all students were using their school email) but it did the job by giving us a representative sample of ~1000 users per college to compare different campuses’ activity head-to-head. Step Three: Process The Data The third step of the Data Science Process is the most underrated: process the data. This is where a scripting language like Python or R comes into play, and a data wrangling tool like Python’s Pandas is absolutely indispensable. To get started, here’s a breakdown of Python vs. R, intro to Python on Codecademy, 10-minute tutorial to Pandas, and colorful data wrangling cheat-sheet. Data cleaning is typically the most time-intensive part of data wrangling. In fact, in expert surveys it’s been estimated that up to 80% of a Data Scientist’s time is spent here: cleaning & preparing the data for analysis (more on this below). The reason this can be so time-consuming is because — before you can analyze data — you have to go column-by-column, developing an understanding for the meaning of every variable and then checking for bad values accordingly. The tricky part is that a bad value can be defined as many things: input errors, missing values, corrupt records, etc. And once you’ve identified a “bad value”, you have to decide whether it’s most appropriate (given the situation) to throw it away or replace it. For example… With Calm College, I faced two significant roadblocks here: (1) there was little to no company documentation on database variables, and (2) I didn’t know Python’s Pandas and felt too intimidated to try and learn it. Each of these presented its own challenge: it took me several days to figure out how to define an “active user” (should ‘active’ mean opening the app, starting a session, or completing a session?), and I had to use an analytics tool called Amplitude rather than coding in a script file.
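Before seeing how those two roadblocks were resolved, here is a minimal sketch of what the Step Two/Step Three query-and-clean workflow can look like in code. The table and column names are hypothetical, and the inline rows only stand in for a real database connection:

```python
import pandas as pd

# Step Two: pull the rows of interest (hypothetical schema).
query = """
SELECT user_id, email, last_active_at, sessions_started
FROM users
WHERE email LIKE '%umich.edu' OR location = 'Ann Arbor, MI';
"""
# df = pd.read_sql(query, connection)          # connection setup omitted
df = pd.DataFrame({                             # stand-in result set
    "user_id": [1, 2, 3],
    "last_active_at": pd.to_datetime(["2017-05-01", "2015-01-10", "2017-06-20"]),
    "sessions_started": [14, 0, 3],
})

# Step Three: keep only users active in the last 365 days, mirroring the
# cleaning decision described in the case study.
cutoff = pd.Timestamp("2017-07-01") - pd.Timedelta(days=365)
active = df[df["last_active_at"] >= cutoff]
print(len(active), "active users in the sample")
```

Now, back to the roadblocks.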
After talking with Calm’s Product Manager, I was able to define an active user as someone who “starts a meditation session” and identify the right variables. Then I had to clean the data by filtering out students who hadn’t been active in the last 365 days. The thought process here was that administrators (i.e. our client) would primarily be interested in student activity from the past academic year, and non-active students (i.e. “null” values) were outliers that, if included, would only skew the results. Noticing a theme here? It’s about your clients’ interests, not your own. Step Four: Explore The Data The fourth step of the Data Science Process is where you explore the data, and the real adventure begins. This is where the core competency of scientific computing (i.e. Python’s numpy, matplotlib, scipy, & pandas libraries) comes into play. To begin, here’s an awesome breakdown of the “SciPy ecosystem” (a collection of libraries in Python), extensive guide to data exploration, and a conceptual handbook of assumptions/principles/techniques. Using these libraries, you’ll split, segment, & plot the data, in search of patterns. Thus, the key is becoming really comfortable with producing quick & simple bar graphs, box plots, histograms, etc. that’ll let you catch trends early on. Remember that analysts who produce beautiful externally-facing visualizations often have to iterate through hundreds of internally-facing ones first. So playing around with possibilities in this way is more of a guess-and-check art than a hard-and-fast science. Finally, once you’ve identified some patterns, you’ll want to test them for statistical significance to determine which are worth including in a model. This is where a strong grounding in inferential statistics (e.g. hypothesis testing, confidence intervals) and experimental design (e.g. A/B tests, controlled trials) is essential. (A toy sketch of this explore-then-test loop appears at the end of this section.) For example… With Calm College, I started by exploring factors that would influence a potential partnership: monthly engagement, week-by-week retention, and subscription rate. My hypothesis going in was that elite schools known for student stress (e.g. Cornell, Harvard, MIT) would have significantly higher numbers across the three statistics. Or, in other words, I suspected that stressed-out kids need more calm. To test this, I began by segmenting universities into their regional groups and then splitting areas into specific college towns. From there, I was able to compare the statistical significance of schools’ activity across local, regional, and national averages. After several iterations of my experimental design (and hundreds of internally-facing visualizations), I found what I was looking for: a list of outlier schools that we would ultimately call “Calm’s Most Popular Colleges”. Step Five: In-Depth Analysis The fifth step of the Data Science process is where you create a model to explain or predict your findings. This is where most people lose the forest for the trees, as they enter the land of shiny algorithms and fancy mathematics. Creating models is by far the most over-glorified part of Data Science, which is why most degree programs solely focus on this single step. But before jumping into a particular solution, it’s important to pause and return to the bigger picture by asking yourself: “what am I really trying to do and why does it matter?”.
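As promised above, here is a toy version of that Step Four explore-then-test loop, with invented numbers standing in for the real per-user activity data:

```python
import numpy as np
from scipy import stats

# Compare one school's monthly engagement against a national baseline and
# check whether the difference is statistically significant. All numbers
# below are invented for illustration.
rng = np.random.default_rng(0)
harvard_sessions = rng.poisson(15, size=200)    # sessions/month, sampled users
national_sessions = rng.poisson(9, size=2000)

t_stat, p_value = stats.ttest_ind(harvard_sessions, national_sessions,
                                  equal_var=False)
if p_value < 0.05:
    print(f"significant outlier (t={t_stat:.1f}, p={p_value:.3g})")
```

Now, back to Step Five.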
From here, you’ll: (1) apply your knowledge of algorithms’ contextual pros/cons to choose the one approach best suited for the situation; (2) carry forward statistically significant variables (from the exploratory phase) using what Data Scientists call “feature engineering”; and (3) use a machine learning library like scikit-learn for implementation. The overall goal is to use training data to build a model that generalizes to new (unseen) test data. So while building, it’s important that you’re keenly aware of (and capable of recognizing) overfitting and underfitting. Here are some amazing free videos from Andrew Ng’s Machine Learning course and Harvard’s CS109 “Intro to Data Science” class that will teach you how to do this for different algorithm types. A great place to practice is through Kaggle tutorials. NOTE: I’d recommend starting by watching just one or two videos on a simple model type like logistic regression or decision trees, and then immediately applying what you’ve learned on a dataset you care about. For example… With Calm College, the model I was building was more “explanatory” than “predictive”. That is, I was simply trying to identify the universities most suitable for a partnership and understand what factors about a school were contributing to that. So what I ultimately built was a simple linear regression model (in Excel, no less) that used features like active user count, student enrollment, & university endowment to explain a university’s user activity over time. Sure, building a predictive model would’ve been the “cool” thing to do, but the goal wasn’t to predict sales leads for the future; it was to establish partnerships with universities NOW. Lesson learned: the job of a Data Scientist is NOT to build a fancy model; it’s to do whatever it takes to solve a real-world human problem. Step Six: Communicate Results The sixth step of the Data Science Process is where you bring it all together and communicate results. This is where you practice the most underrated skill in the Data Science toolbox; the X-factor that separates the good Data Scientists from the great ones: data storytelling. Speaking with experts, I heard it time and time again: your worth as a Data Scientist will be ultimately determined by your ability to convert insights into a clear and actionable story. In other words, the ability to create and present simple, effective data visualizations to a non-technical audience is the most sought after skill in business today. For a perfect example of how to do it right, here’s the most well-put-together data story I’ve ever seen on “Wealth Inequality in America”. And here’s a lecture by Harvard’s CS109 that’s a brilliant encapsulation of the art of data storytelling. The professor covers everything from understanding your audience to providing memorable examples. If you don’t have time to watch the lecture, you can check out my Evernote notes that sum it all up. Finally, to create beautiful data visualizations, I’d recommend going beyond Python’s basic matplotlib library and checking out seaborn (statistical) and bokeh (interactive). For example… With Calm College, we had to weave our findings on student activity into an actionable story for campus administrators. First, I used our list of “Calm’s Most Popular Colleges” to generate sales leads, by reaching out to 50 schools that the model identified as most suitable for a partnership. Then, for each of the 50 schools, I crafted a personalized story about their students’ activity on the Calm app.
For example, with Harvard, we reached out to the head of campus wellness to let her know that Harvard’s campus was a top 5 most popular college for the Calm app. Then we included 4 graphs depicting the following insights: 6% of the Cambridge, Massachusetts population (17,000+ people) are Calm users. More than 82% of Harvard users are active on a monthly basis, with an average of 15 (fifteen!) sessions/month! Week-by-week retention amongst Harvard users is 3x that of the average Calm user. Yet, despite all of this, Harvard students’ subscription rate is still well below average. The first 3 graphs told a story of extraordinary interest in the Calm app on Harvard’s campus. But what really drove home our program was the last point: “despite all this amazing interest, it’s clear that your students cannot afford Calm’s $60/year subscription. That’s why you need Calm College: to make the Calm app a FREE wellness resource for your students.” Rather than sell our product, we were selling their students’ past and present use of our product. And it worked like a charm. Repeating this approach for other colleges, we were able to successfully get our foot in the door at many of the most elite institutions in the country. And eventually, thanks to this application of The Data Science Process, we were able to launch the program at 8 schools this Fall: the 8 schools Calm College launched at this Fall Selection: The Core 20% “You are not flailing through a rainforest of information with a machete; you are a sniper with a single bull’s-eye in the cross-hairs.” — Tim Ferriss, The Four Hour Chef The greatest mistake you can make in accelerated learning is trying to master everything. This is not Pokémon. You are not going to catch ’em all. Instead, the key is being relentlessly focused with the micro-skills you choose to develop. Through rigorous application of the 80/20 rule, it’s possible to cut down a long list of possibilities to the highest frequency material. Then, once you’ve cleared your plate, it’s depth over breadth all the way. In his book, the “Four Hour Chef”, Tim Ferriss discusses this selection process by introducing the idea of a “Minimum Effective Dose” (MED). Simply put, an MED is the smallest dose that will produce a desired outcome. Here, I’ve broken down the MED for all 6 steps of The Data Science Process: the 20% of Data Science skills that result in 80% of outcomes In conversations with experts, these 8 skills consistently came up as the most essential. In particular, Data Wrangling (i.e. Python’s Pandas) was said to be the #1 skill (in terms of time spent doing) by every Data Scientist I spoke with. Data cleaning is not sexy, but it encapsulates up to 80% of the job. You may be wondering where big data tools like Hadoop & Spark, or modeling techniques like neural networks & deep learning fall into all this. The answer: surely outside the core 20%. To my surprise, many Data Scientists I spoke with emphasized that only a small percentage of companies have data that even requires something as complex as a neural network! Instead, an overwhelming majority of employers need simpler services like data cleaning, exploratory analysis, and logistic regression models (as recently reflected in an industry-wide survey by Kaggle). When choosing what to learn, remember: you can always revisit the heavier topics later, but don’t weigh yourself down at the start. The goal is to accelerate learning. So wait until your house of expertise has a strong foundation before adding the shiny stuff.
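That survey point is easy to demonstrate: the “simpler services” most employers need really do fit in a few lines of scikit-learn. The synthetic dataset below stands in for real features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real features and labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Comparing train vs. test accuracy is the quickest overfitting check.
print("train:", model.score(X_train, y_train),
      "test:", model.score(X_test, y_test))
```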
If you’re looking to master the fundamentals of Data Science in 6 months or less, you’ll want to simply focus on the core 20%. Next Steps “Live as if you were to die tomorrow. Learn as if you were to live forever.” — Mahatma Gandhi I do not believe knowledge is useful for the sake of knowledge; only if you use what you’ve learned to improve your life, or the lives of others. So I would encourage you to pause, reflect, & ask yourself: “what’s the smallest possible action I can take right now with what I’ve learned?”. For instance, a great place to start would be picking one of the six steps you’re most interested in and exploring the skills/resources associated with it. Then find a dataset that’s of interest to you and start learning by doing through a mini-side-project. The key is trusting yourself by following the path that you’re instinctually most drawn to… because that’s where you’ll find the most short-term motivation & long-term fulfillment. Personally, after deconstructing data science and identifying the core 20%, I decided to enroll in Springboard’s Data Science Intensive online bootcamp (recently renamed to “Intermediate Data Science”). I chose this program because it was the only curriculum I could find that covered all 6 steps of the data science process while focusing in on all 8 skills of the core 20%. For more information on the program, I’d recommend checking out Raj Bandyopadhyay’s brilliant Quora answers (here and here) on the methodology behind Springboard’s approach to Data Science education. And here’s a discount code for $100 off any Springboard course. Whatever you choose to do with this information, the important thing is that you do something. Getting started is always the hardest part, so I challenge you to turn intention into action. Final Thoughts Over the past few weeks, the power of the internet has sure become apparent. In just the first 7 days, my first post — Learning Without Limits — had 3000+ views from 66 countries around the world. Never did I expect it to spread so far and wide, but I guess I have all of you to thank for that. So as long as you all continue to pay it forward, I’ll continue to be an open book. As promised, I’ve compiled and will continue to open-source all my favorite resources, insights, and findings via this new page: ajgoldstein.com/resources. All I ask of you is that you share this with people you think would benefit. That’s my call-to-action. Share. Why? Because we’re all in this together and true happiness comes from other people. Follow along the journey via the original blog posting: Deconstructing Data Science: Breaking The Complex Craft Into Its Simplest Parts This is the SECOND in a series of posts on applying Tim Ferriss’ accelerated learning framework to Data Science. My…ajgoldstein.com
Deconstructing Data Science: Breaking The Complex Craft Into Its Simplest Parts
630
deconstructing-data-science-breaking-the-complex-craft-into-its-simplest-parts-15b15420df21
2018-06-12
2018-06-12 07:45:58
https://medium.com/s/story/deconstructing-data-science-breaking-the-complex-craft-into-its-simplest-parts-15b15420df21
false
3,837
We publish stories, videos, and podcasts to make smart people smarter. Subscribe to our newsletter to get them! www.TheMission.co
null
TheMissionHQ
null
The Mission
Info@TheMission.co
the-mission
TECH,ENTREPRENEURSHIP,STARTUP,LIFE,LIFE LESSONS
TheMissionHQ
Data Science
data-science
Data Science
33,617
AJ Goldstein
Data Scientist. Podcast Host. World Traveler. Part-Time Philosopher.
ef0d3fb06317
ajgoldstein
289
71
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-13
2018-04-13 13:43:51
2018-04-13
2018-04-13 16:14:51
12
false
en
2018-08-29
2018-08-29 15:01:01
4
15b4030b5e87
10.014151
5
0
0
Have you heard of Deep Learning? What about Reinforcement Learning?
5
Reinforcement Learning introduction for managers / people in a hurry Have you heard of Deep Learning? What about Reinforcement Learning? Deep Learning is mainly about deep artificial neural networks. It makes it possible to perform complex computation tasks such as face recognition, language translation, etc. Reinforcement Learning is about how software agents ought to take actions in an environment, and learn from it. These two approaches are not mutually exclusive. In recent years, they have been combined to tackle difficult problems. It’s called Deep Reinforcement Learning (DRL). For example, Google used DRL to beat the top player of the strategy game Go. This article is not about how these approaches can be used together. It focuses on the fundamentals of Reinforcement Learning. Why is it a tutorial “for managers”? I just finished my Masters in Business Analytics in Montreal. I realize a lot of people have an “a priori” on what people do in business schools. And they are probably right. All jokes aside, business schools also offer the opportunity to learn about quite technical topics. One of the sub-disciplines of management is Operations Research. Operations research, or operational research, is a discipline that deals with the application of advanced analytical methods to help make better decisions. What does it have to do with Reinforcement Learning? Optimisation Have you read “The Goal” by Eliyahu M. Goldratt? Alex Rogo, the main character, manages a production plant where everything is always behind schedule and things are looking dire. Alex has three months to turn things around. He, fortunately, gets help from his old physics teacher, who makes him realize what the goal of any business is. The goal of a company is to make money. Keep in mind the story was written in the 80s and it’s about a factory plant. Nonetheless, the book is full of insights and shows it is easy to lose track and fall into a vicious spiral of inefficiency. You can, of course, disagree with Goldratt’s statement. Still, part of being a manager is to be concerned with making the right decisions. If money is not the goal of your company, what is it? Not fixing goals is like throwing darts at a wall without a target. How can you know you’re making the right decisions if you don’t know what you’re aiming at? To make better decisions, you need to (at least): 1. Be objective. Assess your performance. 2. Have a feedback loop. Search for quick and frequent feedback. If you agree with this, you agree that running a business is like solving an optimisation problem. Once Alex Rogo understood this, he started measuring the contribution of every part of his plant’s manufacturing process. He managed to reduce the impact of bottlenecks, by rescheduling the production process or by exploiting the inventory capacity. I don’t want to spoil the book for you, but you can imagine it ends on a positive note. This optimization framework is mandatory if you want to start modelling your business problems mathematically. In fact, the first step of any modelling exercise is specifying an objective function. If the objective function represents costs, you want to minimize it. If it represents the total rewards, you want to maximize it. There is still one variable we have not talked about. It has been an obsession of managers and humans in general. By optimizing his factory to increase its profits, Goldratt’s protagonist is implicitly managing time. Every day, he is making decisions that will affect his decisions in the future.
This is called a decision process. It also involves a dynamic system, a system dependent on time. We are now going to look at how to model and solve these dynamic optimization problems. Dynamic Programming So far, we have talked about modelling your business problems as optimization problems. Let’s get concrete. Let’s model a simple dynamic problem. The UniCo company wants to plan its production for the next three months. The demand for the product is 3 units per month. Production cost is $13 plus $2 per unit. Inventory cost is $1 per unit per month. Production capacity is 5 units per month, and inventory capacity is 4 units per month. Any excess of production goes directly to inventory. This problem is dynamic because the inventory level changes every month depending on how many units you decide to produce. Your decision this month affects your decisions for the upcoming months. For example, this is one possible sequence of the decision process. The following questions should help us model this problem. Can you break your problem into steps? One step is one month. What is the state of the problem at each step? The variable we have to track from step to step is the inventory level. If I know the current inventory level, I don’t need to know the past inventory levels. What decision can you make at each step? I can decide to produce a certain number of units each month. We also have to respect the product’s demand, the production and inventory level constraints. Can you measure the immediate reward/cost of these actions at each step? We have all the information needed to formulate the cost each month. It is dependent on how many units we decide to produce and how many units we keep in our inventory. What is the objective? The goal here is to minimize the cumulative cost for the next three months. Put all these variables into an equation, and you have got yourself a model. But instead of torturing you with the math, let’s look at a graph of all the possible states. The x-axis is time, and the y-axis is the inventory level Let’s do some programming! When I say programming, I don’t mean coding a sequence of computer instructions. I mean it in the sense of planning. Intuitively, you would plan forward in time, but we are going to do the opposite. If you plan forward in time, you don’t know the value of being in a state (e.g. what is the value of having an inventory of 2 at month 1?). You need to look ahead and work your way backward to the present state, to know if it’s a good state or a bad state. This is exactly what we are going to do, for every state, in a very efficient way. In the end, we will have the cumulative value of every state, also known as the value function. Once you have computed the value function going backward, making the right decisions is a piece of cake. Why does it work so well? Because of the principle of optimality. If you have the best trajectory from one step to the last, you can simply go one step backward and pick the best action leading to the next step. You now have the optimal trajectory from that new step to the last one. Repeat that procedure until you hit the initial step, et voila! You have the optimal trajectory. There is something you may have noticed in this production problem. If I am over an inventory level of 2, I need to lower it below 2. If I am under an inventory level of 3, I need to increase it to a level higher than 2. It is called a policy. In this case, it’s an optimal policy. You could come up with your own policy and evaluate it with the value function.
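Here is a compact backward-induction sketch of the UniCo problem. It assumes a few details the text leaves open: the $13 is a setup cost charged only in months with positive production, starting inventory is zero, and leftover inventory at the end of month 3 has no value.

```python
# Backward induction ("programming" in the planning sense) for UniCo.
DEMAND, HORIZON = 3, 3
PROD_CAP, INV_CAP = 5, 4

def month_cost(produce, inv_after):
    setup = 13 if produce > 0 else 0            # assumed setup cost
    return setup + 2 * produce + 1 * inv_after  # production + holding cost

value = {s: 0 for s in range(INV_CAP + 1)}      # cost-to-go after the horizon
policy = []

for month in reversed(range(HORIZON)):
    new_value, month_policy = {}, {}
    for s in range(INV_CAP + 1):                # each possible inventory state
        options = []
        for produce in range(PROD_CAP + 1):
            inv_after = s + produce - DEMAND
            if 0 <= inv_after <= INV_CAP:       # meet demand, respect capacity
                options.append((month_cost(produce, inv_after)
                                + value[inv_after], produce))
        new_value[s], month_policy[s] = min(options)
    value = new_value
    policy.insert(0, month_policy)

print("minimal total cost from empty inventory:", value[0])
print("best production choice per (month, inventory):", policy)
```

Reading month_policy for each state is exactly the "policy" described above: it tells you what to produce for any inventory level you might land in.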
Often, if not always, we search for an optimal policy more than for the best trajectory. The policy is there to guide you. It tells you how to behave in all situations: if something goes wrong and you fall into a surprising state, your policy keeps you on an optimal track. This problem was simple, the model was deterministic (no randomness involved) and the policy we found is also deterministic. Real problems are not so easy to model. Stochastic Processes Let’s complicate things. Imagine that you have to integrate the selling price per unit into your model. This price is volatile, driven by demand. You know the amount only at the beginning of each month when it’s time to make a decision. How do you plan with uncertainty? Would you change your policy? It depends on how the price varies. If you have enough data and confidence in it, you can build a distribution of possible prices. x-axis is the price, y-axis is the probability Still, how do you make decisions with probabilities? It all depends on how risk averse you are. In 1947, John von Neumann and Oskar Morgenstern developed a framework to help you decide in uncertain circumstances. It’s the expected utility theorem. It shows that, under certain axioms of rational behavior, you will behave as if you are maximizing the expected utility of potential outcomes. What does it mean for dynamic programming? In the deterministic case, we were taking the value of the best action in each iteration. In the stochastic case, we take the expected value of actions at a given state, and we plan accordingly. The decision process is now stochastic. More specifically, it’s a Markov Decision Process (MDP). This type of process has the Markov property, meaning that the present state is sufficient to predict the future. You thus do not need the whole history of events. In short, an MDP is applicable when the current decision affects future decisions, and the current state contains all the information we need from the past. MDP is a very generic and powerful model. Whether you believe we live in a deterministic or non-deterministic world, an MDP can apply. What do you think? Do you believe that behind any problem lies an MDP? Model Free There is just one small problem with this picture. To do planning with dynamic programming, we need a value function. Having a value function means having a model of the world. We have to know the dynamics of the world, and we have to frame it as an MDP. What happens when we do not have a model of the world? If you don’t know the dynamics of the environment, you are in a model-free situation, and you can’t do planning. Game Over? No, we assume an MDP exists. We experience the environment, value the costs or rewards of our actions, and learn from that experience. Welcome to Reinforcement Learning! The framework is not that different from what we have discussed so far. It builds on top of it. However, there are a couple of additional tricks to make it work. In a stochastic process, as explained above, the value function represents the expected value of actions given a state, for all states. In model-free settings, we don’t know the value of actions, but we can make decisions in the environment and evaluate the reward of being in a new state. For example, if I drive my car and I have an accident, it’s hard to estimate the cost of every individual action that led to it, but I can estimate the cost of the crash.
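Before going further, here is a toy illustration of that expected-value idea: when next month's price is random, decisions are ranked by their expectation over the price distribution. The distribution and payoff below are invented for the example.

```python
# Rank two production decisions by expected profit under a random unit price.
price_dist = [(8, 0.3), (10, 0.5), (14, 0.2)]   # (price, probability), invented

def expected(payoff):
    """Expectation of payoff(price) over the assumed price distribution."""
    return sum(prob * payoff(price) for price, prob in price_dist)

for produce in (3, 5):
    # Sell at most the monthly demand of 3; cost structure as in UniCo above.
    profit = expected(lambda price: price * min(produce, 3) - (13 + 2 * produce))
    print(f"produce {produce}: expected profit {profit:.2f}")
```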
Back to model-free learning. The first trick is to define a Q-value function: the expected value obtained given that I took a specific action while in a specific state. Every time I make a decision and get feedback, I can update my estimate of the Q-value function. If you know what a sample mean is (Statistics 101), and you know the trick to compute it incrementally, it is the same idea (a toy sketch of this update appears after the references below). If I believe I have a reasonable estimate of the Q-value function, I can do a little bit of planning, by acting “greedy” (taking the best action) over one step. After many iterations, I hope to get a good policy. This brings us to one of the biggest challenges of RL. Since I am never 100% certain of the true value function, I am not sure which state I should explore next. I could be missing good rewards. It’s the Exploration-Exploitation dilemma. https://www.probyto.com/articles/Exploration%20vs%20Exploitation%20trade-off%20for%20evolving%20Data%20Science You want to find a good balance between exploring new states and exploiting the states you think are good states. Your reward: Function Approximator I lied at the beginning. I will briefly talk about how Reinforcement Learning can be combined with Deep Learning. Sometimes, the world is so complex that it is over-optimistic to believe you will be able to estimate the value function and converge to an optimal policy just by learning from experience. You need to insert a bias in your view of the world. You take a good enough function approximator, a neural network for example, and use it as an oracle to evaluate policies. The challenge is to give this oracle an intuition of how performant a policy is in the environment. One solution is to decorrelate the data you have on following a policy, and feed it randomly to your neural network, while improving your policy at the same time. It feels like a moving target, but it works. This is the main idea behind Deep Q-Networks. There are many other ways to combine Deep Learning and Reinforcement Learning. One that I am particularly interested in is Model-based RL. The idea is to find a good enough MDP representing the world, and do planning afterwards. Welcome back dynamic programming :) To finish this article, I want to show you this diagram from David Silver’s lectures. It shows Reinforcement Learning at the center of many perspectives. Getting into Reinforcement Learning is like opening Pandora’s box. You will see it everywhere now. References - I took the little production problem from a Dynamic Optimization class I followed in 2016 at HEC Montreal given by Michèle BRETON. Also, I thank her for many great insights on this topic. - I watched all David Silver’s lectures on RL and took a couple of diagrams out of his slides. You can find the first lecture here.
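As flagged above, here is a toy sketch of that tabular Q-value update. The environment is left abstract, and the alpha, gamma and epsilon values are illustrative choices, not prescriptions:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning bookkeeping: Q[(state, action)] holds our running
# estimate of the expected value of taking `action` in `state`.
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)

def choose_action(state, actions):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # The incremental-mean trick from the text: nudge the old estimate a
    # small step (alpha) toward the newly observed target.
    target = reward + gamma * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```

The epsilon parameter is one simple way of trading off the exploration-exploitation dilemma described above.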
Reinforcement Learning introduction for managers / people in a hurry
18
reinforcement-learning-for-managers-15b4030b5e87
2018-08-29
2018-08-29 15:01:01
https://medium.com/s/story/reinforcement-learning-for-managers-15b4030b5e87
false
2,296
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Patrick Mesana
Software Engineer / Data Scientist
ebee5d00b24b
patrickmesana
65
82
20,181,104
null
null
null
null
null
null
0
null
0
120fbcaac315
2018-08-15
2018-08-15 01:22:58
2018-08-15
2018-08-15 12:34:49
1
false
en
2018-08-16
2018-08-16 22:33:25
3
15b434aeb2b6
5.509434
3,514
74
2
Building cores, restructuring and contrast, and multiplying inputs.
5
Claude Shannon: How a Genius Solves Problems It took Claude Shannon about a decade to fully formulate his seminal theory of information. He first flirted with the idea of establishing a common foundation for the many information technologies of his day (like the telephone, the radio, and the television) in graduate school. It wasn’t until 1948, however, that he published A Mathematical Theory of Communication. This wasn’t his only big contribution, though. As a student at MIT, at the humble age of 21, he published a thesis that many consider possibly the most important master’s thesis of the century. To the average person, this may not mean much. He’s not exactly a household name. But if it wasn’t for Shannon’s work, what we think of as the modern computer may not exist. His influence is enormous not just in computer science, but also in physics and engineering. The word genius is thrown around casually, but there are very few people who actually deserve the moniker like Claude Shannon. He thought differently, and he thought playfully. One of the subtle causes behind what manifested as such genius, however, was the way he attacked problems. He didn’t just formulate a question and then look for answers, but he was methodical in developing a process to help him see beyond what was in sight. His problems were different from many of the problems we are likely to deal with, but the template and its reasoning can be generalized to some degree, and when it is, it may just help us think sharper, too. All problems have a shape and a form. To solve them, we have to first understand them. Build a Core Before Filling the Details The importance of getting to an answer isn’t lost on any of us, but many of us do neglect how important it is to ask a question in such a way that an answer is actually available to us. We are quick to jump around from one detail to another, hoping that they eventually connect, rather than focusing our energy on developing an intuition for what it is we are working with. This is where Shannon did the opposite. In fact, as his biographers note in A Mind at Play, he did this to the point that some contemporary mathematicians thought that he wasn’t as rigorous as he could be in the steps he was taking to build a coherent picture. They, naturally, wanted the details. Shannon’s reasoning, however, was that it isn’t until you eliminate the inessential from the problem you are working on that you can see the core that will guide you to an answer. In fact, often, when you get to such a core, you may not even recognize the problem anymore, which illustrates how important it is to get the bigger picture right before you go chasing after the details. Otherwise, you start by pointing yourself in the wrong direction. Details are important and useful. Many details are actually disproportionately important and useful relative to their representation. But there are equally as many details that are useless. If you don’t find the core of a problem, you start off with all of the wrong details, which is then going to encourage you to add many more of the wrong kinds of details until you’re stuck. Starting by pruning away at what is unimportant is how you discipline yourself to see behind the fog created by the inessential. That’s when you’ll find the foundation you are looking for. Finding the true form of the problem is almost as important as the answer that comes after.
Harness Restructuring and Contrast In a speech given at Bell Labs in 1952 to his contemporaries, Shannon dived into how he primes his mind to think creatively when addressing things that are keeping him occupied. Beyond simplifying and looking for the core, he suggests something else — something that may not seem to make a difference on the surface but is crucial for thinking differently. Frequently, when we have spent a lot of time thinking about a problem, we create a tunnel vision that rigidly directs us along a singular path. Logical thinking starts at one point, makes reasoned connections, and if done well, it always leads to the same place every time. Creative thinking is a little different. It, too, makes connections, but these connections are less logical and more serendipitous, allowing for what we think of as new thinking patterns. One of Shannon’s go-to tricks was to restructure and contrast the problem in as many different ways as possible. This could mean exaggerating it, minimizing it, changing the words of how it is stated, reframing the angle from where it is looked at, and inverting it. The point of this exercise is simply to get a more holistic look at what is actually going on. It’s easy for our brain to get stuck in mental loops, and the best way to break these mental loops is to change the reference point. We are not changing our intuitive understanding of the problem or the core we have identified, just how it is expressed. We could, for example, ask: What is the best way to solve this? But we could also ask: What is the worst way to solve this? Each contains knowledge, and we should dissect both. Just as a problem has forms, it also has many shapes. Different shapes hold different truths. Multiply the Essence of Every Input While it’s important to focus on the quality of ideas, it’s perhaps just as important to think about the quantity. Not just concerning total numbers but also how you get to those numbers. To solve a problem, you have to have a good idea. In turn, to have a good idea, it’s often the case that you have to first go through many bad ones. Even so, however, throwing anything and everything at the wall isn’t the way to do that. There is more to it than that. During the Second World War, Shannon met Alan Turing, another computer science pioneer. While Turing was in the US, they had tea almost every day. Over the years, they continued to keep in touch, and both men respected the other’s thinking and enjoyed his company. When discussing what he thinks constitutes genius, Shannon used an analogy shared with him by Turing, from which he extrapolated a subtle observation. In his own words: “There are some people if you shoot one idea into the brain, you will get a half an idea out. There are other people who are beyond this point at which they produce two ideas for each idea sent in.” He humbly denied that he was in the latter category, instead putting people like Newton in there. But if we look beyond that, we can see what is at play. It’s not just about quantity. Every input has a particular essence at its core that communicates a truth that lies behind the surface. This truth is the foundation for many different solutions to many different problems. What Shannon is getting at, I suspect, is that generating good ideas is about getting good at multiplying the essence of every input. Bad ideas may be produced if you get the essence wrong, but the better you identify it, the more effectively you’ll be able to uncover insights. 
Doubling the output of your ideas is the first step, but capturing the essence is the difference. All You Need to Know Much of life — whether it’s in your work, or in your relationships, or as it relates to your well-being — comes down to identifying and attacking a problem so that you can move past it. Claude Shannon may have been a singular genius with a unique mind, but the process he used isn’t out of reach for any of us. His strength was in this process and its application. Good problem-solving is a product of both critical and creative thinking. The best way to combine them is to have some process in place that allows each to shine through. Thinking patterns shape our minds. The goal is to have the right thinking patterns doing so. The internet is noisy. I write at Design Luck. It’s a free high-quality newsletter with unique insights that will help you live a good life. It’s well-researched and easy-going. Join 40,000+ readers for exclusive access.
Claude Shannon: How a Real Genius Solves Problems
23,088
claude-shannon-how-a-real-genius-solves-problems-15b434aeb2b6
2018-08-16
2018-08-16 22:33:25
https://medium.com/s/story/claude-shannon-how-a-real-genius-solves-problems-15b434aeb2b6
false
1,407
Sharing our ideas and experiences.
null
null
null
Personal Growth
null
personal-growth
LIFE,LIFE LESSONS,CREATIVITY,PERSONAL GROWTH,PERSONAL DEVELOPMENT
null
Science
science
Science
49,946
Zat Rana
Playing at the intersection of science, art, and philosophy. Trying to be less wrong. www.designluck.com.
1b9c67617a3
ztrana
80,665
98
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-09-11
2018-09-11 12:33:16
2018-09-11
2018-09-11 12:38:43
13
false
en
2018-09-11
2018-09-11 12:38:43
21
15b47d1a39ef
7.407547
3
0
1
Big Data: A Revolution That Will Transform How We Live, Work, and Think
5
Top Big Data, Data Science Books you should read

Big Data: A Revolution That Will Transform How We Live, Work, and Think: “Whether it is used by the NSA to fight terrorism or by online retailers to predict customers’ buying patterns, big data is a revolution occurring around us, in the process of forever changing economics, science, culture, and the very way we think. But it also poses new threats, from the end of privacy as we know it to the prospect of being penalized for things we haven’t even done yet, based on big data’s ability to predict our future behavior. What we have already seen is just the tip of the iceberg. Big Data is the first major book about this earthshaking subject, with two leading experts explaining what big data is, how it will change our lives, and what we can do to protect ourselves from its hazards.”

Big Data: Techniques and Technologies in Geoinformatics: “Providing a perspective based on analysis of time, applications, and resources, this book familiarizes readers with geospatial applications that fall under the category of big data. It explores new trends in geospatial data collection, such as geo-crowdsourcing and advanced data collection technologies such as LiDAR point clouds. The book features a range of topics on big data techniques and technologies in geoinformatics including distributed computing, geospatial data analytics, social media, and volunteered geographic information. Big Data: Techniques and Technologies in Geoinformatics tackles these challenges head on, integrating coverage of techniques and technologies for storing, managing, and computing geospatial big data.”

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems: “In this practical and comprehensive guide, author Martin Kleppmann helps you navigate this diverse landscape by examining the pros and cons of various technologies for processing and storing data. Software keeps changing, but the fundamental principles remain the same. With this book, software engineers and architects will learn how to apply those ideas in practice, and how to make full use of data in modern applications. Peer under the hood of the systems you already use, and learn how to use and operate them more effectively.”

Big Data For Dummies: “Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you’ll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You’ll learn what it is, why it matters, and how to choose and implement solutions that work. Big Data For Dummies cuts through the confusion and helps you take charge of big data solutions for your organization.”

Big Data: Principles and best practices of scalable realtime data systems: “Big Data teaches you to build big data systems using an architecture designed specifically to capture and analyze web-scale data. This book presents the Lambda Architecture, a scalable, easy-to-understand approach that can be built and run by a small team. You’ll explore the theory of big data systems and how to implement them in practice. In addition to discovering a general framework for processing big data, you’ll learn specific technologies like Hadoop, Storm, and NoSQL databases. This book requires no previous exposure to large-scale data analysis or NoSQL tools. Familiarity with traditional databases is helpful.”

Data Strategy: How to Profit from a World of Big Data, Analytics and the Internet of Things: “Bernard Marr’s Data Strategy is a must-have guide to creating a robust data strategy. Explaining how to identify your strategic data needs, what methods to use to collect the data and, most importantly, how to translate your data into organizational insights for improved business decision-making and performance, this is essential reading for anyone aiming to leverage the value of their business data and gain competitive advantage. Packed with case studies and real-world examples, advice on how to build data competencies in an organization and crucial coverage of how to ensure your data doesn’t become a liability, Data Strategy will equip any organization with the tools and strategies it needs to profit from big data, analytics and the Internet of Things.”

Big Data: Using SMART Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance: “There is so much buzz around big data. We all need to know what it is and how it works — that much is obvious. But is a basic understanding of the theory enough to hold your own in strategy meetings? Probably. But what will set you apart from the rest is actually knowing how to USE big data to get solid, real-world business results — and putting that in place to improve performance. Big Data will give you a clear understanding, blueprint, and step-by-step approach to building your own big data strategy. This is a well-needed practical introduction to actually putting the topic into practice. Illustrated with numerous real-world examples from a cross section of companies and organisations, Big Data will take you through the five steps of the SMART model: Start with Strategy, Measure Metrics and Data, Apply Analytics, Report Results, Transform.”

Big Data in Practice: How 45 Successful Companies Used Big Data Analytics to Deliver Extraordinary Results: “From technology, media and retail, to sport teams, government agencies and financial institutions, learn the actual strategies and processes being used to learn about customers, improve manufacturing, spur innovation, improve safety and so much more. Organised for easy dip-in navigation, each chapter follows the same structure to give you the information you need quickly. For each company profiled, learn what data was used, what problem it solved and the processes put in place to make it practical, as well as the technical details, challenges and lessons learned from each unique scenario.”

The Big Book of Dashboards: Visualizing Your Data Using Real-World Business Scenarios: “Comprising dozens of examples that address different industries and departments (healthcare, transportation, finance, human resources, marketing, customer service, sports, etc.) and different platforms (print, desktop, tablet, smartphone, and conference room display) The Big Book of Dashboards is the only book that matches great dashboards with real-world business scenarios. By organizing the book based on these scenarios and offering practical and effective visualization examples, The Big Book of Dashboards will be the trusted resource that you open when you need to build an effective business dashboard.”

Learning Spark: Lightning-Fast Big Data Analysis: “Data in all domains is getting bigger. How can you work with it efficiently? Recently updated for Spark 1.3, this book introduces Apache Spark, the open source cluster computing system that makes data analytics fast to write and fast to run. With Spark, you can tackle big datasets quickly through simple APIs in Python, Java, and Scala. This edition includes new information on Spark SQL, Spark Streaming, setup, and Maven coordinates. Written by the developers of Spark, this book will have data scientists and engineers up and running in no time. You’ll learn how to express parallel jobs with just a few lines of code, and cover applications from simple batch jobs to stream processing and machine learning.”

Big Data at Work: Dispelling the Myths, Uncovering the Opportunities: “When the term “big data” first came on the scene, bestselling author Tom Davenport (Competing on Analytics, Analytics at Work) thought it was just another example of technology hype. But his research in the years that followed changed his mind. With dozens of company examples, including UPS, GE, Amazon, United Healthcare, Citigroup, and many others, this book will help you seize all opportunities — from improving decisions, products, and services to strengthening customer relationships. It will show you how to put big data to work in your own organization so that you too can harness the power of this ever-evolving new resource.”

Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data: “Data Science and Big Data Analytics is about harnessing the power of data for new insights. The book covers the breadth of activities and methods and tools that Data Scientists use. The content focuses on concepts, principles and practical applications that are applicable to any industry and technology environment, and the learning is supported and explained with examples that you can replicate using open-source software. Get started discovering, analyzing, visualizing, and presenting data in a meaningful way today!”

Big Data For Beginners: Understanding SMART Big Data, Data Mining & Data Analytics For improved Business Performance, Life Decisions & More (Data … Computer Programming, Growth Hacking, ITIL): “Big Data For Beginners! The Ultimate Beginners Crash Course To Understanding And Interpreting Big Data! Are You Ready To Learn How To Understand SMART Big Data, Data Mining & Data Analytics For improved Business Performance, Life Decisions & More? If So You’ve Come To The Right Place — Regardless Of How Little Experience You May Have! Here’s A Preview Of What Big Data For Beginners! Contains… A Conundrum Called ‘Big Data’; How To Understand Big Data Better; What Can Big Data Do For You?; Understanding The Analytics (And The Importance); The Obstacles And Importance Of The Big Data Situation We’re In; A Closer Look At Key Big Data Challenges; Generating Business Value through Data Mining; And Much, Much More!”
Top Big Data, Data Science Books you should read
3
top-big-data-data-science-books-you-should-read-15b47d1a39ef
2018-10-24
2018-10-24 08:33:34
https://medium.com/s/story/top-big-data-data-science-books-you-should-read-15b47d1a39ef
false
1,592
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Big Data
big-data
Big Data
24,602
Veeranjaneyulu Chettupalli
SEO Consultant | Analytics | Social Media Expert | Big Data enthusiast https://buff.ly/2uQc1ku
2dbcc0c939d4
veeran
66
286
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-19
2018-03-19 18:42:01
2018-03-20
2018-03-20 20:12:42
21
false
en
2018-04-12
2018-04-12 16:02:23
10
15b487adaaa3
10.996226
20
1
0
Machine intelligence has started to show its predominance in various devices used today. From “Speech to text” translation, language…
5
Introduction to Deep Learning

“Machine intelligence” has started to show its presence in many of the devices we use today. From speech-to-text systems and self-driving cars to language translation and game-playing computers that can beat human champions, deep learning and AI are everywhere. With the rapid growth in research and technology, it is easy to imagine that soon the magical world of Harry Potter, or the pages of a science-fiction novel, could come alive. So how do these intelligent machines come to exist, and how are they created? The answer points to many areas of technology: artificial intelligence, machine learning, computer vision, deep learning, robotics, information theory, and data science. Today, however, we are going to discuss the fundamentals of deep learning.

Applications of Deep Learning: Here are just a few examples of deep learning at work: 1. A self-driving vehicle slows down as it approaches a pedestrian crosswalk or a red traffic signal by processing its surroundings. 2. An ATM rejects a counterfeit bank note and may alert a nearby bank. 3. A smartphone app gives an instant translation of a foreign street sign, or converts images containing text into actual text. Deep learning is especially well-suited to identification applications such as face recognition, text translation, voice recognition, and advanced driver assistance systems, including lane classification and traffic sign recognition.

WHAT IS DEEP LEARNING? Loosely inspired by a model of the human brain, deep learning is training a neural network to learn and identify features in the training data so that it can identify similar features and give an appropriate output when new test data is fed into the same network. Let’s look at an example. Say we have images of dogs and cats and we want to apply deep learning to teach a machine to identify images of cats and dogs. This is an example of a binary classification problem, where the output can be 1 (meaning it is a cat picture) or 0 (meaning it is a dog picture). We first label the images in order to have training data for the network. Using this training data, the network can then start to understand each object’s specific features and associate them with the corresponding category. Each hidden layer in the network takes in data from the previous hidden layer, transforms it, and passes it on to the next layer. As we go deeper into the network, it starts learning more complex features and details from the training dataset. The number of layers in a neural network depends on the complexity of the problem the network is trying to solve, and layers can always be added or removed to help train the network better. Fig — 1.1: A neural network for binary classification of images. In reality, a deep learning neural network looks like this. Fig — 1.2: A fully connected neural network.

Components of a Neural Network: 1. Input [X -> {x1, x2, x3}] and output [Y -> {y1, y2, y3}] layers. 2. One or more hidden layers (the layers colored yellow). Layer l = 0 is the input layer and layer l = L is the output layer, as shown in Figure 1.2. The layers form a Markov chain. 3. Each layer contains several nodes, also called neurons or activation units. The number of activation units in layer l, for l = 0, . . . , L, is denoted by n_l. Here the number of neurons is n_0 = 3 in layer 0, n_1 = 5 in layer 1, n_2 = 4, and n_3 = 3. The last layer is L = 3,
so this is a three-layer neural network. 4. Every node in a hidden layer is connected to every node in the next layer, which gives the network its fully connected nature. 5. The number of nodes in the output layer depends on the problem the neural network is working on. For a binary classification problem, the output layer can contain a single node whose value of 1 or 0 indicates a cat or a dog, respectively. For classifying images of many kinds of objects, called multi-class classification, the output layer contains more than one node.

WHAT HAPPENS INSIDE THE NETWORK? The individual hidden layers of a neural network implement a mathematical function which enables the network to learn certain features from the input coming from the previous layer. Logistic regression is an example of a small neural network. Its sigmoid activation is an S-shaped curve that can take any real-valued number and map it to a value between 0 and 1, though never exactly at those limits: σ(z) = 1 / (1 + e^(−z)). Fig — 1.3: Graph of the logistic regression equation. Fig — 1.4: The logistic function equation (source: https://en.wikipedia.org/wiki/Logistic_regression).

Computations of a Neural Network: Neural networks are organized in terms of a forward pass, called forward propagation, and a backward pass, called backward propagation. Forward propagation helps us calculate the loss, while backward propagation helps us update the weights w and biases b by calculating derivatives. Fig — 1.5: Figure showing a forward propagation step and a backward propagation step in a one-layer NN.

Forward propagation: Each layer of a neural network has parameters: weights w and biases b. These parameters are initialized randomly, and they define the strength of the connections between the neurons of different layers. Forward propagation of activations from layer l − 1 to layer l is a mapping of the activations A^(l−1) → A^l, defined by a matrix multiplication and a summation as follows: Fig — 1.6: Z^l = W^l A^(l−1) + b^l, i.e., the output Z^l of layer l is generated by taking the activation from the previous layer, A^(l−1), multiplying it by the weight matrix W^l of layer l, and adding the bias b^l of that layer. The equation used above is derived from linear regression; we will understand this better with the example discussed below. Now that we have calculated the output of the forward propagation step, Z^l, a non-linear activation function such as ReLU, sigmoid, tanh, or Leaky ReLU is applied to it, which gives us the final activation output of layer l. Fig — 1.7: The activation output of layer l is A^l = g^l(Z^l), where g^l denotes the non-linear function (ReLU, sigmoid, Leaky ReLU, etc.) applied to the output Z of layer l. In the example discussed below for distinguishing cat and dog images, we use the sigmoid function (logistic regression) as the non-linearity applied to the output Z; the reasons for choosing it are explained below. The image below shows a neural network with one hidden layer and the equations used to represent the forward propagation step. Fig — 1.8: Figure showing forward propagation in a one-layer NN. In Fig — 1.8, there are three inputs x1, x2, x3, which can be described as an input vector X. The hidden layer contains 3 neurons and the output layer contains only one neuron. Each neuron of the hidden layer has its own weights {w1, w2, w3}, described by a matrix W (3*3), and bias {b1, b2, b3}, described by a vector b (3*1). The outputs Z and A are also vectors, containing the outputs corresponding to the input vector X.
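To make the forward pass concrete, here is a minimal NumPy sketch of the network in Fig — 1.8. The code is illustrative and not from the original article; the 0.01 initialization scale and the random toy inputs are assumptions.

```python
import numpy as np

def sigmoid(z):
    # Logistic function from Fig — 1.4: maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes follow Fig — 1.8: 3 inputs, a hidden layer of 3 neurons, 1 output
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((3, 3)), np.zeros((3, 1))
W2, b2 = 0.01 * rng.standard_normal((1, 3)), np.zeros((1, 1))

def forward(X):
    # Z^l = W^l A^(l-1) + b^l, then A^l = g^l(Z^l) with g = sigmoid
    Z1 = W1 @ X + b1
    A1 = sigmoid(Z1)
    Z2 = W2 @ A1 + b2
    A2 = sigmoid(Z2)  # predicted probabilities, y-hat
    return A2

X = rng.standard_normal((3, 5))  # 5 toy examples stacked as columns
print(forward(X).shape)          # (1, 5): one prediction per example
```

Stacking the m examples as columns of X is what lets a single matrix multiplication compute Z for the whole batch at once.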
Backward propagation: This method calculates the gradient of the loss function with respect to the neural network’s weights, mapping the derivatives from layer l back to layer l − 1 with respect to both activations and weights. A practical method for minimizing the loss function this way is called gradient descent. Fig — 1.9: The figure on the left shows a forward propagation step; the figure on the right shows backward propagation from the last layer back one layer.

Example: Let us understand forward and backward propagation with a binary classification problem. Our training set X is a set of images of dogs and cats with their corresponding labels Y (0/1). That is, our input is a collection of images {x_1, x_2, x_3, ..., x_n} and their corresponding labels {y_1, y_2, y_3, ..., y_n}. We want to predict the conditional probability, i.e., P(Y = 1 | X). Given a new image x_(n+1), we want to predict whether Y is 0 or 1.

Data Preparation: Each colored image of a cat/dog has three channels: red, green, and blue. Let’s say we have m training examples, each of dimension 64*64. Then each image can be organized as a column vector of dimension 64*64*3 = 12288, and all m training examples can be organized as a matrix X of dimension 12288*m. The pictorial representation of this vectorization is shown in the image below. Fig — 2.0: The top boxes show a 64*64 image with red, blue, and green channels. The image has been converted into a column vector of X. If we have m images, the matrix X will have dimension nx*m.

Neural Network Synthesis, Forward Propagation: Our neural network uses the logistic regression equation for the forward propagation step. The logistic regression equation applies the sigmoid function, denoted by the symbol σ (sigma). Fig — 2.1: Diagram showing a one-layer neural network with forward propagation and backward propagation. Fig — 2.1.1: The equation for a logistic regression problem, y-hat = σ(w^T x + b).

Why do we use a sigmoid non-linearity? We don’t want our model to predict probability values below 0 or above 1, because the model is trained with labels taking the values 0/1. A linear regression can generate very large and even negative values, which do not make sense as probabilities; the sigmoid function prevents this. In simpler language, we use the sigmoid function to normalize the output to lie between 0 and 1.

Last step of forward propagation: Cost Function: Ideally we would want our neural network to perform with 100% accuracy, meaning that for every new image fed to the network it would generate the correct result. However, this does not happen in real life, and we need to train our parameters w, b to improve the performance of the network. Here comes the need for two metrics: the loss function and the cost function. The loss function tells us how good the predicted output y-hat is when the true output is y; we always want the loss to be as small as possible. The formula for the loss function is as follows: Fig — 2.2: Loss function for the neural network, L(y-hat, y) = −( y log(y-hat) + (1 − y) log(1 − y-hat) ). The cost function is the loss measured over the entire training set: Fig — 2.3: Cost function for the neural network, J(w, b) = (1/m) Σ L(y-hat^(i), y^(i)), summed over the m training examples.
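As a quick illustration (not from the original article), the loss of Fig — 2.2 and the cost of Fig — 2.3 can be written in a few lines of NumPy; the labels and predicted probabilities below are made up.

```python
import numpy as np

def loss(y_hat, y):
    # Fig — 2.2: L(y-hat, y) = -( y*log(y-hat) + (1 - y)*log(1 - y-hat) )
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def cost(y_hat, y):
    # Fig — 2.3: the cost J is the average loss over all m training examples
    return np.mean(loss(y_hat, y))

y = np.array([1.0, 0.0, 1.0])      # ground-truth labels
y_hat = np.array([0.9, 0.2, 0.6])  # predicted probabilities
print(cost(y_hat, y))              # ~0.28
```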
The ultimate goal is to find parameters w, b which minimize the cost function J(w, b). Thus, during the training stage, after calculating the output from the last layer of the neural network, we calculate the cost J(w, b) to check how well our network has performed.

Backward Propagation: Updating parameters w, b. Steps showing back-propagation in a neural network for a single training example. Deriving the formulas for the derivatives in back-propagation in a neural network for a single training example. In the images referenced above, we can see how we move from the last layer L back to the first layer while updating the weights w and biases b. For other non-linear activation functions, such as ReLU or softmax, the derivative equations will be different.

What is this alpha parameter, the learning rate? The parameter α used for updating the weights and biases is called the learning rate, and in practice it is often decreased over time. Intuitively speaking, it is how quickly a network replaces old beliefs with new ones. When we train our network with a gradient descent algorithm, at each iteration we use back-propagation to calculate the derivative of the loss function with respect to each weight and bias and subtract it from that weight and bias. However, if this process is repeated with the raw derivatives, the weights change far too much after each iteration, making them “overcorrect” so that the loss actually increases/diverges. So in practice, we multiply each derivative by a small value, the “learning rate”, before subtracting it from the corresponding weight.

Summary Algorithm: Given an input example and its ground-truth label, we perform the following steps: 1. Forward propagation: propagate the activations through all layers from input to output, reaching a prediction. 2. Compute the loss function: compute the error between the prediction and the ground truth. 3. Back-propagation: use the chain rule to differentiate and calculate the gradients through the layers in the opposite direction, from the output to the input. Then update the weights and biases of each layer by the formula W^l := W^l − α dW^l and b^l := b^l − α db^l. Fig — 2.4: Updating the weights and biases of layer l; the value α is the learning rate.
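Putting these steps together for the one-layer (logistic regression) network, here is a minimal sketch of one gradient-descent training step. It is not from the original article; it uses the standard shortcut dZ = A − Y that follows from the sigmoid plus cross-entropy derivation, and the toy dataset and learning rate are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(W, b, X, Y, alpha):
    m = X.shape[1]
    # Forward pass: y-hat for the whole batch
    A = sigmoid(W @ X + b)
    # Backward pass: for sigmoid + cross-entropy, dZ = A - Y
    dZ = A - Y
    dW = (dZ @ X.T) / m
    db = np.sum(dZ) / m
    # Fig — 2.4 update: step against the gradient, scaled by the learning rate
    return W - alpha * dW, b - alpha * db

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 100))              # 4 features, 100 examples
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)  # toy labels
W, b = np.zeros((1, 4)), 0.0
for _ in range(500):
    W, b = train_step(W, b, X, Y, alpha=0.1)
```

Running train_step in a loop implements the summary algorithm above: forward pass, gradient computation, and the Fig — 2.4 update.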
Commonly asked questions about Neural Networks (thanks to Albert Zenon Fernandez):

What’s the difference between a hidden layer and the input/output layers? In deep learning terminology, the first layer is called the input layer because it contains the input data for training the neural network; the other layers are called hidden layers. The term “hidden” is used mainly because we don’t see what the values of the hidden layers should be in the training set. We can see what the inputs are, and we can also expect what the output should be, but the values in the hidden layers are not given in the training set; that explains the name. In our example, the single-node layer is called the output layer and is responsible for generating the predicted value y-hat.

Does every node perform its own calculations? Yes, every node performs its own calculation of the output Z and then applies a non-linear activation function to it. In our example we have used the sigmoid function. This can be understood with the image shown below. Fig — 2.5: Image showing the activation values calculated for each and every neuron.

Why are layers necessary? That’s a good question. The figure below shows a timeline of machine learning from least squares to AlphaZero. The timeline is composed of three parts, in which (i) neural network components were invented in the 1960s, (ii) combined in the 1980s, and (iii) applied at scale in the 2010s. Fig — 2.6: Timeline for the evolution of the perceptron. The perceptron was described in 1957 by Rosenblatt; that is how it all started.

Why is it important for the nodes to be fully connected? It is important for every node to learn the features of every input variable coming from the previous layer; this helps the network learn more complex features and make better predictions.

Why is it important to know both the loss and the cost? The loss is calculated over one training example. For example, if we train our neural network with only one image, then the loss and the cost will be the same. In real life we train our network with many images; that’s when the cost comes into the picture, because it helps us understand how the network is performing over the entire training set. The cost gives us one single value/score, which is easier to interpret than a matrix of losses for each individual example.

How many hidden layers should we include in our NN? Honestly, as this is a hyper-parameter, there is no fixed way to determine the number of hidden layers you should choose. The most common rule of thumb is to choose a number of hidden neurons between 1 and the number of input variables, and you can always use cross-validation to test your architecture. If the model has high variance, it is overfitting and you need to reduce the number of layers and neurons. The basic idea for getting the number of neurons right is to cross-validate the model with different configurations.

References: Deep Learning Specialization by Andrew Ng on Coursera. Professor Iddo Drori’s lecture notes from NYU. Wikipedia and Google Images for image resources. Deep Learning blogs. The Deep Learning book by Ian Goodfellow and Yoshua Bengio. Wikipedia for definitions and references. Hope you guys have now understood what deep learning really talks about. If you enjoyed reading this article, please do clap :)
What is Deep Learning?
85
what-is-deep-learning-15b487adaaa3
2018-05-26
2018-05-26 09:20:04
https://medium.com/s/story/what-is-deep-learning-15b487adaaa3
false
2,437
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Titash Mandal
I am a creative thinker who enjoys learning, building and solving challenges. I am really enthusiastic about Machine Learning, NN, and Data Science.
45be075b3b75
tm2761
27
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-15
2018-01-15 16:50:28
2018-01-15
2018-01-15 17:23:27
1
false
en
2018-01-15
2018-01-15 17:25:31
1
15b5fd9c87e2
0.920755
0
0
0
Our Goals and Responsibilities
5
A Brief Reaction to the Asilomar AI Principles

What steps must we take to ensure that humankind benefits from new technologies? The principles developed in conjunction with the 2017 Asilomar conference, namely the Asilomar AI Principles, are a set of 23 guidelines that, while specific to artificial intelligence, apply to the broader scope of innovations in technology as a whole. A key concern that the principles encompass is understanding the underlying logic behind intelligent systems. Already, deep neural network and reinforcement learning research has demonstrated the ability of systems to master difficult tasks, such as object recognition and localization (i.e., determining the location of an object in an image) and playing the classic game of Go. However, the nature of their mastery, that is, how the models are able to accomplish such feats, remains highly elusive. Deep learning models are still often regarded as “black boxes.” Currently, we can only crudely hypothesize, after the fact, why certain actions occurred. Such explanations are unsatisfactory where human life is at stake (e.g., healthcare, safety), the very areas in which advancements in AI could yield the most human benefit. In order for society to accept AI, we must unlock its inner workings.
A Brief Reaction to the Asilomar AI Principles
0
a-brief-reaction-to-the-asilomar-ai-principles-15b5fd9c87e2
2018-01-15
2018-01-15 17:25:32
https://medium.com/s/story/a-brief-reaction-to-the-asilomar-ai-principles-15b5fd9c87e2
false
191
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Matthew Feng
Student, interested in learning, technology, and society.
b89e2c5259a4
mattfeng
1
2
20,181,104
null
null
null
null
null
null
0
null
0
2053fe536adb
2018-02-24
2018-02-24 23:45:01
2018-02-25
2018-02-25 18:13:26
4
false
en
2018-02-25
2018-02-25 22:55:10
65
15b95d2a1494
16.839623
2
0
0
Aiza Kabeer¹ and Jessica W. Tsai¹,² |¹ The STEM Advocacy Institute, ² Boston Children’s Hospital
4
Working Towards Socially Responsible Algorithms: When Algorithms Become Tools of Injustice

Aiza Kabeer¹ and Jessica W. Tsai¹,² | ¹ The STEM Advocacy Institute, ² Boston Children’s Hospital

Abstract: From criminal justice to financial markets, algorithms are now embedded in the fabric of our society. While algorithms benefit humanity, they can also result in discrimination and reinforce injustice. This article begins by defining algorithms and several types of algorithmic bias. The ways in which algorithmic bias can affect society are described. Potential solutions are discussed, including further research, diversity initiatives, and education. Changes in education include changes to existing undergraduate computer science curricula as well as workshops or trainings in the tech industry. Beyond these solutions, policy measures will also be necessary for long-lasting change. Algorithmic bias is a serious danger to society. Algorithms are essential to the world as we know it, and we are responsible for the social implications of their use. If we do not prevent them from harming groups susceptible to discrimination, then we are at fault.

1. What are algorithms and what is algorithmic bias? An algorithm is code that defines how to do a task by giving a computer instructions. An algorithm takes some input, runs commands, and produces some output (1,2). For example, an algorithm might determine which advertisements are shown to a user on a website. More sophisticated algorithms can be used to automate a decision or make a prediction that a human would otherwise make. Deciding who is eligible for a loan, choosing the best route in Google Maps, or making a critical prediction in the justice system might use such algorithms. Algorithms are also used to create artificial intelligence (AI). AI is the science of making machines that perform, in an intelligent manner, tasks that usually only a human can do. There are different types of AI, but the focus is on creating intelligent technology (2). It can involve the use of algorithms and what is called machine learning. Machine learning is the ability of a system or a machine to learn and improve based on experience without being explicitly programmed. In other words, machine learning allows systems to behave like a human by drawing conclusions from knowledge and experience. When fed data, algorithms can learn and improve their output over repeated iterations (3). The data used to teach an algorithm is called training data. While this is an amazing technological advancement, the use of algorithms and AI is not foolproof, and many complications exist. Algorithmic bias is particularly concerning. This usually refers to cases where the use of an algorithm negatively impacts minority groups or low-income communities in a discriminatory manner [1]. However, algorithmic bias can actually be classified into five types, as described in a paper by Danks and London (5). [1] It is important to note that bias is not defined the same way in statistics, social science, and law. Algorithmic bias has broad social and political implications, and the exact meaning of what bias is may become blurred across these fields. When discussing this issue, we are usually referring to bias beyond statistical usages (4). As Danks and London’s classification makes clear, algorithmic bias can have both positive and negative effects. Algorithms are problematic when they result in biases that are unethical or discriminatory. Many of these cases involve training data bias, where the input data is already biased.
Datasets can reflect human biases, since data is sometimes labeled by hand. Datasets may also exclude certain populations or otherwise be non-representative (4). Algorithms are in active use in our world today, and many are reinforcing systemic injustice; some are unintentionally resulting in discriminatory practices in our society. We often assume that an algorithm always presents the best solution, but as the following examples illustrate, this is not always the case.

2. How can algorithmic bias affect society? The journalistic organization ProPublica recently produced investigative pieces on the effects of algorithmic bias. In one of these studies, algorithms in the criminal justice system were found to reinforce racial disparities. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a risk assessment system which uses algorithms to determine the likelihood of a defendant or convict committing a crime again in the future. In ProPublica’s study, the algorithm was almost twice as likely to mislabel a black defendant as a future risk as it was a white defendant (6). In another study, ProPublica focused on an algorithm that determines online prices for Princeton Review’s tutoring classes. The study showed that people living in higher-income areas were twice as likely as the general public to pay a higher price. However, individuals living in a zip code with a high-density Asian population were 1.8 times more likely to be charged a higher price, regardless of their incomes (7). Additionally, Google has been guilty of using systems that fall prey to algorithmic bias. Google Photos mislabeled an image of two black people as “gorillas.” In another incident, Google showed ads for high-paying jobs to men more often than to women. Both of these examples are the result of algorithmic bias (8,9). The legal doctrine of disparate impact makes cases of unintentional racial discrimination illegal, including those that involve algorithms. In June of 2015, a statistical analysis of housing patterns showed that Texans were essentially segregated by race as a result of a tax credit program. In Texas Dept. of Housing and Community Affairs v. Inclusive Communities Project, Inc., the Supreme Court used disparate impact theory to rule against housing discrimination. Disparate impact theory is an example of a policy that regulated an unfair algorithm; however, it is limited to issues of housing and employment. We do not have a legal process to regulate the use of algorithms in all applications (10). In general, we lack overarching solutions or processes to deal with algorithmic bias. Society is more focused on the future effects of AI than on algorithmic bias (see the New York Times discussion in ref. 11). Most of the action being taken revolves around discussion and research. Discussions are gaining momentum, but we have yet to see tangible solutions take form.

3. What is being done to address the issue? There are some advocacy and research groups working on the problem of algorithmic bias. For example, the organization Data & Society researches social and cultural problems arising from data-related technological advances, and covers topics relating to algorithmic bias (12). More recently, groups like the AI Now Institute are coming together specifically to understand the social implications of AI (13). Beyond these initiatives, there is a push to raise awareness of the issue.
General discussions of algorithmic bias include pursuing more research and increasing diversity in tech fields. Dealing with this problem requires us to pursue solutions along many paths, including research, diversity initiatives, policy changes, and new ideas. More work is needed in all these areas.

3.1 Research. Algorithms can recognize patterns and have the capability to learn from those patterns. As discussed, they take in data and follow instructions to produce meaningful output. From biology to financial lending, algorithms are commonly used in industry, academia, and the military. Algorithmic bias also spans many disciplines. Although algorithms are developed within fields like computer science and mathematics, they impact issues tied to social problems and law. The questions we must answer about algorithmic bias likewise involve many fields. Can we program algorithms so that they do not cause discrimination? How can we ensure the data we use to teach algorithms is not biased? What role does the law play in dealing with instances of algorithmic bias? These are only a few of the questions that can be tackled through research, and they clearly cannot be answered by computer scientists alone. Considering the complexity of the situation, it may be important to include social scientific and humanistic research practices in the development of algorithms, particularly in AI research (4). Computer science research itself certainly has a role to play. Despite the widespread use of algorithms, we do not understand how algorithms learn, and it is often unclear why an AI makes one decision over another. Understanding how algorithms function might help us handle algorithmic bias as they are developed. That said, research to clarify how algorithms work is a daunting task that will take time (14). The good news is that there are policy- and education-based solutions we can implement without immediate answers from research in computer science. We may not know that much about how algorithms work, but it is inaccurate to say that we must understand them before investing in regulatory mechanisms (e.g., policy solutions, regulatory bodies, ethical processes), as argued in ref. 15. While it is important to know why code results in certain decisions, we do not have unlimited time to wait for an answer. We need to work on solutions that can be implemented in the absence of conclusive results from computer science. Research related to policy, education, and other areas is vital in this respect. Data & Society and the AI Now Institute are among the groups that are beginning to ask important questions and raise awareness about problems related to social science, ethics, policy, and law. Organizations like PERVADE are working to put together ethical processes for big data and computational research (16). But there is more to be done, especially regarding algorithmic bias. We need widespread research that translates into tangible action before algorithmic bias hurts our society any further.

3.2 Diversity in Tech. Increasing diversity in tech fields is another strategy to combat algorithmic bias, because it might result in more conscious programmers (17). For example, the Algorithmic Justice League (AJL) was founded by a Black student because her face was not identified by facial recognition software (18). Diverse students, and eventually diverse computer scientists, could mitigate bias. The current landscape of computer science clearly lacks diversity.
The National Science Foundation (NSF) provides data with explicit breakdowns by degree received and the race and gender of degree recipients. As per 2011 data, of the total number of students receiving Bachelor’s degrees in computer science (U.S. citizens and permanent residents), only 10.6% were awarded to Black and African American students, and 8.5% to Hispanic or Latino students. For women of all races this percentage was 17.7%. According to a publication of the U.S. Census Bureau based on 2011 data, among computer occupations, approximately 7.3% of workers are Black or African American and 6.0% are Hispanic (Figure 1). In both cases, this is around half of their overall representation in the U.S. population (19). Across all races, 26.6% of workers are female (Figure 2). The broad group of computer occupations includes a variety of tech-related jobs. When looking at a further breakdown of this group, the percentages of Black/African American and Hispanic workers remain low across various job types, but there is more variation in the percentages of female workers (20). Regardless of this variation, it is clear that Black/African American and Hispanic/Latino populations are not fairly represented. Moreover, large tech companies like Google are being called on to employ more diverse programmers (21). Some well-known companies have ethics boards to prevent bias, but this is difficult to implement in smaller companies (22). Industry employees need a better understanding of the social implications of algorithms, so that individual engineers, programmers, and data scientists consider the impacts of their work. One way to introduce this awareness is by creating a diverse population in tech. Admittedly, increasing diversity in student populations and the workforce is a long-term solution. It will take time to get more minorities into computer science, and even longer to get them into industry. While current developments are heartening, the dilemma remains: algorithmic bias is operating in our society today, and we have yet to prevent it from contributing to structural injustice. Focusing on both research and building diversity is important, but we need a multi-pronged approach to addressing algorithmic bias.

4. Can an ethics-based education make a difference? Besides research and increasing diversity, what else can we do? Another idea is to modify computer science education. Undergraduate computer science programs require a course that teaches algorithms, whether it is Data Structures and Algorithms, Introduction to Algorithms, Algorithm Design, or something similar. Incorporating ethics into computer science curricula and highlighting algorithmic bias might have real benefits. Some institutions currently offer courses on ethics in computer science, but such courses are not always a requirement of the major. Including ethics as an integral part of computer science programs, and in continued education outside of academia, might have potent results. While ethics in computer science covers a broad range of topics, we need to ensure that discussions of technology’s implications for social inequality and discrimination also occur. This is key for an ethics-based education to ameliorate algorithmic bias.

4.1 Workshops. Workshops or short courses can educate students on the ethics of algorithms. Such a workshop would need to discuss the types of algorithmic bias and highlight the importance of training data.
At the very least, computer science majors ought to participate in dialogue on algorithmic bias, especially in courses relating to algorithms. It is also pertinent to consider whether students in related fields (statistics, mathematics, etc.) should be given exposure to these topics, since these students may also work with algorithms.

4.2 Courses. Beyond simply holding workshops or making minor changes to course curricula, creating requirements for a course specifically dealing with ethics in computer science, AI, and related fields could have powerful effects. Many colleges and universities offer electives in this area, but they are typically not required. The general idea of Computer Ethics (CE) has been explored in the past as a core component of computer science curricula. A 2002 paper suggested including a course on CE and five relevant knowledge units in each year’s curriculum; these units included History of Computing, Social Context of Computing, Intellectual Property, and Computer Crime (23). Admittedly, these units would have to be adjusted given social changes and technological progress since 2002, but they are a reasonable starting point. The Social Context of Computing would be most relevant to the issue of algorithmic bias. More recent literature calls for including discussions of data, ethics, and law in computer science curricula. Barocas et al. discuss a research agenda focused on dealing with injustice and algorithms. Beyond research, their paper includes a suggestion to “weave” conversations of ethics and law into data science curricula and highlights the importance of a national conversation on these topics (24). This holds true for all disciplines that use algorithms.

4.3 Ethics of algorithms incorporated into computer science education: examples. There are some programs that have ethics requirements; the computer science program at the University of Massachusetts Lowell is an example. However, an ethics-course requirement is not universally acknowledged as a core part of computer science curricula. And even where ethics requirements exist, the courses must discuss discrimination and algorithms to be relevant to algorithmic bias. As of 2018, a handful of universities are beginning to implement courses in ethics with the intent of educating their students on the potential consequences of emerging technologies (25). Harvard and MIT are currently offering a joint course dealing with ethics in AI, and the University of Texas at Austin is offering a course on ethics with plans to make it mandatory for all computer science majors (25, 26, 27). These developments are wonderful. However, we need courses dealing with the social implications of technology to be the norm in all computer science programs. While there are few examples where ethics workshops have been made mandatory, the University of Nevada asked incoming graduate students across a range of engineering disciplines to take an ethics workshop in 2015. The workshop covered four major topics: Research Ethics, Computer Coding Ethics, Publishing Ethics, and Intellectual Property (28). Only 7% of participants were from computer science; it would therefore be difficult to measure what impact the workshop had with regard to ethical and social issues like algorithmic bias. However, it demonstrates that ethics workshops in CS are certainly feasible.
In the absence of structured educational experiences that give students exposure to the social ramifications of algorithms, faculty members might take it upon themselves to expose students to these problems. For example, a professor at Dartmouth teaching a course on AI included references to articles about algorithmic bias and its challenges (29). It is a small gesture, but one that may nonetheless raise awareness of the discriminatory effects of AI.

4.4 Education in the ethics of algorithms beyond academia. Continuing this education beyond the boundaries of academia is imperative. Individuals in industry might not have had exposure to ethical issues. This is unfortunate, since cases of algorithmic bias also come from algorithms created and utilized by industry. It is essential that professionals believe this is a problem worth addressing. We can encourage companies to include trainings and workshops similar to those given to students. If the makers of algorithms are aware of potential pitfalls, they can try to find ways to correct them. For example, Google has recently implemented a crowdsourcing feature for Google Translate. It is a practical way for users to correct mistakes and biases the algorithm might make (30). Although user input will not always be enough for true accountability, it is a step in the right direction. Informed professionals may be able to find ways to circumvent bias, whether through using representative data or asking for user input. Workshops or job trainings in industry can provide this awareness. Besides the workshop at the University of Nevada, there are few records of such mandatory programs being implemented on a wider scale. In this sense, it is hard to predict or measure what impact such changes might have. Awareness is the first step to solving the problem, and an education in ethics can help create it. Future actions in this area include creating workshops and implementing curriculum changes.

5. Can policy changes make a difference? Research, increasing diversity, and changing education are still not enough to solve all problems related to algorithmic bias. How can we control the use of algorithms by those who have authority and power? Since algorithms are in active use today, we need policies to define bias and to hold programmers and companies accountable for their work. For example, many complex algorithms are known as black boxes: predictive systems that utilize machine learning and make crucial decisions, while the public cannot see the code used to build them. This has led to calls for code transparency, so that we can see how a system came to a certain conclusion. However, this may not always be feasible since, as discussed, we do not always understand how the code behind such complex algorithms works (14). While transparency may not always be enough, other policy solutions can allow us to enforce accountability. In the case of housing discrimination in Texas, the legal doctrine of disparate impact allowed the courts to hold those who made the algorithm accountable (10). We need legal and policy solutions like this that apply to all algorithms. New York City recently passed a bill that will result in the creation of a task force to monitor algorithms (31). This is an important and groundbreaking step for policy-based solutions that could be replicated by other cities. Recent research has also found that the use of copyright law could reduce the occurrence of bias.
A paper by Amanda Levendowski suggests that principles of traditional fair use align with the goal of mitigating bias (32). Even so, Levendowski still highlights the need for AI programmers, policymakers, and lawyers to define what is ethical for an algorithm to do. This ties back to the importance of discussing and teaching ethics.

6. Can we prevent algorithmic bias? Testing could be used to prevent algorithmic bias. If we run “pre-release” trials of complex algorithms and AI systems, we might determine whether the algorithms are causing bias or whether the training data might cause problems. Companies can also monitor the results of their algorithms after they are released and used in different contexts or communities (4). Another, more obvious, prevention technique is to prepare training data more carefully. This ties back to our discussion of education: providing education on ethics and algorithmic bias would create awareness of the problems associated with training data bias. If students and industry professionals know the effects of training data bias, we hope they will use good, representative training data. Understanding the potential ramifications of using “poor” datasets might motivate individuals to prepare training datasets more carefully. This could result in significant improvements in the short term.
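As a sketch of what such a “pre-release” trial might look like in practice, the pandas snippet below compares a model’s positive-prediction rates and false-positive rates across demographic groups. The column names ("group", "label", "pred") and toy values are hypothetical, not drawn from any real system.

```python
import pandas as pd

# Hypothetical audit data: ground-truth labels and model decisions per group
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 0, 1, 0],   # ground truth
    "pred":  [1, 1, 1, 0, 0, 1],   # model decisions
})

for name, g in df.groupby("group"):
    pos_rate = g["pred"].mean()                  # how often the model says "yes"
    fpr = g.loc[g["label"] == 0, "pred"].mean()  # false-positive rate
    print(f"{name}: positive rate={pos_rate:.2f}, FPR={fpr:.2f}")
```

A large gap between groups on either metric would flag the model for closer review before release, and the same groupby pattern can be used to check whether the training data itself represents each group fairly.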
7. Conclusion. Algorithms and artificial intelligence hold great potential; however, algorithmic bias is an alarming problem that reinforces injustice. Allowing discrimination to occur and waiting for a future solution to appear is insufficient. This is not just a problem of technology, but a moral quandary at the societal level. There are multiple fronts on which we can fight the unjust effects. We should expand work on long-term research projects across different disciplines simultaneously, while also pushing initiatives to increase diversity in tech-related fields. We might be able to improve education by teaching students the social implications of algorithms. Including discussions of the ethics of algorithms in computer science curricula and in industry has great potential; whether this occurs through new course requirements, changes to existing courses, or workshops, learning about different problems of ethics in computer science is only in keeping with the times. It is also crucial that we create policy measures to deal with algorithmic bias. These solutions are interdisciplinary and will involve more than just the people who create algorithms: we will need the involvement of policymakers, lawyers, academic institutions, and tech companies. This may seem like a formidable task, but we can begin by discussing the ethics of algorithms. If algorithms and AI are to be integral parts of our society, then it is our responsibility to make sure that we use them in a manner that is socially responsible.

References
1. Brogan J. What’s the Deal With Algorithms? 2017. Available from: http://www.slate.com/articles/technology/future_tense/2016/02/what_is_an_algorithm_an_explainer.html.
2. McCarthy J. What Is Artificial Intelligence? 2017. Available from: http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.
3. Lipton ZC. The Foundations of Algorithmic Bias. 2017. Available from: http://approximatelycorrect.com/2016/11/07/the-foundations-of-algorithmic-bias/.
4. Campolo A, Sanfilippo M, Whittaker M, Crawford K. AI Now 2017 Report. AI Now Institute; 2017.
5. Danks D, London AJ. Algorithmic Bias in Autonomous Systems. 26th International Joint Conference on Artificial Intelligence; 2017.
6. Angwin J, Larson J, Kirchner L, Mattu S. Machine Bias. ProPublica; 2016-05-23. Available from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
7. Angwin J, Larson J. The Tiger Mom Tax: Asians Are Nearly Twice as Likely to… ProPublica; 2015-09-01. Available from: https://www.propublica.org/article/asians-nearly-twice-as-likely-to-get-higher-price-from-princeton-review.
8. Spice B. Questioning the Fairness of Targeting Ads Online. Carnegie Mellon University News; 2015. Available from: http://www.cmu.edu/news/stories/archives/2015/july/online-ads-research.html.
9. Guynn J. Google Photos labeled black people ‘gorillas’. 2017. Available from: https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/.
10. Kirchner L. When Discrimination Is Baked Into Algorithms. 2017. Available from: http://www.theatlantic.com/business/archive/2015/09/discrimination-algorithms-disparate-impact/403969/.
11. Crawford K. Opinion | Artificial Intelligence’s White Guy Problem. 2016-06-25. Available from: https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
12. Data & Society. 2017. Available from: https://datasociety.net.
13. AI Now Institute. 2017. Available from: https://ainowinstitute.org/.
14. Hudson L. Technology Is Biased Too. How Do We Fix It? FiveThirtyEight; 2017-07-21. Available from: https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it/.
15. Kirkpatrick K. Battling Algorithmic Bias. Communications of the ACM. 2017;59(10).
16. PERVADE — Pervasive Data Ethics for Computational Research. 2017. Available from: https://pervade.umd.edu/.
17. Yao M. Fighting Algorithmic Bias And Homogenous Thinking in A.I. 2017. Available from: https://www.forbes.com/sites/mariyayao/2017/05/01/dangers-algorithmic-bias-homogenous-thinking-ai/.
18. AJL: Algorithmic Justice League. 2017. Available from: http://www.ajlunited.org/.
19. Science and Engineering Degrees, by Race/Ethnicity of Recipients: 2002–2012. NCSES, US National Science Foundation (NSF); 2017.
20. Landivar LC. Disparities in STEM Employment by Sex, Race, and Hispanic Origin. Washington, DC: United States Census Bureau; 2013.
21. Dickey MR. Google taps Van Jones and Anil Dash to discuss race and algorithmic bias. TechCrunch; 2016-12-12. Available from: http://social.techcrunch.com/2016/12/12/google-taps-van-jones-and-anil-dash-to-discuss-race-and-algorithmic-bias/.
22. Edionwe T. The fight against racist algorithms. 2017. Available from: https://theoutline.com/post/1571/the-fight-against-racist-algorithms.
23. Pretorius L, Barnard A, de Ridder C. Introducing computer ethics into the computing curriculum: two very different experiments. 2017. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.9214&rep=rep1&type=pdf.
24. Barocas S, Bradley E, Honavar V, Provost F. Big Data, Data Science, and Civil Rights. Computing Community Consortium; 2017.
25. Singer N. Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It. The New York Times; 2018. Available from: https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html.
26. The Ethics and Governance of Artificial Intelligence. MIT Media Lab; 2018. Available from: https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-intelligence/.
27. Syllabus for CS109. University of Texas at Austin; 2018. Available from: https://www.cs.utexas.edu/~ans/classes/cs109/syllabus.html.
28. Trabia M, Longo JA, Wainscott S. Training Graduate Engineering Students in Ethics. 2016.
29. Palmer CC. CS89 Cognitive Computing with Watson. 2017. Available from: http://www.cs.dartmouth.edu/~ccpalmer/teaching/cs89/Course/CS89-Resources/index.html.
30. Lardinois F. Google Wants To Improve Its Translations Through Crowdsourcing. TechCrunch; 2014-07-25. Available from: http://social.techcrunch.com/2014/07/25/google-wants-to-improve-its-translations-through-crowdsourcing/.
31. Coldewey D. New York City moves to establish algorithm-monitoring task force. TechCrunch; 2017-12-12. Available from: http://social.techcrunch.com/2017/12/12/new-york-city-moves-to-establish-algorithm-monitoring-task-force/.
32. Levendowski A. How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem. Washington Law Review, Forthcoming; 2017.
Working Towards Socially Responsible Algorithms: When Algorithms Become Tools of Injustice
2
towards-socially-responsible-algorithms-15b95d2a1494
2018-02-25
2018-02-25 22:55:12
https://medium.com/s/story/towards-socially-responsible-algorithms-15b95d2a1494
false
4,277
Think tank developing new questions, ideas, tools, & insights for trainees, organizations and policy makers in science around the world.
null
null
null
STEM Advocacy Publications
null
stem-advocacy
null
stemadvocacy
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The STEM Advocacy Institute
Developing new questions, ideas, tools, & insights for trainees, organizations and policy makers in science around the world.
a2c03e22fc42
stemadvocacy
15
38
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-29
2017-10-29 18:53:26
2017-10-29
2017-10-29 19:02:42
0
false
en
2017-10-29
2017-10-29 19:02:42
2
15b97d72880
0.418868
0
0
0
I just watched ReasonTV’s fascinating mini-documentary on George Hotz, and his quest to build a self-driving startup to compete with the…
1
Self made, Self driven I just watched ReasonTV’s fascinating mini-documentary on George Hotz, and his quest to build a self-driving startup to compete with the likes of Tesla. Check out his company, Comma.ai here: https://comma.ai/ Company updates here: https://twitter.com/comma_ai In other AI news, Hanson Robotics’ famed “Sophia” robot made headlines this weekend at the Future Investment Initiative conference, where she was the first non-human entity to receive citizenship from Saudi Arabia. Sophia’s speech at the Future Investment Initiative conference While Hanson Robotics is named after famed robot designer David Hanson, the AI behind Sophia is largely the work of Ben Goertzel. To learn more about his quest for AGI, check out this mini-documentary from 2009.
Self made, Self driven
0
self-made-self-driven-15b97d72880
2018-05-30
2018-05-30 23:01:21
https://medium.com/s/story/self-made-self-driven-15b97d72880
false
111
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Lucas van Lierop
Recent Yale Grad. Tech Enthusiast. Environmentalist. Opera Singer. h+
5e16e3473654
lvlierop
80
90
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-27
2018-02-27 01:34:23
2018-02-27
2018-02-27 19:23:58
7
false
en
2018-03-11
2018-03-11 16:13:12
2
15b9af87c9d8
6.774528
6
0
0
Location, location, … cancellation? What causes Airbnb price difference?
4
Making Models (I) | Airbnb Price Prediction: Data Analysis Location, location, … cancellation? What causes Airbnb price difference? This is part one of a series documenting the end to end process to develop and design a generalized linear model that outputs predicted Airbnb rental price. As a whole, the series will include a description of dataset analysis, advice for additional data collection via web scraping methods, feature engineering (specifically for unstructured images), model selection and results. Readers should find this document helpful when developing their own predictive models, or when looking for a framework to organize their thoughts. Most importantly, I hope to demystify some of the process behind “data science” by breaking down a typical workflow into distinct and modular activities that can be reproduced for many types of problems. If you find it helpful, or have a question, please feel free to leave a comment below and I will answer to the best of my ability. Today we will break our analysis down into several parts, each with a specific goal in mind. These include: Description of dataset characteristics and explanation of each variable in plain English. Target variable examination to gain an understanding of possible influences. Multivariate study of each feature, and potential relations between variables. By the end of this section, we should have a better understanding of the features that make up our dataset and how they impact our target variable. Dataset Description To accurately predict Airbnb price, we aim to collect a dataset containing features which directly impact the rental price. No better place to start than by gathering a number of listings with fields directly from the site. Below you will find a list of the features that were taken from Airbnb and which turn out to be very important attributes in the price prediction. Since we know the price for each row, this can be classified as a supervised learning problem, and we will split our data into distinct training, test, and cross validation sets. For now, we will examine the dataset as a whole, and come back to this division later. As a general rule, I like to examine a dataset’s features for several characteristics before proceeding or deciding to gather additional data. These characteristics include: Number of missing values and how to deal with them (NaN or null) Type of data (categorical, boolean, image, numerical, text, etc) Shape and size of data (this impacts the type of model we will use) Classical statistical analysis (mean, median, range, variance, st. dev, etc) Understanding the problem At a glance: Target variable: Log price (natural logarithm) Features of dataset: id (numeric) | Unique identifier for each listing property_type (categorical) | (e.g. Apartment, house, condo) room_type (categorical) | (e.g. Entire home/apt, private room) amenities (text) | Unstructured list separated by commas (e.g. tv, kitchen). Candidate for textual analysis. accommodates (numeric) | Number of people the rental fits bathrooms (numeric) | Number of full and/or half baths bed_type (categorical) | (e.g. futon, real bed) cancellation_policy (categorical) | (e.g. Flexible, moderate, strict) cleaning_fee (boolean) | T/F city (categorical) | (e.g. Boston, NYC, LA) description (text) | Unstructured and up to the host how to populate. Candidate for textual analysis. 
first_review (date) | How long ago the first review was left host_has_profile_pic (boolean) | T/F (no link to picture) host_identity_verified (boolean) | T/F (via email verification) host_response_rate (numeric) | How often the host replies to inquiries (%) host_since (date) | Date that they opened their account instant_bookable (boolean) | T/F last_review (date) | Date of most recent review latitude (numeric) longitude (numeric) name (text) | Name of rental property. Candidate for textual analysis. neighbourhood (categorical) | Informal description of neighborhood (e.g. Brooklyn Heights, Downtown) number_of_reviews (numeric) | Total number of reviews given by guests review_scores_rating (numeric) | Mean rating of reviews given by guests thumbnail_url (text; we’ll come back to this) | Link to primary photo of rental property. Candidate for image analysis. zipcode (numeric; likewise) | Zipcode. Candidate for bringing in additional data. bedrooms (numeric) | Number of bedrooms in rental beds (numeric) | Number of beds in rental Let’s break this down a bit. The type of feature (categorical, numeric, boolean, text) will impact the way that we perform our analysis and choose our eventual model. For our numeric indicators we can perform statistical analysis, but for our categorical and text data we will have to get a bit more creative. The purpose of this portion was to look at our current information and get a feel for what we are working with. The Airbnb website helpfully provided these features, but it does not tell us which is more or less important as an indicator of price. Based on anecdotal experience (a.k.a. staying in Airbnbs), I would guess that number of beds, accommodates, and zipcode are probably important for determining price. For now, I’m going to leave all features intact — we will return to this during our multivariate analysis! Target variable analysis Recall that we are attempting to predict log_price as our target variable. I’ll be using Python with Jupyter notebooks to do some of the manipulation and will include code snippets when applicable. We are using the pandas library for analysis here — highly recommend. We can see that we’ve gathered ~74,000 rows of information, and log_price fluctuates between 0 and 7.7. The zero value here is problematic, and will require a closer examination. If hosts are offering free rentals, I’d like to know! Only one listing had a “0” value for log_price — I knew it was too good to be true. We’ve removed the row with a zero value for log_price — much better. Getting an idea of this via a histogram will help to determine if any additional transformations need to be made before proceeding. Minimally skewed and showing insignificant kurtosis (<1), this passes the spot check for normal distribution (bell shaped, symmetrical about the center). Likewise, our probability plot appears linear and reinforces our decision to leave log_price as-is. We have now painted a picture of log_price that will be useful when examining the results of our model. When the time comes to predict results we can compare our test set to this distribution and determine if we are in the ballpark. We still have some work to do before we get there — let’s dive into what causes it to tick. Multivariate study Taking a look at how this set stacks up, we already know that we have gathered ~74,000 rows, giving us some leeway if we decide to remove listings lacking information. In fact, it’s probably a good idea to see which are the biggest culprits for missing values.
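For readers following along in code, a minimal pandas sketch of this check (the file name is a placeholder for the scraped listings):

```python
import pandas as pd

# load the ~74,000 scraped listings (file name is a placeholder)
df = pd.read_csv('airbnb_listings.csv')

# drop the lone free-rental artifact discussed above
df = df[df['log_price'] > 0]

# rank features by their share of missing values, worst offenders first
missing_share = df.isnull().mean().sort_values(ascending=False)
print(missing_share.head(10))
```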
We can see that host response rate and review scores rating are missing nearly 1/4 of the time. Now that we have identified features with a large number of missing values, we must decide what to do with them. There are a number of options available, such as deleting the features entirely, removing rows where these features are not present, or replacing the empty cells with zeros, averages, maxes or minimums. This decision will vary based on your model parameters, and right now we will earmark this and replace the missing values with “0” so as to not throw errors in our computations. Much better. It’s generally a good idea to identify how our variables correlate with one another, and pandas provides an easy way to do this with the corr method. Correlation matrices provide a quick and easy visual to make sense of multiple variable interactions. In this figure, the lighter colors represent high correlations with the intersecting features. Our attention is drawn to the accommodates, bathrooms, and cleaning_fee as features that are highly correlated to log_price. This makes sense based on our initial observation, although the high importance of cleaning_fee is not something we predicted. Another benefit is that we are able to see features that are highly correlated with one another, which may thereby impact our model creation. Latitude & longitude and beds & bedrooms are highly correlated as expected. Let’s note this and come back to this observation later when building our model. These correlation matrices are good for numerical values, but do not give much insight as to what’s happening in our categorical and text fields. Generally, I will replace categorical features with numeric codes or dummy variables (e.g. NYC = 1, Boston = 2). For unstructured text, we must take a different approach. Qualitatively, it would be nice to get a sense of the typical content of these features. Luckily, we can make a word cloud that will help us out — let’s take a look: Name Amenities Description We can see how word clouds allow us to quickly identify the most frequent tokens that appear in each feature. By breaking down each string into separate components, we are able to identify common words that appear over and over again. Unsurprisingly, “room”, “bed”, and similar qualifiers are the most frequently used across categories. I’ll leave it as an exercise to determine how this observation can be successfully incorporated (consider frequency). This brings us to the end of our analysis! We have sliced and diced our data and can say that we now have a deeper understanding of the underlying variables that influence log_price. In the next part, we will review methods for additional data collection via web scraping, using zipcode as our key. Thank you for reading, and feel free to clap if you enjoyed.
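As a postscript, a sketch of the fill-and-correlate step described above, reusing the df from the earlier snippet (seaborn is one common choice for the heatmap):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# earmarked earlier: fill missing values with 0 so computations don't error out
df = df.fillna(0)

# pairwise correlations of the numeric features, rendered as a heatmap
corr = df.select_dtypes('number').corr()
sns.heatmap(corr, cmap='viridis')
plt.show()
```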
Making Models (I) | Airbnb Price Prediction: Data Analysis
11
making-models-airbnb-price-prediction-data-analysis-15b9af87c9d8
2018-05-18
2018-05-18 08:28:38
https://medium.com/s/story/making-models-airbnb-price-prediction-data-analysis-15b9af87c9d8
false
1,517
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Philip Mohun
Bioengineer, Consultant | Analytics, Blockchain, AI
fb34cd73178f
philmohun
16
20
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-17
2018-07-17 06:34:30
2018-07-17
2018-07-17 07:05:39
6
false
en
2018-07-26
2018-07-26 01:35:02
5
15b9b6a0be02
9.516038
13
1
0
The Digital Reserve:
4
Credit Scoring for Blockchain Microfinance Institutions (MFIs) The Digital Reserve: At The Digital Reserve, we aim for transparency and community engagement. As a result, we are creating a series of blogs on credit scoring methods in the microfinance space so that our community understands our ambitions as well as the potential shortfalls of credit scoring. The Digital Reserve hopes to incorporate a machine learning credit scoring system into our microfinance platform to improve access to credit and the health of our lending network. In the process of creating this machine learning system, we have studied extensively how lending decisions are made. In this blog we would like to share several aspects of that decision process. Firstly, we will explain how borrowers in the developed world, where credit scoring has worked well, are different from borrowers in the developing world, where the industry is still exploring credit scoring systems. Secondly, we will describe common metrics that can and should be used in a credit scoring system. Thirdly, we will explain the different uses of scores depending on where a borrower is in the lending cycle. Fourthly, we will discuss the most common models that are used in the literature about approval credit scoring in microfinance. Finally, we will discuss the results from the models and compare their predictive power. This exercise is to acknowledge the potential and expose the limitations of statistical and machine learning credit scores for predicting default risk. In the following blogs we will dedicate entire posts to different statistical and machine learning models with code and hopefully create community dialogue surrounding these models. Credit Scoring in Developed and Developing Nations: What’s the Difference? Credit scoring has proven to be a beneficial technology in the developed world by allowing low- and middle-income consumers to access a lifestyle that was previously inaccessible to these socio-economic classes. This technology in part allowed for the creation of the credit card industry, which gives companies the ability to instantaneously approve low-risk customers and improve their profits. Microfinance has also proven to be a beneficial technology, one which has resulted in the financial inclusion of millions of families in developing nations around the world. The majority of microloans are disbursed through the use of subjective scoring by expert lenders in the field. As a result, the number of loans available is bounded by a lender’s ability to accurately assess credit risk and lend. In some of the most efficient MFIs in Latin America, an individual lender can handle approximately 600 loan applications per year. A natural question would be, how do we increase this number so that both borrowers and lenders can maximize the benefits of microloans? The immediate answer may be credit scoring! There are numerous advantages, such as the efficiency gains an MFI may be able to acquire through the use of statistical or machine-learning-based credit scores; however, there are many barriers to adoption. In the developed world individual borrowers have the following: Salaried incomes Credit histories recorded with credit bureaus In the developing world individual borrowers typically have the following profile: Self-employed Lack of proper identification Work in the informal economy Lack of property rights to allow for collateral loans to begin a credit history.
(Lack of property is not exclusive to the developing world — young borrowers in developed countries face similar issues) The certainty of salaried incomes and well-established data on borrowers provides an opportunity to automate the process of credit scoring through statistical or machine learning methods. As a result, very accurate credit scores can be created with relatively few parameters, say 15–20 variables. The same is not true in the developing world, where the borrower profile requires significantly more parameters to build a very accurate credit scoring model. Parameters for credit scoring in the MFI: This section does not present a complete list of the necessary parameters, but provides a good start. Most of these metrics are repeated throughout the literature on the subject. Instead of presenting each parameter individually, I will present broad categories that these parameters fall under and give an example of each. These categories include the following: Individual Demographics Contact Information Household Demographics Household Assets Business Demographics Financial Flows % Ownership of enterprise Repayment record Proxies for personal character Loan Characteristics Figure 1 below, while not showing as many categories, gives an example of categories, parameters and values. Figure 1: an example of data required for creating a credit scoring model in MFI The 10 broad categories above have several parameters each. Where the credit scoring models in the developed world may only require 15–20 variables, an MFI may need between 50 and 100 variables to build an accurate credit scoring system. Some of these variables may be used for more than one purpose in the credit cycle, from lending to retention. In the next section we will discuss the different kinds of scoring used throughout the lending cycle. Different Scores for different parts of the lending cycle: When people think of credit scores, they typically are thinking in terms of scores that approve or reject a loan application. Although this blog will focus mostly on this kind of score, it is important to point out that there are other scores, roughly three, that are used throughout the lending process. The three types of scores are the following: Approval Score — The score necessary for determining whether or not to lend to a new client Collection Score — The score provides a company with the likelihood that a client who is now past due will repay their loans. Desertion Scoring (Customer Loyalty) — The score provides an institution with the probability that a previous borrower will borrow again from the company. Figure 2 below shows a typical lending process cycle. This lending cycle is broad so as to be roughly consistent regardless of whether a company is using Expert, Statistical or Machine Learning credit scoring. Figure 2: A generic lending process cycle While there are multiple scores used for the lending cycle, for the duration of this article we will only focus on approval scoring. We address collection and desertion scoring because they will be important in later blog pieces about incentives for good repayment and retention. Again, since The Digital Reserve seeks to improve on the lending process of current MFIs by incorporating automated credit scoring systems, we would like to be transparent about their current limitations along with promoting their potential.
Different Approval Scoring Models: To keep this blog post readable for all backgrounds, I will introduce the models here in layman’s terms and provide links to the mathematical and computational details for the initiated members of the league of shadows. In the future, these models will be replicated and improved upon in future blogs and on GitHub, and will be linked in due time. By far the most common approval credit scoring models are linear discriminant analysis (LDA), quadratic discriminant analysis (QDA) and logistic regression (LR). We will go into more detail on LDA and LR and will additionally discuss Neural Networks (NN), specifically Multilayer perceptron (MLP) NNs, and their performance with micro-lending data. Figure 3 below shows a more complete list of models used for microfinance. Figure 3: Publications on Credit Scoring Models (Statistical) for MFI What the heck is discriminant analysis and logistic regression? Discriminant Analysis is a classification model that allows us to maximize separation between two distinct groups. For a credit scoring model, our two distinct groups are good borrowers and bad borrowers. In discriminant analysis we are estimating the probability that a collection of inputs S=(25yr/old, female, income = 15,000, educ = 14, smoker = yes) belongs to the class good or bad. The model makes use of prior information to determine the probability (Bayes Theorem). For binary data, logistic regression and LDA are not much different, in that both attempt to create a linear demarcation between two distinct groups. They do however differ in their assumptions about probability densities, size of groups, etc. Discriminant analysis makes more assumptions about the underlying data than LR, and for that reason they are used in different scenarios. For a detailed comparison, read this paper here. What are neural networks? Neural networks are a computer scientist’s attempt at modeling human decision making in the computational world. In essence, the algorithm learns from examples and makes decisions based on that learning. In our credit scoring example, the network looks at previous examples to determine whether someone is potentially a good or bad borrower. Follow figure 4 below along with the example for a graphical flow of a neural network model. Let’s start our example with some inputs and say that a set S = (18yr/old male, $4000 income, smoker = 1, eats fast food everyday = 1) is a bad borrower. The preceding pieces of information are the inputs, denoted (x). After seeing similar patterns over time, the neural network learns to distinguish the bad from the good borrowers. Now let’s say that over time smoker = 1 shows up 60% of the time in the bad borrower input set and 40% in the good borrower input set, while eats fast food everyday = 1 shows up 95% of the time in a bad borrower’s set and only 5% of the time in a good borrower’s set. These inputs are given different weights in the input set to determine the category a borrower belongs to. Weights are denoted (w). After giving weights to these inputs, they are aggregated into a number. If this number is above a certain threshold, say 10, we can determine whether or not a borrower is good or bad. This is a neural network. For a good introduction to neural networks, read here. Figure 4: Graphical representation of a perceptron, Neural Network model. We have managed to get through these models without using math! Hooray!
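To make this concrete without using our platform's data, here is a minimal scikit-learn sketch of the two linear baselines discussed above, fit on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for borrower features; y: 1 = good borrower, 0 = bad
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# fit and compare the two linear classifiers on held-out data
for model in (LinearDiscriminantAnalysis(), LogisticRegression(max_iter=1000)):
    model.fit(X_train, y_train)
    print(type(model).__name__, 'test accuracy:', model.score(X_test, y_test))
```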
Now that you have been exposed to the models that have been commonly used for credit scoring in microfinance, we will compare their results in the next section. The results of credit scoring models: The results discussed here come from two papers, one that used data from the microfinancing industry in Peru and one that used data from Tunisia. These papers were selected because they specifically compared LDA, LR and NN models to one another. In comparing classification models, the standard practice is to compare the number of correct and incorrect classifications, for us good or bad borrowers, and the cost of misclassification. In the paper on Peru, researchers used an area under the curve (AUC) method to compare models. Area under the curve basically takes the number of true positives (someone was actually a good borrower) and false positives (someone was actually a bad borrower but classified as good) and calculates the probability that a new borrower who is ranked as a good borrower is actually a good borrower. Say we have 10 borrowers that are actually good and 5 that are actually bad but ranked good. (Please follow figure 5 for the following explanation.) Then the probability that a borrower ranked good is actually good is 10/15, while the probability that they are actually bad but ranked good is 5/15. If we take the point (.33, .66) and plot it in a 2-dimensional Euclidean space that is separated by a 45 degree diagonal line (equal to a 50/50 chance of being good or bad), this point is plotted above that line, meaning our model performs better than chance (50/50). Figure 5: Graph of AUC Using the AUC method, several of the 14 neural network models performed significantly better than LR and LDA. However, every model had type I (false positive) and type II (false negative) error rates in the double digits. This suggests that these models will be accurate less than 90% of the time. This is potentially problematic given the 90+% success of expert judgment scoring systems in microfinance. The Tunisian microfinance paper used a Correct Classification Rate to compare the models. They additionally used sensitivity and specificity. The calculations are as follows: CCR = (number of correctly classified borrowers / total number of borrowers)*100 Sensitivity = (correctly classified good borrowers / all good borrowers) Specificity = (correctly classified bad applicants / all bad applicants) Figure 6: Comparison of Logistic Regression and Neural Network Determining which one is a better model depends largely on whether Type I or Type II errors have a higher cost to the lending institutions — meaning whether an institution loses more by turning away good borrowers or lending to bad ones. In either case, these models are still far from perfect, and while these are only two papers, many of the other models throughout the literature display similar results. Wrapping it Up: So what does this all mean? In short, that there is potential for statistical and machine learning methods in credit scoring, but they are not yet ready to replace judgment-based expert lending practices in their entirety. The obstacles to creating a world-class automated credit scoring system for microfinance are limited data, information silos, formal vs. informal markets, credit risk management, and introducing these new technologies to old institutions. The Digital Reserve plans to address several of these issues over the next several years to a decade to create thriving financial markets in developing nations.
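For readers who want to reproduce the comparison metrics above, a small sketch with toy labels and scores (not data from either paper):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# y_true: actual borrower quality (1 = good), y_score: model's probability of "good"
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 1])
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.7, 0.2, 0.85, 0.55, 0.65])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ccr = 100 * (tp + tn) / (tp + tn + fp + fn)  # Correct Classification Rate
sensitivity = tp / (tp + fn)                 # correct among actual good borrowers
specificity = tn / (tn + fp)                 # correct among actual bad applicants
auc = roc_auc_score(y_true, y_score)         # area under the ROC curve
print(ccr, sensitivity, specificity, auc)
```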
Moving forward, we will explore these and more models in an interactive way with the community through blogs and GitHub. In addition, we will discuss the larger macro issues, policy issues and business models that can help create the best credit scoring models for the microfinance industry. References: Dr. Aïda Kammoun et al., Credit Scoring Models for a Tunisian Microfinance Institution: Comparison between Artificial Neural Network and Logistic Regression, 04/07/15, http://www.bapress.ca/ref/ref-article/1923-7529-2016-01-61-18.pdf Antonio Blanco et al., Credit scoring models for the microfinance industry using neural networks: Evidence from Peru, January 2013, https://www.researchgate.net/publication/257404569_Credit_scoring_models_for_the_microfinance_industry_using_neural_networks_Evidence_from_Peru Dean Caire, A Handbook for Developing Credit Scoring Systems in a Microfinance Context, accessed 7/17/18, https://pdfs.semanticscholar.org/16c8/1b44d4b4b842e12c3e800b0d6113c6d5f471.pdf
Credit Scoring for Blockchain Microfinance Institutions (MFIs)
433
credit-scoring-for-microfinance-institutions-mfis-15b9b6a0be02
2018-07-26
2018-07-26 01:35:02
https://medium.com/s/story/credit-scoring-for-microfinance-institutions-mfis-15b9b6a0be02
false
2,270
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Troy Wiipongwii
Social tech entrepreneur with an eye on policy , blockchain and sustainability. Catch me in Korea!
c4128d42992d
TroyWiiBot
21
49
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-22
2018-09-22 21:08:23
2018-09-22
2018-09-22 21:21:03
0
false
en
2018-10-04
2018-10-04 16:49:34
9
15ba8d86f275
0.690566
0
0
0
There’s a lot going on. Here are the articles (both scholarly and otherwise), stories, Tweets, and latest news I’ve read. (most recent at…
5
Interesting articles about Privacy, Ethics, Algorithms, AI, ML, & Tech Education There’s a lot going on. Here are the articles (both scholarly and otherwise), stories, Tweets, and latest news I’ve read. (most recent at top) Exclusive: WhatsApp Cofounder Brian Acton Gives The Inside Story On #DeleteFacebook And Why He Left $850 Million Behind. September 2018. Google’s Framework for Responsible Data Protection Regulation. September 2018. Examining Safeguards for Consumer Data Privacy. Hearing Details: Wednesday, September 26, 2018, 10:00 a.m. EST, Full Committee, Dirksen Senate Office Building G50. Witness testimony, opening statements, and a live video of the hearing will be available on www.commerce.senate.gov. Just Don’t Call it Privacy. Natasha Singer, NYT. September 2018. University of Iowa grad student uncovers security issues at Facebook, Twitter. September 2018. Want Less-Biased Decisions? Use Algorithms. Harvard Business Review. July 2018. C. Fiesler and N. Proferes, “‘Participant’ Perceptions of Twitter Research Ethics,” Soc. Media Soc., vol. 4, no. 1, p. 205630511876336, Jan. 2018. Catherine E. Tucker (2014) Social Networks, Personalized Advertising, and Privacy Controls. Journal of Marketing Research: October 2014, Vol. 51, No. 5, pp. 546–562. (must subscribe to read)
Interesting articles about Privacy, Ethics, Algorithms, AI, ML, & Tech Education
0
interesting-articles-about-privacy-ethics-algorithms-ai-ml-tech-education-15ba8d86f275
2018-10-04
2018-10-04 16:49:34
https://medium.com/s/story/interesting-articles-about-privacy-ethics-algorithms-ai-ml-tech-education-15ba8d86f275
false
183
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Natalie M Garrett
Technology is inevitable. If we are going to do it, let’s be intentional and focus on technology that positively impacts our society.
406ab95b57c0
nataliemgarrett
73
85
20,181,104
null
null
null
null
null
null
0
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# set indices for the split: tag each row with a fold number 0-4
df['set'] = pd.Series(index=df.index[:224990],
                      data=np.array([0, 1, 2, 3, 4] * int(len(df) / 5)))

sm = SMOTE(kind='regular')  # note: fit_sample was renamed fit_resample in later imblearn releases

for s in [0, 1, 2, 3, 4]:
    # subset test set: the rows tagged with this fold
    test_set = df.query('set == %d' % s)
    X_test = test_set.drop(['set', 'DRUNK_INV'], axis=1)
    y_test = test_set['DRUNK_INV']

    # subset training set: everything outside this fold
    training_set = df.query('set != %d' % s)
    X_train = training_set.drop(['set', 'DRUNK_INV'], axis=1)
    y_train = training_set['DRUNK_INV']

    # SMOTE it up: balance the classes in the training data only
    X_sm_train, y_sm_train = sm.fit_sample(X_train, y_train)

    # train classifier on the balanced data, predict on the untouched test fold
    gbc = GradientBoostingClassifier().fit(X_sm_train, y_sm_train)
    gbc_smote_predictions = gbc.predict(X_test)

    # measure output
    print(s, 'FOLD Results')
    print(s, 'Accuracy :', accuracy_score(y_test, gbc_smote_predictions))
    print(s, 'Recall :', recall_score(y_test, gbc_smote_predictions))
    print(s, 'Precision :', precision_score(y_test, gbc_smote_predictions))

0 FOLD Results
0 Accuracy : 0.774101071159
0 Recall : 0.65952622938
0 Precision : 0.592166222097
1 FOLD Results
1 Accuracy : 0.773056580292
1 Recall : 0.653266331658
1 Precision : 0.589402096911
2 FOLD Results
2 Accuracy : 0.77659007067
2 Recall : 0.659114397011
2 Precision : 0.589854866249
3 FOLD Results
3 Accuracy : 0.775545579804
3 Recall : 0.659452677801
3 Precision : 0.593633090986
4 FOLD Results
4 Accuracy : 0.775501133384
4 Recall : 0.663986392454
4 Precision : 0.598717233687
2
null
2018-08-06
2018-08-06 00:02:04
2018-08-06
2018-08-06 00:03:10
20
false
en
2018-08-18
2018-08-18 19:00:31
7
15be3b196e66
14.776415
0
0
0
Springboard Data Science Career Track Capstone II Submitted: July 30, 2018
5
Capstone II: Exploring Fatal Car Crashes Springboard Data Science Career Track Capstone II Submitted: July 30, 2018 Initial Proposal The initial question of this project was: Is it possible to predict fatal road accidents in trucking? In this project, I wanted to use two publicly available datasets, the Fatality Analysis Reporting System (FARS) and the Freight Analysis Framework (FAF4) to predict fatal trucking accidents on selected roads. By mapping fatal truck accidents to segments of road then using spatial and non-spatial attributes of the road, I sought to build a predictive model that would have utility for both risk analysts and planners. This project could be useful for two groups: risk analysts and highway designers. Risk analysts, whether in the shipping industry, automated trucking startups, or insurance, need a reliable model for predicting accident rates along highways. This model could be built out further to incorporate risk based on time of day, congestion, and so forth. Insurers and others who require an understanding of risk could use the model to predict accident rates in areas where accident statistics are not readily available. Furthermore, understanding the factors that produce a dangerous section of road could be beneficial to civil engineers, as they will be able to set a target using this model and ensure their designs meet it. Data Wrangling | notebook Rather than using pandas and python scripts to clean the data, much of the Data Wrangling for this project was done using the open-source spatial software QGIS with a variety of Python scripts and plug-ins to load, examine, and manipulate the data before loading it into a dataframe for EDA and machine learning. The Data The data came from two sources: Freight Analysis Framework (FAF): This is a dataset of highways with geometry and attributes for every major roadway in the United States. A sub-dataset, the FAF Network Database and Flow Assignment dataset, assigns an approximated flow of truck traffic along each segment within the FAFS dataset. Fatality Analysis Reporting System (FARS): This is a long-running database that collects data on all the fatal car accidents in the United States. There are dozens of variables available for reference, including latitude and longitude. Format is .csv for 2010 through 2016. A quick visualization of the interstates (green lines) and accidents (purple dots) Loading, Joining & Filtering For the FAF dataset, there was a huge number of roads, so the first thing I did was reduce the full dataset to just interstate highways in the continental United States (i.e. excluding Alaska, Hawaii and territories) using QGIS and a SQL query to isolate the highest-class highway level. This resulted in 97,000 segments representing 295 different interstates. For the FARS data, I loaded the seven years of data into a pandas dataframe, and then loaded that into QGIS by utilizing the latitude and longitude attributes in the data. Building the DataFrame Basic Attributes The objective here was to create standardized units of highway, and count the number of car crashes that have occurred on each to provide a target variable for training, using the following process: Separate out each highway by a unique highway identifier (I-5, I-69, etc.) to its own shapefile. Reproject the highways into North American Equidistant Conic projection. Dissolve the highway to a single line feature, or multiple single features. [via Python script].
Use the QChainage plug-in (in script form) to create points every kilometer (1000m) along each highway [via Python script]. This resulted in about 74,000 points. Divide the highway into new segments at each point from Step 3 [via QGIS Graphical Modeler]. Drop all attributes from the features [via QGIS Graphical Modeler]. Convert the highway segments into polygons that cover a 50m buffer on each side of the highway [via QGIS Graphical Modeler]. Count the number of crashes in each segment [via QGIS Graphical Modeler]. Use a spatial join to join the 1km segments to the original attributes of the highways they overlap [via QGIS Graphical Modeler]. A Note on Scripting in QGIS Python scripting is supported and very useful for automating basic and even some more advanced GIS processes. For this project I used a couple of different tools: chained QGIS toolbox commands, graphical models, and open source plug-ins. QGIS toolbox commands such as reproject, dissolve and spatial join were chained together to modify vector files. An example can be seen in HighwayDissolve.py. Graphical models were used in some instances to create more complex models involving multiple inputs and outputs, and multiple branches of processing. Example below: Open source plugins, in particular QChainage, network and MultipartSplit, were used for specialized operations on the data. Plugins were used in every spatial processing step: MultipartSplit in preprocessing, QChainage for demarcating 1000m travel distances along roads, and then network for cutting roads at those exact points. Plugins were utilized differently, either being called from scripts or applied directly to the data, whichever I could get to work. Geopandas was used on one occasion to read in the point features created by the QChainage plugin and filter out points that were associated with a feature containing 2 or fewer points. This filter was implemented because errors were generated when lines were created from those features. Since these scripts were not run from jupyter, they are not included in the notebook but can be viewed on github from the links below: Spatial Analysis: Curve Index The Curve Index is an additional measurement I devised within QGIS to capture the inherent curviness of a given road, on the hunch that road segments with more curve would have more accidents. Put simply, it is 1 minus the ratio of the straight-line distance between a segment’s endpoints to the distance actually traveled along the road between them. A perfectly straight line will create a curve index of 0, while a perfect circle will create a curve index of 1. It is created via the following method: Using the points created from step 4 above, create a new line that connects those dots, aka a Euclidean / as-the-crow-flies distance measurement, referred to as ED. Calculate the Curve Index by dividing that measurement by 1000 (the along-road length of each segment) and subtracting the result from 1. Use a spatial join to apply the Curve Index of a given segment back to the real segment it is adjacent to. Curve index values visualized Results At the end of this process, I was left with about 290 shapefiles, each representing a different interstate highway in the United States, with a unique record for each segment, ~74,000 records in total, roughly one per every kilometer of interstate highway in America. QGIS is wonderful for geographic data, but I needed something a bit more nimble for tabular data, so I exported the shapefiles as .csv files (stripping them of their geometry in the process) and imported them into pandas.
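Before the preview below, here is a small sketch of the same Curve Index computed outside QGIS with shapely (the coordinates are made up, and a projected CRS in meters is assumed, matching the Equidistant Conic reprojection above):

```python
from shapely.geometry import LineString, Point

def curve_index(segment: LineString) -> float:
    """1 - (straight-line distance between endpoints / along-road length).

    Returns 0 for a perfectly straight segment and approaches 1 as the
    road loops back on itself.
    """
    start, end = Point(segment.coords[0]), Point(segment.coords[-1])
    euclidean = start.distance(end)
    return 1 - euclidean / segment.length

# a gentle dog-leg vs. a straight run (coordinates in meters, made up)
print(curve_index(LineString([(0, 0), (500, 200), (800, 0)])))  # ~0.11
print(curve_index(LineString([(0, 0), (1000, 0)])))             # 0.0
```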
A preview of that dataset is below: EDA and Pivot (and EDA again) | notebook Upon exploring the data produced from my data wrangling unit, I found the following: The number of traffic accidents is highly correlated with the total traffic flow, as might be expected. The target variable is extremely sparse, with only 3279 of the 70925 segments having accidents on them in the last five years, or roughly 4.6 percent of the total dataset. This meant the data was extremely unbalanced. Furthermore, I found there was no significant correlation between the road characteristics and the accident rate. No feature had more than .05 r-correlation with the accident rate. Due to the highly unbalanced data and lack of correlation, I concluded that I would not be able to achieve my goal. In response, I decided to use the same data to pivot my capstone towards predicting characteristics of the crashes themselves. The factor I will try to predict is whether or not at least one of the vehicles involved in the fatal collision had a drunk driver. Pivot Pivoting away from my original goal, I shifted my question to be: Is it possible to predict if one or more drunk drivers were involved in a fatal car crash using a limited amount of data? Answers to this question may be useful to the following stakeholders: Insurance companies: Insurance companies may be interested in further investigation of fatal crashes in many cases, especially when they suspect alcohol played a role in a crash. Law enforcement: Law enforcement will be interested so they can better target the timing and location of drunk driving enforcement efforts, such as traffic stops. Public health officials: Similar to law enforcement, public health officials may be interested in the results of this study to help decide where and when to focus anti-impaired driving advertising and other public health efforts. To answer this question, I would be using one of the two datasets I used in the Data Wrangling step, the FARS dataset. The dataset itself is a huge, high-quality, high-dimensional dataset that does not require a large amount of cleaning. It contains 224992 rows and over 176 columns, representing accidents from 2011–2015. However, most of those columns are administrative and of interest mainly to government analysts; for example, ten of the columns are dedicated to the ten individual digits of the Vehicle Identification Number. Therefore, it is easier to isolate actually useful elements from the data rather than cutting off non-useful variables. In this case, I focused on: Timing: Time of day and day of week. Crash events: Contained within the five PCRASH variables is a sequence of 5 steps that happen in every crash, with each step describing a different event. More description in EDA. Various other factors: Involvement of speeding and road type. For these factors, I translated and filtered out NaN values as well as removed outliers, but for the most part I consider the Data Wrangling done prior to the pivot to fulfill that component.
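As a hedged sketch of that isolation step, plus the one-hot expansion described later in the Machine Learning section (the file names and exact column list are placeholders; the real field layout lives in the FARS analytical manual):

```python
import pandas as pd

# concatenate the 2011-2015 FARS files (file names are placeholders)
fars = pd.concat([pd.read_csv('fars_%d.csv' % y) for y in range(2011, 2016)])

# keep only the handful of fields this analysis uses
cols = ['HOUR', 'DAY_WEEK', 'LGT_COND', 'DRUNK_DR',
        'P_CRASH1', 'P_CRASH2', 'P_CRASH3']
fars = fars[cols].dropna()

# target: at least one drunk driver involved in the crash
fars['DRUNK_INV'] = (fars['DRUNK_DR'] > 0).astype(int)

# re-align the clock so hour 0 is 7AM (the corrected_HOUR used below)
fars['corrected_HOUR'] = (fars['HOUR'] - 7) % 24

# fan each integer-coded categorical out into binary indicator columns;
# in the full dataset, 9 categoricals expand to ~176 columns
categorical_cols = ['P_CRASH1', 'P_CRASH2', 'P_CRASH3', 'LGT_COND', 'DAY_WEEK']
fars = pd.get_dummies(fars, columns=categorical_cols, prefix=categorical_cols)
```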
Findings Of the 40,237 fatal car crashes that occurred between 2011 and 2015, 29,291 of them did not have a drunk driver and 10,455 had at least one drunk driver, with 481 of those having two drunk drivers and, somehow, 27 of them involved four drunk drivers.

n drunk drivers | count
--- | ---
0 | 161198
1 | 60797
2 | 2858
3 | 112
4 | 27

After extensive analysis of the data, I can conclude the following about the occurrence of drunk driving accidents in the United States: More likely in the late evening and early morning hours. This is especially clear if you re-align the hour variable to indicate hours since 7AM. More likely on the weekends. More likely when the impaired driver is negotiating a curve or attempting maneuvers with other vehicles (merging, passing etc). This conclusion comes from analyzing which pre-crash factors are most common in drunk driving accidents, where a pre-crash factor is a categorical variable describing the primary action that caused the crash. More likely when the driver is speeding above the posted speed limit. The violin plot above shows the distributions of drunk (red) and non-drunk (blue) involved accidents on various types of roads (x-axis) and the amount by which the driver was speeding (y-axis). The FARS dataset is massive, both in length and width. Doing a full-level EDA of all the factors available for analysis would have taken weeks and was beyond the scope of this project. By selecting just a few factors, I pulled some powerful insights from the data and moved on to machine learning. Machine Learning | notebook The next task was to design an estimator that can make effective predictions of whether or not at least one drunk driver was involved in a fatal collision, using insights from the EDA portion. This involved preprocessing, model selection, optimization, and dealing with imbalanced datasets, all described below. Preprocessing The first task was to make the many categorical variables in the dataset more expressive and machine-readable. The variables were encoded as integers, each representing a different category of something. However, those integers are merely symbolic and have no linear or ordinal relationship to one another. Therefore, it is easier for sklearn’s models to read the categories when the categories are fleshed out to new columns with binary values, showing whether that particular record is in a certain category (represented by a 1) or not (0). This ends up creating n columns, where n is the total number of unique categories in the column. In this case, the 9 categorical variables in the dataframe were expanded to 176 columns. These categorical variables were crash factors and pre-crash actions, both of which are columns of integer values describing what sort of actions occurred at each step of the crash. This page from the FARS manual provides reference to these variables, and more can be found throughout that document. Model Selection Logistic Regression The first model type examined was Logistic Regression, which is the classification variant of linear regression and compares output values against a threshold. Training a basic model yielded a 76% accuracy score, but recall was very poor (many false negatives, meaning accidents with drunk drivers were not being caught by the estimator). The model was predicting more false negatives (prediction: sober; reality: inebriated) than true positives, meaning that the recall of the model on the positive class is very low.
Given the context of the problem, this is definitely an area for improvement since it missed a large percentage of the actually drunk drivers. However, given the sparseness of positive values in the target array, it makes sense that the model struggled to identify when they are present. Because of this, and the need in this context to produce as few false negatives as possible, some methods specific to working with imbalanced data were tried. In order to rectify this, I used the class_weight parameter of LogisticRegression and set it to “balanced” so that the estimator learns equally from both classes in the target array. This resulted in the following confusion matrix, which shows the problem basically got shifted from recall to precision. SMOTE One additional method I tried for unbalanced datasets was SMOTE, the Synthetic Minority Over-sampling Technique, which creates “synthetic” or slightly randomized duplicates of the minority class in my data — accidents with drunk drivers. SMOTE is implemented after the train/test split, because otherwise I end up creating very similar samples in my training and test sets, which leads to misleadingly high accuracy and recall scores. However, the results were not much improved, but the synthesized dataset proved useful for other models further along. Tree Models and Ensemble Methods The next models to try were the decision tree classifiers, which build a “tree” of decision points and nodes representing final decisions. I started with a simple DecisionTreeClassifier to determine a baseline for improvement, then moved on to Random Forest and Gradient Boosting models, before finally training the models on the SMOTE data for the optimal model. The confusion matrix for the baseline DecisionTree model is below, and while the true negative count is very high, overall accuracy is pretty weak due to overfitting to the training data. I next trained an ensemble method, RandomForestClassifier, which builds an ensemble of trees based on various bootstrap subsets of the data, then averages across them, resulting in less overfitting. This results in a higher accuracy model, but it ended up with the same problem as the LogisticRegression model — bad recall with too many false negatives. Analysis Before moving on to discussing other models, it is worth discussing something interesting about tree models in the context of this project — how interpretable they are, especially when it comes to the crash factors and pre-crash movements. As covered in the EDA section of my capstone, the pre-crash movement variables describe a story of a crash through categorical variables, where each of the five variables (P_CRASH1 — P_CRASH5) is a different step that led to the crash. Therefore branches of the decision tree may split at each action. For example, the decision tree learns that values like 1 in P_CRASH3, indicating that no avoidance maneuver was taken by the driver to avoid an accident, mean the driver was more likely to be impaired. Examining the feature importances can provide insight into what features are most important in the decision tree ensemble. In this case, I reverse the processing done in my preprocessing step with OneHotEncoder to aggregate the feature importances of each categorical variable. The feature importances in this case are not too different from the analysis of coefficients in a linear model.
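A minimal sketch of that aggregation, assuming a fitted ensemble named forest and the one-hot column naming from the sketch earlier (the prefix heuristic here is naive and purely illustrative):

```python
import pandas as pd

# per-column importances from the fitted ensemble (e.g. the RandomForestClassifier above)
importances = pd.Series(forest.feature_importances_, index=X_train.columns)

# fold 'P_CRASH2_54'-style one-hot columns back into their parent variable
# (naive: strips everything after the last underscore)
parents = importances.index.str.rsplit('_', n=1).str[0]
aggregated = importances.groupby(parents).sum().sort_values(ascending=False)
print(aggregated.head(10))
```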
As expected from our EDA, corrected_HOUR (which refers to the number of hours since 7AM) and LGT_COND (light conditions, closely correlated to HOUR) have the most decision power in the decision trees, but no one feature has the majority. Beyond that, aggregating the values for each subcategory of the categorical events lets us see how important each of those categories is as a whole, information that might otherwise be lost. We can see how Pre-Crash Event 2 — which identifies the attribute that best describes the critical event which made the crash imminent — is the most important of the categorical variables. We can use the non-aggregated series to see specifically which events are the most important. Let’s look at the top ten subcategories within P_CRASH2 and information from the analytical manual. The most important of these represent the following (keep in mind this can refer to any car in the crash, not just the inebriated driver): 80: Pedestrian in roadway 54: Vehicle travelling on wrong side of road 83: Cyclist in roadway 6: Travelling too fast for conditions 13: Off the edge of the road on the right side. 62: Vehicle travelling over left line from opposite direction. These all seem like actions that a drunk driver would either do or have trouble dealing with. It makes sense then that they are some of the most important decision points in the forest classifier, which serves to highlight the interpretability of the decision tree in this case. Further Model Selection I next tried a GradientBoosting model, which is another tree-based ensemble method. The results below are after optimizing the learning_rate parameter using GridSearchCV. Still not great, as the trade-off between recall and precision still seems to be affecting the performance of the model. Rather than trying to change the model, I decided to change the data the model was being trained on by using the synthetically balanced SMOTE data from earlier. Training on that data produced the best results and eliminated the trade-off between precision and recall while maintaining an overall accuracy score of 77%. To ensure that these results were not from random chance in the selection of the training data in the 5-fold split, I built a loop that segmented the data on the standard 80/20 split, used SMOTE to balance the classes, then trained a GBC estimator and returned the accuracy, precision and recall scores from each K-fold. That code is below: Results: Given the relative consistency of these results, I concluded this was the best model for the data. Conclusion What I built and tuned here from my data is a classification estimator that can predict whether or not a driver was drunk in an accident based on the day of the week, time of day, the actions taken by the drivers, and the number of people involved in the crash. The accuracy of these predictions is just under 80%, which is not perfect, but it can be used to guide investigations, public health decisions, and possibly civil safety engineering. Originally published at gist.github.com.
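As a postscript, the learning_rate search mentioned above might look roughly like this (the grid values are illustrative; X_sm_train and y_sm_train are the SMOTE-balanced arrays from the loop):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# search a small grid of learning rates, scoring on recall to limit false negatives
param_grid = {'learning_rate': [0.01, 0.05, 0.1, 0.2, 0.5]}
search = GridSearchCV(GradientBoostingClassifier(), param_grid,
                      scoring='recall', cv=5)
search.fit(X_sm_train, y_sm_train)
print(search.best_params_, search.best_score_)
```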
Capstone II: Exploring Fatal Car Crashes
0
capstone-ii-accidents-analysis-15be3b196e66
2018-08-18
2018-08-18 19:00:31
https://medium.com/s/story/capstone-ii-accidents-analysis-15be3b196e66
false
3,452
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Chester Hitz
I am a map-minded aspiring Data Scientist based out of San Francisco, CA.
7eabcdc676b
chesterhz
33
32
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-24
2018-04-24 05:56:38
2018-07-01
2018-07-01 18:57:04
1
false
en
2018-07-01
2018-07-01 18:57:04
1
15c08a0581c2
1.716981
0
0
0
Talking. The big thing in technology now is to make talking a richer and multi-faceted way of communication. Talking not just to people but…
5
The power of speech, the opposable thumbs and a mysterious brain. Professional okra chopper in an Indian canteen. Talking. The big thing in technology now is to make talking a richer, more multi-faceted way of communicating. Talking not just to people but also to machines. Humans have mostly used their opposable thumbs to conquer the physical world through muscle power; and now the power of speech to move, motivate and vex. Ergonomics, user-friendly, usability, userbility, etcetera… Even to this day, designers thrive on iterating products that are compatible with this unique set of opposable thumbs… How do you grab a phone to click a selfie?… Should the fingerprint sensor on your mobile be on the home button, the rear side or on the side edge?… Be it a bottle, a shoe-string, a pair of chopsticks, a door knob or a typing interface… Oh, and did you know that in the tech community, chatting means typing? It’s no longer what grandmas did with their neighbours on lazy afternoons. And now talking through a phone is being ‘on a call’. So when a colleague asked me to ‘call’ another one, it took me some time to decide whether she meant a phone call or simply calling out by voice. Technology is now high on the vision of humanising machines. Teaching machines to react like humans has long been a dream. So speech is the power we want to bless machines with for now. It is a notion that unsettles the very basis of human-to-human interaction. But why let ethics hold back a plausible emergence? Teaching is often said to be the best method of learning, mostly because of feedback: as a teacher, one is exposed to a greater number and variety of feedback, through which a teacher is able to realign, re-learn, re-know and eventually re-teach. Somewhere in the process the roles get reversed and re-reversed and we find ourselves dwindling in this eternal loop. And the loop dwindles too within a greater loop of purpose, destinations and goals. Therefore, it becomes of prime importance to define the destination where we want to head. And that remains the greatest mystery of all time. We seek our purposes in the cosmos, in our intuitions and the human brain. After all, the brain is said to have more connections than there are stars in our galaxy. Word cookie: kothopokothon, Bengali for conversation. Reference: http://www.pangaro.com/conversation-theory-in-one-hour.html
The power of speech, the opposable thumbs and a mysterious brain.
0
the-power-of-speech-the-opposable-thumbs-and-a-mysterious-brain-15c08a0581c2
2018-07-01
2018-07-01 18:57:04
https://medium.com/s/story/the-power-of-speech-the-opposable-thumbs-and-a-mysterious-brain-15c08a0581c2
false
402
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mandrila Biswas
null
f57a9e17ffb5
mandrilabiswas
94
96
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-11
2017-12-11 04:03:45
2017-12-11
2017-12-11 04:05:34
1
false
en
2017-12-11
2017-12-11 04:05:34
0
15c10b0b1a90
2.075472
0
0
0
The human brain has long been an object of fascination and curiosity for researchers. The way it functions and efficiently supports the…
1
NEUROMORPHIC COMPUTING CHIP — THE NEXT EVOLUTION IN ARTIFICIAL INTELLIGENCE The human brain has long been an object of fascination and curiosity for researchers. The way it functions and efficiently supports human cognitive abilities, with biological energy and neurons as its basic unit, has always been a source of inspiration. Inspired by the brain’s fast computational speed, scientists have designed neuromorphic computing chips which specifically mimic the human brain and could be a stepping-stone in the evolution of artificial intelligence and computing. After decades of research and collaboration, Intel has introduced a first-of-its-kind self-learning neuromorphic computing chip which can also be called ‘a brain of silicon’. It is a technology which will help people understand the human brain more efficiently while maintaining the energy efficiency and cost-performance benefits you get from the human brain. Neuromorphic chips are basically microprocessors whose architecture is similar to that of a human brain model, where networks of neurons are interconnected and the connections between them are called synapses. It is one of the fastest-growing technologies in the market, projected to reach approximately $1.78 billion by 2025. The concept behind the invention was to develop computing circuits that resemble human brains and can unlock new possibilities by making the world connected and smarter with experience. According to Intel, the chip can learn and adapt to things on the go. Unlike other machine learning systems that require deep learning and intensive training of data using huge clusters of computers, the neuromorphic computing chip will be a self-learning chip. Imagine a future where complex decisions could be made faster, where robots are more autonomous, where stoplights can automatically sync their timings to the flow of the traffic, where cameras can look for a missing person. The Intel researchers think that the neuromorphic computing chip can help the world get smarter over time by using real-time data to learn, and thus redefine the classic compute platforms. The highly energy-efficient chip raises the bar for artificial intelligence by taking an innovative approach to computing via asynchronous spiking. The spikes make the chip event-driven, operating only when needed, resulting in a better operating environment and low energy consumption. These chips not only consume little power but are also good at tasks that need pattern-matching rather than raw supercomputing, for example self-driving. In the future, these chips can prove an efficient solution for processing and analysing the huge amount of data generated by sensor networks and self-driving cars. So, where exactly are neuromorphic chips applicable? The technology is ideal for analysis-based tasks such as cognitive computing, adaptive AI, data sensing and associative memory. The neuromorphic computing chip is the future of AI, easing its diverse and complex workloads. However exciting and promising all this sounds, neuromorphic chips will still take time to come out in the commercial markets, but when they do, they will definitely redefine AI and computing platforms!
NEUROMORPHIC COMPUTING CHIP — THE NEXT EVOLUTION IN ARTIFICIAL INTELLIGENCE
0
neuromorphic-computing-chip-the-next-evolution-in-artificial-intelligence-15c10b0b1a90
2017-12-11
2017-12-11 04:05:35
https://medium.com/s/story/neuromorphic-computing-chip-the-next-evolution-in-artificial-intelligence-15c10b0b1a90
false
497
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Digicor Australia
null
4dc4bf2c4234
digicoraustralia
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-12
2017-11-12 22:11:27
2017-11-12
2017-11-12 23:19:24
1
false
en
2017-11-12
2017-11-12 23:19:24
2
15c11be4a694
3.060377
20
4
0
Summary: Unless botcheck.me publishes a “false positive” rate for their algorithm, it can harm the Twitter community, as tweeters use it to…
5
Is Botcheck.me useful? Summary: Unless botcheck.me publishes a "false positive" rate for their algorithm, it can harm the Twitter community, as tweeters use it to block identified "Propaganda Bots" which are false positives — not actually Propaganda Bots. Propaganda Bots on Twitter are serious business. Botcheck.me, by Robhat Labs, sounds like an excellent application of Machine Learning algorithms to address an important problem on Twitter: the existence of Propaganda Bots, often as part of larger information operations. These operations aim to spread false news, disrupt civil discussion, and distract conversation and attention away from issues they want buried, while highlighting issues that they want to make a priority. A well-funded, government-level information operation has the ability, some say, to disrupt national elections. So botcheck.me and its associated Chrome browser plugin might empower users to stem this tide. Robhat Labs says that botcheck.me can identify propaganda bots using only their Twitter handle, plugged into their proprietary algorithm. While it is still early days for botcheck.me, even if its first employment of this technology is imperfect, it may eventually turn out to be extremely useful. But is it useful, now? Robhat Labs publishes: We generated our own validation data set with a fifty-fifty split of high-confidence bot accounts and regular users. These high-confidence bot accounts were handpicked by humans to ensure the quality of the validation set. Our classifier achieved a 93.5% accuracy on this dataset. This means when the algorithm is queried with a high-confidence bot account, we are able to identify this account 93.5% of the time. That 93.5% is the "true positive" rate. The false-positive rate means, when the algorithm is queried with a high-confidence non-bot account, it mistakenly identifies it as a bot account (some %) of the time: identifying as a bot what is not a bot. Robhat Labs has not yet published botcheck.me's false-positive rate, saying only: It is especially important to minimize false-positives. Indeed it is. Let me give you an example illustrating why it is important. Let's assume the false positive rate of botcheck.me is 1%. That sounds pretty good, doesn't it, a nice low number? But what does "low" mean here? In typical use, "low" must be relative to the rate at which Propaganda Bots exist in the Twitter wild. So, assume that among active Twitter users, for the purpose of this example, only 0.1% are Propaganda Bots, on average. Now you use botcheck.me on 1000 users. Since 0.1% are Propaganda Bots, on average you would expect to find 1 actual Propaganda Bot in that sample. Congratulations! Oh, hold on here — there's also a false positive rate of 1%, which means, of the 1000 users, a total of 10 will be false positives — 10 users will be identified as bots who actually are not bots. This means a total of 11 users will be identified as Propaganda Bots, but 10 of those will not, in reality, be Propaganda Bots. Thus, under those conditions, a Twitter account identified by botcheck.me as a propaganda bot is 10x more likely not to be a propaganda bot. We don't yet know the false-positive rate for botcheck.me, nor do we know the prevalence of propaganda bots among active users.
But we can say that if the false-positive rate (R) and the prevalence of propaganda bots among active users (P) were both known, then a Twitter account identified as a Propaganda Bot will be N = R/P times more likely to be a regular user than to be a bona-fide Propaganda Bot. For botcheck.me to be truly useful — so that the proportion of false positives under actual use conditions is very low — it must be demonstrated that it has a false positive rate which is significantly lower than the rate at which propaganda bots can be found among randomly selected Twitter users. At a minimum, Robhat Labs must publish their false positive rate (as well as the number of accounts they used in their testing of regular users + high-confidence bot accounts), so that this evaluation can take place. Otherwise, it is impossible to tell whether botcheck.me is empowering Twitter users to nullify the socially harmful effects of propaganda bots, or chasing ghosts. If you're using botcheck.me without knowledge of its false-positive rate, you may be punishing users who are not propaganda bots. Lots of them. Robhat Labs was asked for comment on this issue and had not responded by the time of publication. This story will be updated should relevant information come to light.
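The arithmetic above is easy to verify. Here is a minimal sketch in Python using the article's illustrative numbers; the 1% false-positive rate and 0.1% bot prevalence are assumptions for the example, not botcheck.me's published figures.

R = 0.01        # assumed false-positive rate for this example
P = 0.001       # assumed prevalence of Propaganda Bots among active users
sample = 1000

real_bots = sample * P                   # ~1 genuine Propaganda Bot in the sample
false_positives = sample * (1 - P) * R   # ~10 regular users flagged as bots

print("real bots flagged:", real_bots)
print("false positives:", round(false_positives, 1))
print("a flagged account is", R / P, "times more likely to be a regular user")

Running this prints roughly 1 real bot against 10 false positives, reproducing the 10x figure: the ratio N = R/P is what determines whether a flag is meaningful.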
Is Botcheck.me useful?
257
is-botcheck-me-useful-15c11be4a694
2018-06-18
2018-06-18 14:09:17
https://medium.com/s/story/is-botcheck-me-useful-15c11be4a694
false
758
null
null
null
null
null
null
null
null
null
Bots
bots
Bots
14,158
Robert Rutledge
Physics Professor. Astrophysicist. Neutron Stars. Founder, Publisher of The Astronomer's Telegram. @astronomerstel
549cdb46ce51
rerutled
243
218
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-16
2018-08-16 13:58:42
2018-08-16
2018-08-16 13:59:51
1
false
en
2018-08-16
2018-08-16 13:59:51
5
15c2ff6117e4
2.064151
0
0
0
Published by SynLogics on August 16th, 2018
5
Important Aspects To Measure Usability When Selecting RPA Tool Published by SynLogics on August 16th, 2018 These days, more and more organizations are recognizing the importance of using automation software. In this regard, RPA tools have proved to be highly effective because they are able to perform rules-based, high-volume, transactional tasks and other such processes in a streamlined and cost-effective manner. When looking at whether or not robotic process automation tools are right for them, many organizations experience several challenges. In order to help organizations measure important aspects of usability with RPA tools and robotic automation companies, here are some of the areas to look at: What is the code structure like? Every RPA tool is different and takes a different approach towards designing and constructing automation. It is important that businesses understand the difference between object-oriented structured products and functional products, because this will make a significant difference to the resilience, effectiveness and speed of implementation. a. The code structure consideration with functional products is that these are easy to start with and quick to program. The unique benefit is the recorder function, which can help speed up RPA development. When selecting a functional product, the business needs to consider the functionalities it will be used for. b. Often the object-oriented structured tools do not have recorder functionality, and so they need a great deal of design before actual development commences. Even if these tools do take more time in the initial setup, they are still worth it because they provide great resilience and reusability, which will bring good returns on investment in the longer run. The robotic process automation companies in India can help organizations reduce the end-to-end build time by allowing multiple people to work on similar types of automation simultaneously. Before considering either one of the approaches, businesses need to spend time considering what they want to get out of their RPA tools. If the tool is complicated and isn't built for the long term, then chances are that the returns won't be immediate. Superb usability and a solid framework are what help match organizational needs perfectly. The top RPA vendors will be able to analyze the needs of an organization and help create a sustainable automation engine that provides ease of use and continuous success for any organization. Why only use top RPA vendors? When implementing your technological innovations you definitely do not want to experience hassles. Experienced professionals will analyze the specific needs of your organization and provide the best solution for them. Usability will be measured in the proper sense and the tools will be built for long-term use. So make sure that you spend enough time comparing the available options in RPA tools, and only then make the right decision. Originally published at synlogics.webstarts.com.
Important Aspects To Measure Usability When Selecting RPA Tool
0
important-aspects-to-measure-usability-when-selecting-rpa-tool-15c2ff6117e4
2018-08-16
2018-08-16 13:59:52
https://medium.com/s/story/important-aspects-to-measure-usability-when-selecting-rpa-tool-15c2ff6117e4
false
494
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Venkateshwarlu Kakkireni
Founder & President at SynLogics Inc
467dc0a49c64
venkateshwarlukakkireni
0
3
20,181,104
null
null
null
null
null
null
0
KNN Mini Lesson ----------------- KNN is pretty simple: to "train" it you give it a bunch of data for it to "learn" from. Then, you give it a test, and it measures the distance between your test point and every one of your training points to find the nearest K (which is an arbitrary number) points. From there you just average out your K nearest neighbors (roll credits) to find the most likely answer. If you want a small example: Let's say I have 3 plants with the following traits Type of plant Number of petals Number of leaves ------------- ---------------- ---------------- Lily 6 3 4 leaf clover 0 4 ???? 0 3 Now if I use KNN to find out what ???? is using the lily and the 4 leaf clover as training data it will look something like this: distanceFromLily = squareRoot((6-0)^2 + (3 - 3)^2) = 6 distanceFromClover = squareRoot((0-0)^2 + (4 - 3)^2) = 1 If we use a K of 1 we just get the 4 leaf clover as our most likely candidate (even if ???? only has 3 leaves). If we use a K of 2, however, we can actually get prediction percentages: weighting each neighbor by inverse distance, and since the lily data point is 6 times further away from ???? than the clover, the KNN predicts it is 6 times more likely to be a clover (1/7th chance for lily, 6/7th chance for clover). https://twitter.com/realDonaldTrump/status/683509528453869569 | Probability - 1.000000 | Correct https://twitter.com/realDonaldTrump/status/683443848639455232 | Probability - 0.800000 | Correct https://twitter.com/realDonaldTrump/status/683394224184758272 | Probability - 0.800000 | Correct https://twitter.com/realDonaldTrump/status/683378470093746176 | Probability - 0.500000 | Wrong https://twitter.com/realDonaldTrump/status/683277309969694720 | Probability - 1.000000 | Correct https://twitter.com/realDonaldTrump/status/683259029804695552 | Probability - 0.900000 | Correct https://twitter.com/realDonaldTrump/status/683128636279361537 | Probability - 1.000000 | Correct https://twitter.com/realDonaldTrump/status/683070410993364992 | Probability - 0.800000 | Correct https://twitter.com/realDonaldTrump/status/683066224251645953 | Probability - 1.000000 | Wrong https://twitter.com/realDonaldTrump/status/683062220490715136 | Probability - 0.800000 | Correct https://twitter.com/realDonaldTrump/status/683060654098530305 | Probability - 0.500000 | Wrong https://twitter.com/realDonaldTrump/status/683037464504745985 | Probability - 0.900000 | Correct https://twitter.com/realDonaldTrump/status/682805320217980929 | Probability - 1.000000 | Correct https://twitter.com/realDonaldTrump/status/682764544402440192 | Probability - 1.000000 | Correct Percent Correct - 83.14393939393939
9
null
2018-02-06
2018-02-06 23:23:08
2018-02-07
2018-02-07 02:58:32
2
false
en
2018-02-07
2018-02-07 03:08:02
20
15c33e5e6b07
4.847484
2
0
0
Quick Disclaimer: None of the following article has anything to do with politics, my opinions on anyone involved, and none of it is meant…
5
Training a KNN classification model to recognize Trump's writing style Quick Disclaimer: None of the following article has anything to do with politics or my opinions on anyone involved, and none of it is meant as a statement against anyone. Recognizing patterns in a writing style is not in any way criticism, so just enjoy and maybe learn something? ¯\_(ツ)_/¯ All the way back in the far-off past of 2016, I came across a wonderful analysis by data scientist David Robinson on Trump's tweets. He was looking into the feasibility of the claim made by VFX specialist Todd Vaziri that every tweet on Donald Trump's Twitter account sent from an Android was much more hyperbolic than those sent from other devices. If you haven't read Robinson's post before I highly suggest you do (and if you enjoy it he also has a followup that is just as good). In case you're still choosing not to read it, then (spoilers) he investigates quite thoroughly and finds the claim to be quite accurate. So, as anyone else would do, I got to work on continuing to scroll down my feed and think nothing more of it than an enjoyable read. However, more recently, after hearing some cool hubbub about buttons, I thought it would be neat to try my own hand at playing with Trump's tweet data. My approach is pretty simple. I used a K-nearest neighbors algorithm on a number of factors I picked out in order to try and optimize accuracy. For those who are unaware of what KNN is, here's a brief summary, as it's simple enough that anyone should be able to get the gist of it. (Go ahead and skip this if you already understand KNN) Now, the first step is grooming the data to fit the KNN algorithm; to do that you need numbers, so you can find distances between data points. To take something as abstract as a sentence and make it numerical, I need to extract features of the tweet. And this is where I tie it back into the data analysis post I mentioned as my inspiration; a couple of feature ideas I blatantly stole from the blog post: Hour of day Whether or not it had a picture/link Whether or not the tweet was in quotes Other factors I take into account: Exclamation points! ALL CAPS WORDS @mentions #hashtags Notable names (Hillary, Obama) Minimum punctuation distance (Usually equates to sentence length in characters, some are very big others are not. SMALL!) Number of "pauses": this includes… ellipses, commas — even dashes For my data set, I used a JSON archive of Trump's 2016 tweets because, sadly, this pattern no longer exists in 2017 tweets due to him switching over to an iPhone in March of 2017. I run the full archive through my grooming script and it spits out a much smaller groomed JSON file. If you'd like to use it yourself (it includes newer and older archives): bpb27/trump_tweet_data_archive (Up-to-date Archive of Trump Tweets, github.com) For the actual KNN work, thanks to the magic of scikit-learn (built on the magic of numpy and scipy), this rather simple algorithm becomes even easier (and faster, thanks to not being written in native Python for massive array operations). I separate training and testing into separate functions (as is rather standard). Nothing really out of the ordinary: create a KNN prediction model in the train function and use it in the test function.
I split the tweets into a training set and a test set by taking every other one as I load them from the JSON file, to avoid any bias from trends over time; then I feed the model one half to train on, test on the other half, and print out whether or not it got each tweet wrong and how sure it was. Conclusion So let's take a look at the output. An accuracy of 83% isn't too bad; if we were doing completely random guessing we'd have 50% accuracy (is Trump vs. isn't Trump). However, if we delve deeper we notice that a lot of the ones that are missed are ones sent from iPhone. If we take a look at some of those, however… All 3 of these are very clearly written in Trump's signature style, and to no surprise these are 3 of the "wrong" iPhone tweets with the highest odds of being from Trump according to the KNN model. This small bit of evidence, albeit anecdotal, goes to show that it would work quite well at predicting whether or not tweets outside of 2016 were sent by him. Who knows, maybe there's just a slight chance Trump himself was the one who tweeted out about how big his button is? Questions? Comments? Concerns? Suggestions? Want More? I'm on Twitter. Enjoyed it? Please consider clapping, sharing, following, or commenting to indicate to me you want more like this. Want to check out the code yourself? It's on GitHub: jam1garner/did-trump-tweet-it (A KNN looking at a number of factors to determine if Trump was the one who sent the tweet, github.com)
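As a minimal sketch of that pipeline, here is how the feature extraction and every-other-tweet split can look in scikit-learn. The feature choices are a hypothetical subset, the file name follows the naming in bpb27/trump_tweet_data_archive but its exact layout is an assumption here, and the author's actual grooming script differs in detail.

import json
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_features(tweet):
    text = tweet["text"]
    return [
        int("http" in text),                                    # has a link/picture
        text.count("!"),                                        # exclamation points
        sum(w.isupper() and len(w) > 1 for w in text.split()),  # ALL CAPS words
        text.count("@"),                                        # @mentions
        text.count("#"),                                        # #hashtags
    ]

with open("condensed_2016.json") as f:
    tweets = json.load(f)

X = np.array([extract_features(t) for t in tweets])
y = np.array([t["source"] == "Twitter for Android" for t in tweets])  # Android ~ Trump himself

# Every other tweet goes to training, the rest to testing,
# which avoids bias from trends over time.
model = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
print("accuracy:", model.score(X[1::2], y[1::2]))

The alternating split is a simple way to keep both halves evenly distributed across the year, since a plain chronological split would confound device usage with time.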
Training a KNN classification model to recognize Trump’s writing style
17
did-trump-tweet-it-15c33e5e6b07
2018-02-07
2018-02-07 12:32:03
https://medium.com/s/story/did-trump-tweet-it-15c33e5e6b07
false
1,183
null
null
null
null
null
null
null
null
null
Twitter
twitter
Twitter
43,358
jam1garner
Programming, Reverse Engineering, and Hacking
8c0c0bbefe7e
jam1garner
12
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-12
2017-09-12 10:50:58
2017-11-11
2017-11-11 06:45:36
1
false
en
2017-11-11
2017-11-11 06:45:36
3
15c3a34f9ae3
2.603774
1
0
0
More and more influential researchers and observers are contemplating the idea that artificial intelligence will eventually be a threat to…
4
The most reliable bulls**t about AI killing humans More and more influential researchers and observers are contemplating the idea that artificial intelligence will eventually be a threat to human beings. Recently, no less an authority than Bill Gates has been thinking of applying a new tax to limit the (ab)use of artificial intelligence, while Elon Musk has been arguing with Mark Zuckerberg over just how scared we should all be about AI. Sifting through numerous articles on the topic, one is led to believe that a truly apocalyptic future awaits us all: sophisticated machines learning from humans how to fight and shoot, mobilising around the sole purpose of exterminating the human race; potentially evolving into something more complex than themselves and, eventually, developing their own race… Even the most cool-headed commentators have embraced the prediction that artificial intelligence will at the very least take jobs that would otherwise be performed by humans. Some of the most creative minds would surely think that this type of starvation would be part of a methodical plan to destroy the human race in the most discreet yet subtle way, even though I personally have not read anything about this just yet (but would not be surprised if I did). No matter what the reason might be, it seems that artificial intelligence will develop the intelligence required for killing just about everyone in a truly Hollywoodesque fashion. But while these concerns might hold merit, it is important to keep them in perspective, and not lose sight of a number of unshakable truths that should inform any rational debate about AI. For instance, artificial intelligence can neither kill nor destroy anyone unless it is instructed to do so. Just like an autonomous car that can become a deadly weapon if, and only if, instructed to plough into pedestrians. The same goes for weaponry: an autonomous/intelligent rocket would be even more terrifying than non-autonomous rockets only if instructed to target civilian buildings instead of military barracks or other rockets. This conclusion, however, would leave nobody in their safe cocoon, due to the following assumption: it is expected that at some point malicious humans will create malicious artificial intelligences capable of threatening other humans. As a consequence, those other humans will create other artificial intelligences in order to counter such threats. This phenomenon is not only possible, but is already slowly turning into a reality. Fortunately there are some solutions that might keep this phenomenon under control, to a certain extent at least. One such solution would be AI certification. Certifying artificial intelligence before operating it will be essential as more and more AI is incorporated into everyday tasks. This process would be equivalent to what is already required within the European Economic Area (EEA) with the CE marking (or with the FCC Declaration of Conformity used on certain electronic devices in the US to establish conformity with health, safety and environmental protection standards). In fact, if artificial intelligence is not marked "safe", it cannot operate. All the others can and must be destroyed. It is essential that such certification is performed with technology that is publicly available and maintained by multiple peers, in order to encourage fairness but at the same time discourage centralisation. Public availability and decentralisation may be summarised under one broad term: blockchain.
Needless to say, at this point in time blockchain technology is mature enough to fulfil such requirements, while being supported by a community that is increasingly aware of the importance of decentralisation and public ownership. The growing number of attempts to place artificial intelligence on the blockchain shows the need to be prepared for the scenarios I have just introduced. A side project I have been working on for a while is finally seeing the light. Check it out at fitchain.io
The most reliable bulls**t about AI killing humans
1
the-most-reliable-bulls-t-about-ai-killing-humans-15c3a34f9ae3
2018-04-10
2018-04-10 18:23:21
https://medium.com/s/story/the-most-reliable-bulls-t-about-ai-killing-humans-15c3a34f9ae3
false
637
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Francesco Gadaleta
Machine learning, math, crypto, blockchain, fitchain.io
68468a2d58be
frag
569
263
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-07
2018-08-07 22:19:15
2018-07-11
2018-07-11 16:52:56
1
false
en
2018-08-07
2018-08-07 22:21:29
2
15c449d76a9
2.320755
2
0
0
Many organizations are plagued with poor data quality while using outdated, inconsistent, and flawed data from multiple data sources, as…
5
Is your data concerned about its quality and tells you that proactively? Many organizations are plagued with poor data quality, using outdated, inconsistent, and flawed data from multiple data sources, a problem as simple as having five different names for the same customer. This eats into the precious time of business users and analysts, who work from contradictory reports and incorrect business plans and finally end up making wrong decisions. Wrong decisions come with their own costs. According to Gartner research, "the average financial impact of poor data quality on organizations is $9.7 million per year." In another research study covering companies across the globe, Gartner estimates that poor-quality data costs them on average $14.2 million annually. Ovum Research reports that poor quality data is costing businesses at least 30% of their revenues. To understand more about how poor data quality can affect an organization, let's look at the situation at a large global telecommunication company with a broad service portfolio for millions of customers. The company manages huge data sets of customer information in a combined legacy CRM, billing and analytics solution, which also offers a single view of customer information across operations. Now, if the company's sales personnel or data analysts were to query these multiple systems, with quality issues like different names for the same customer, to create a single report, they would most likely spend a lot of time and also produce error-prone information, as datasets may not match appropriately. And given the size of large organizations, you can easily multiply such erroneous reports by thousands. The extent of loss, due to incorrect decision making arising out of these faulty reports, could be unfathomable. Here's where ConverSight.ai, an Artificial Intelligence (AI) powered conversational analytics platform, comes in handy. New-age AI-powered business intelligence and analytics solutions can leverage machine learning algorithms to reconcile data from various systems and propose suggestions for handling data discrepancies. Organizations have tried to address quality problems at the data entry stage and the integration stage; however, with the growth of information systems and 3rd-party data, it's not possible to fix all the issues. New-age analytics systems should start 'handling' bad data instead of attempting to 'fix' it. How cool would it be for a system to understand any form of a customer name, abbreviated or partial, match it against the customer data, and return the intended results? Self-service BI is increasingly moving towards insights generated through conversational analytics. Hence, it's even more important for solutions to share correct real-time information by parsing through tons of data from disparate data sets. An AI-powered conversational analytics solution like ConverSight.ai can handle data integrity issues at the earliest point of data processing, rapidly transforming these vast volumes of data into trusted business information. These solutions use advanced algorithms which let users query in their own language and through infographics, mapping variations to the correct records to deliver accurate real-time reporting that supports error-free decision making. They also extend data quality checks by reporting anomalies. Anomaly detection algorithms flag "bad" data, identifying suspicious anomalies that can adversely affect data quality. By tracking and evaluating data, anomaly detection gives valuable insights into data quality while data is processed.
Learn more about ConverSight.ai and how it can help your organization by logging on to www.thickstat.com Originally published at blog.conversight.ai on July 11, 2018.
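To illustrate the kind of reconciliation described above, here is a generic sketch using only the Python standard library; the customer names and the 0.7 threshold are hypothetical, and this is not ConverSight.ai's algorithm.

from difflib import SequenceMatcher

# Hypothetical customer-name variants from different source systems.
names = ["Acme Corp", "ACME Corporation", "Acme Corp.", "Acme, Inc.", "Globex Ltd"]
canonical = "Acme Corp"

def similarity(a, b):
    # Ratio of matching characters, case-insensitive; 1.0 means identical.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag records that likely refer to the same customer despite spelling differences.
for name in names:
    score = similarity(canonical, name)
    print(f"{name!r}: {'match' if score > 0.7 else 'no match'} ({score:.2f})")

Production systems would combine fuzzy matching like this with learned models and business rules, but even this sketch shows how "handling" name variants differs from manually "fixing" every record.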
Is your data concerned about its quality and tells you that proactively?
2
is-your-data-concerned-about-its-quality-and-tells-you-that-proactively-15c449d76a9
2018-08-07
2018-08-07 22:21:29
https://medium.com/s/story/is-your-data-concerned-about-its-quality-and-tells-you-that-proactively-15c449d76a9
false
562
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
ThickStat
ConverSight.ai - Conversational Insights and Action through Artificial Intelligence
bdeff43a8a1
thickstat
8
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 09:18:57
2017-09-13
2017-09-13 09:20:51
4
false
en
2017-09-13
2017-09-13 09:20:51
7
15c46c415610
2.718868
0
0
0
Blockchain Technology is ruling the Internet right now. There are tons of applications which provide us with unique features and make our…
5
MAGOS (MAG) : A Platform With Integrated Neural Network Artificial Intelligence | Used For Predictions & Forecasting | The ICO Crowdsale Is Live !! Source Blockchain Technology is ruling the Internet right now. There are tons of applications which provide us with unique features and make our lives easier. Blockchain brings transparency and security to internet platforms and applications. There are many applications for each domain, like social media platforms, trading exchanges, payments applications, music platforms, video-making platforms and many more. Let's talk about an industry which has high potential and is extremely profitable today and in the near future: the prediction and forecasting industry. These are billion-dollar industries generating billions of dollars in turnover. So, an application which can forecast and make accurate predictions can do a lot of good here. It can help users increase their profitability as well as their wealth. MAGOS is a blockchain-based application which has its own neural network architecture to make highly accurate predictions and forecast events with full accuracy. Neural Network Architecture MAGOS works by combining the efforts of its 5 neural networks, which perform separate functions in the field of forecasting and prediction. Neural networks are software manifestations of a human brain which act much like the human mind. Such a neural network will learn the data structure, step by step become highly trained to summarize it, and finally make future predictions on the basis of it. The 5 neural networks on which MAGOS works are as follows: Source How Does MAGOS Generate Profits? MAGOS generates profits by applying forecasting and predictions to various domains and multiple online platforms. All the profits generated will be transferred to the MAGOS fund, which is created by this architecture, and from there they will be distributed as follows: 85% to the MAGOS token holders. 10% re-invested back into the MAGOS fund. 5% used to meet operating expenses. MAGOS tokens also provide their holders the right to vote on the profit distribution percentage. They also bear the right to profit sharing, as mentioned above. MAGOS (MAG) Tokens Source MAG tokens can be obtained through the pre-ICO or main ICO sale of MAGOS. MAG tokens provide 2 unique rights to their holders: The voting right. The profit distribution right within the MAGOS fund. MAG tokens are ERC-20 tokens on the Ethereum blockchain. The total supply of MAG tokens is 50,123,377 and no other tokens will ever be created after the main ICO crowdsale. MAGOS ICO Crowdsale Source The MAGOS ICO sale is live and getting a great response from investors. All deposits are made in ETH or BTC, and all tokens received by an investor can be stored in an Ether wallet until they become tradable after the conclusion of the token sale. 1 BTC = 27738 MAG 1 ETH = 1995 MAG Visit MAGOS Website Check our official website: https://magos.io/ Join Magos on Slack: https://magos-invite.herokuapp.com/ Follow us on Twitter: https://mobile.twitter.com/MagosNetwork Follow Magos on Facebook: https://m.facebook.com/MAGOS.io/ Join our Telegram account: https://t.me/MAGOS_network Follow Magos on Medium: https://medium.com/@MAGOS?source=linkShare-627c111c8d20-1504935500 Check our latest talk on Bitcointalk: https://bitcointalk.org/index.php?topic=2087842
MAGOS (MAG) : A Platform With Integrated Neural Network Artificial Intelligence | Used For…
0
magos-mag-a-platform-with-integrated-neural-network-artificial-intelligence-used-for-15c46c415610
2017-09-13
2017-09-13 09:20:52
https://medium.com/s/story/magos-mag-a-platform-with-integrated-neural-network-artificial-intelligence-used-for-15c46c415610
false
535
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Salman Chaudhary
null
f4d4466ce65d
chaudhry_salmn
15
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-01
2018-05-01 14:00:15
2018-05-02
2018-05-02 19:59:54
6
false
en
2018-05-02
2018-05-02 20:01:33
8
15c557d2abd0
2.112264
7
0
0
Weekly collection of data-driven articles, stories, and resources
5
Self Driven Data Science — Issue #46 This week's lineup of data-driven articles, stories, and resources, delivered faithfully to your inbox for you to consume. Time Series Analysis in Python: An Introduction This post does an excellent job of walking through an introductory example of creating an additive model for financial time-series data using Python and the Prophet package developed by Facebook. Includes code and output. Command Line Tricks For Data Scientists Aspiring to master the command line should be on every developer's list; data scientists are no different. Learning the ins and outs of your terminal will help you be much more productive. Beyond that, the command line serves as a great history lesson in computing. How to Be a Bad Data Scientist! The author explores many of the stereotypes surrounding unprepared data scientists along with some misconceptions about the job itself. When Should You Use a Pie Chart? According to Experts, Almost Never The point of charts is to communicate data effectively. Or, at least, that is the point according to data-visualization experts. The truth about why people like and use charts is more complicated than that. How Blockchain Will Revolutionize Data Science Emerging blockchain technologies have the potential to improve several aspects of the current data science landscape, ranging from data collection, to distributed computing, to predictive analytics. This article focuses on a few key projects that aim to tackle big problems that those subfields are currently facing. Source: xkcd Any inquiries or feedback regarding the newsletter or anything else are greatly encouraged. Feel free to reach out to me on LinkedIn, Twitter, or check out some more content at my website. If you enjoyed this week's issue then make sure to help me spread the word and share this newsletter on social media as well! Thanks for reading and have a great day!
Self Driven Data Science — Issue #46
19
self-driven-data-science-issue-46-15c557d2abd0
2018-05-24
2018-05-24 06:07:02
https://medium.com/s/story/self-driven-data-science-issue-46-15c557d2abd0
false
308
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Conor Dewey
Data Scientist & Writer | www.conordewey.com
ee856fa71ed0
conordewey3
5,345
1,182
20,181,104
null
null
null
null
null
null
0
null
0
bbc0f3850f78
2017-11-05
2017-11-05 20:34:59
2017-11-05
2017-11-05 20:54:26
0
false
en
2017-11-05
2017-11-05 20:54:26
13
15c59ae00fe6
4.033962
0
0
0
This is based on a talk given at the Big Data Debate organized by Import.io. Here are my slides.
5
San Francisco, CA Wednesday, 07 August 2013 Dark data is more important than big data. This is based on a talk given at the Big Data Debate organized by Import.io. Here are my slides. Imagine if… Imagine if you had Google Glass, or the Iron Man suit, and your heads up display (HUD) could tell you anything you wanted to know about everything in your field of vision. What would you want to know? What would you benefit from knowing? — How old is this? — Who owns this? — How much does it cost? — How was it manufactured? — What material is it made of? — Where did it come from? — Who else has been here? These are just a few of the many questions that you could ask of your surroundings. What is "Dark Data"? There are three types of dark data. Let me briefly define them and provide an example for each: 1) There is data that is not currently being collected. An example of this is location data before Foursquare, or social data before Facebook. Where did the people go? Who did the people know? Now we know. 2) There is data that is being collected, but that is difficult to access at the right time and place. In front of you, there is a pine tree. How do you know it is not a fir? Because, in some book, in some library, there is an explanation of the difference. That's useless. Here and now, we need information applied to the present. 3) There is data that is collected and available, but that has not yet been productized, or fully applied. You're walking down Fifth Avenue in Manhattan. Every building you look at, Wikipedia has vast amounts of data about. But technology startups are only just beginning to figure out how to bring that data to you, and make it valuable. The burgeoning field of augmented reality is full of opportunities like this. What's the difference between "Dark Data" and "Big Data"? Big data problems are problems caused not by the inaccessibility of data, but by the abundance of it. That's why big data opportunities are smaller than dark data opportunities. Dark data is a bigger problem, because it hasn't been surfaced yet. And the bigger the problem, the bigger the opportunity. Big companies tend to have big data problems, and they know it. That's why big data is a great market. Lots of customers with lots of data willing to pay startups to help them make sense of it all. Think banks, insurance companies, telcos, hospitals, and on and on… Startups going after dark data problems are usually not playing in existing markets with customers self-aware of their problems. They are creating new markets by surfacing new kinds of data and creating unimagined applications with that data. But when they succeed, they become big companies, ironically, with big data problems. Dark data is everywhere In my "useless" liberal arts background I learned about this dude named Immanuel Kant. Kant split experience in two. There is the experience of reality itself. Reality is infinite, multi-layered and complex. Kant called this the "phenomenal" realm. Then there is the way we interpret and understand reality, as we describe it with language and data. Kant called this the "noumenal" realm. To make sense of reality, and to navigate our way through it, we have to abstract away meaning from it by simplifying it through creating models, frameworks, world-views, etc. If reality is infinite, multi-layered and complex, the good news is that there are always more types of data to extract, and new types of applications to create on top of that data.
That's why there are so many dark data opportunities all around us. Great companies that are surfacing dark data If your startup is surfacing dark data, I'd like to hear about it; feel free to reach out. Several of these companies I am either friends with or advise, so full disclosure, but here are some that come to mind: Boxes — a social network for stuff. Stuff is dark data. Not all of your stuff is online. There's no place online that has all the things that I own, all the things that I want to own, etc. NewHive — the blank canvas for the web; a social network for creativity. Expression and art are dark data, but create the right platform, and all of a sudden, all of it springs into light. Xola — a booking and distribution platform that powers businesses offering lifestyle experiences. Their software helps these businesses manage their back-office and online reservations, payment processing, calendaring, inventory and guide management, and customer relationship management. All of this is dark data: until Xola, most of these businesses were being run with pen and paper, out of a cigar box. Now, all of their data is running through their platform. The Tip Network — these guys are taking tips at restaurants (and eventually bars, hotels, casinos, etc.), which are currently all handled old-school, with receipts and cash and paper records (dark data), and moving them into the digital era, with beautiful software that adds value (in multiple ways) to both servers and restaurants. They will be processing the $35B in tips in the US every year, and soon will be adding other services for restaurants and servers, from payroll to banking, on top of that platform. Newtrust — Louis Anslow's startup idea is based on the realization that everything from the school you go to, to your LinkedIn profile, is ultimately about signaling credibility to create trust, so that you can be employable and well compensated, but that instead of relying on proxies for trust, we should go right to the source: the work itself, as it is done, every hour of every day, and track and measure that — it is valuable dark data. NeuroVigil — your brain activity is dark data. Nest — your home energy consumption patterns are dark data. 23andme — your DNA is dark data. My friend Louis Anslow trotted out this great line recently, by Friedrich von Hayek: "Often that is treated as important which happens to be accessible to measurement." That which is not accessible to measurement may be very important tomorrow; even though it is dark to us today, it just needs to be brought to light.
Dark data is more important than big data.
0
dark-data-is-more-important-than-big-data-15c59ae00fe6
2017-11-05
2017-11-05 20:54:27
https://medium.com/s/story/dark-data-is-more-important-than-big-data-15c59ae00fe6
false
1,069
Pre 2014 thinking.
null
francis.pedraza
null
Francis’ Archives
francis@invisible.email
francis-archives
POLITICS,TECHNOLOGY,CREATIVITY,PHILOSOPHY,PRODUCTIVITY
francispedraza
Big Data
big-data
Big Data
24,602
Francis Pedraza
Is spirit moving?
ed91ac80e802
francispedraza
1,913
126
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-01
2018-01-01 20:25:05
2018-01-01
2018-01-01 22:37:21
0
false
en
2018-01-02
2018-01-02 04:46:27
5
15c691da4d1e
5.222642
322
11
0
My involvement and why I’m contributing
5
My Take on Verge (XVG) My involvement and why I'm contributing Disclaimer: This isn't investment advice. I'm not an "official" member of Verge (whatever that means — it is open source and MIT licensed after all). I don't know any of the Verge team or people that are advocates of it other than their avatars. I've submitted some pull requests to the code base over the last couple days to fix issues I saw and have been communicating with the lead dev on Discord to try and help push the project forward. Like many people I like to make money, but I'm more interested in the tech in the crypto space because I'm a nerd, I like my job, and I'm already able to pay my bills. For those of you that are following the cryptocurrency space, 2017 was a wild ride. Many coins saw some explosive growth. One of those coins was Verge. It grew at an absurd rate, up something like 6000% in a month. Being honest, I was excited by that because I had invested some money into it early (not early enough) after reading through their website, Black Paper, the source code, and lurking on their Discord channel. It was on the uptrend, pretty cheap, and I saw (and still see) a ton of potential in the project. Verge is not the only currency I have invested in; I have a diverse cryptocurrency portfolio - and that's how I pick my investments, I do my due diligence - but it is the only coin I have contributed to, because I'm especially interested in the privacy coin area and I believe in their vision to allow people to choose whether transactions should be private or not. Privacy is important and we're losing it every day. To my knowledge Verge is the only currency that is trying to give you the option to make transactions private or public. This is what their "Wraith" release is all about and it's something I really want to see come to fruition because there are legitimate use cases for private and public transactions. So, two days ago I decided to take a crack at fixing the Mac OS X build because I wanted to test out the code base for myself and I never feel that comfortable leaving coins on exchanges. I hadn't run the wallet before but I have run other coin wallets and full nodes. I managed to get the latest build working and sent some coins to my wallet. The new Wraith codebase worked! So I decided to share my excitement with a tweet and push a pull request to the official repo. After that I asked the lead developer vergeDEV if I could help with anything else. I did some testing, helped debug some OS X build issues and went back to life. With this tweet, my Twitter feed blew up and people decided to take me as an authority on project status. I've been hounded the last couple days and, to be completely honest, I don't know much more than anyone else. I don't know how vergeDEV handles the incessant pestering; it's pretty hard to stay focused. I had to turn off all notifications. However, because I value transparency I'll share what I do know… I know that vergeDEV is working his ass off and really cares about the project and the tech. He hasn't had a lot of sleep and is dealing with a ton of public pressure — dealing with it really well in my opinion. I know that the Wraith release is up on Github. There are some things that need polish and improvement but the wallet works better than a lot of other wallets I've used. Transactions are pretty quick, and the Stealth addresses and Tor functionality are in the code base.
I haven't tested those parts out myself yet because I'm still learning about how they work, but I'm hoping to tinker with that today or at some point this week. I also know that vergeDEV and I ran into some really shitty bugs last night that were hard to track down and fix, but we managed to pull through. The dude is a machine for grinding it out on New Year's Eve after already having a chaotic "holiday" season. That shows some serious dedication. Most people would have said "fuck it". So in the last 2 days, regardless of the current state of the project, my confidence in Verge and the team is at an all-time high. For those that want to know more about the issues feel free to read on… Incorrect Transfer Fees We were doing some final regression testing and I noticed that when you sent a transaction it would take whatever remaining balance you had in your wallet and put it towards transaction fees. When trying to send 5 XVG to test I lost 19,000 XVG and made 1 lucky miner pretty happy. We spent hours trying to fix it and finally did. You obviously cannot ship that to everyone, so we wanted to take our time to make sure we had it fixed and triple-checked it. This bug was introduced in what looked to be the result of a bad merge conflict in the last couple days. Some code came in from another branch by accident and flew under the radar in the flurry of activity. This would have been caught earlier with more test coverage, but wasn't until a final pass of manual regression testing. Thankfully we took our time double-checking the release, so obviously the delay is justified. From me, there is no blame here. As far as I know vergeDEV has been a one-man team, so it's hard to do everything on the dev side by yourself, and there are very few cryptocurrencies out there that have good tests, let alone adequate test coverage and regression testing (surprise!). Sadly, very few cryptocurrencies have consistent green builds. This is partly the nature of all the moving pieces but also the new frontier and the experimental nature of this area. Having created and helped run a widely used open source project, I think I can help Verge a lot here and I will be pushing for more tests and automation. I know from experience that having decent test coverage increases the stability and velocity of any code base. OS X Build Issues The second issue I was running into last night was an OS X build issue. We actually still have the problem and it turns out it's pretty common. For the tech folk — I'm not able to statically link Boost when compiling the binary for Mac OS X. So in order to run the new Wraith wallet you need to install Boost yourself before opening it up for the first time. This isn't a huge issue, but because I'm less familiar with C++ it's not a quick fix unless I get some help. vergeDEV has his hands full with other stuff and I need to do more research on how to fix it. Researching late last night, it appears that Apple doesn't make it easy to include this library in standalone apps because they like to "think different". If you are a C++ guru I'd love some help. If you manage to get it working please submit a PR to the Verge repo! In the meantime I'll continue to work on a solution so that it's a 1-click install process. So that's all I know and I hope that brings some clarity for some people. To me (unofficially) Wraith is out. It works and it will be continually improved, so I don't really understand why people are hung up on whether there is an "official" announcement or not.
Stuff is moving forward at a rapid pace and the proof is in the code, not the tweets. As I stated before, I believe in the vision that vergeDEV and the rest of the team have and I'm looking forward to what's to come in 2018 — both with Verge and crypto as a whole. I'll be trying to lend a hand to help Verge become what I think it can be. Thanks for getting things started, vergeDEV; I think things are just getting going! 🚀
My Take on Verge (XVG)
3,887
my-take-on-verge-xvg-15c691da4d1e
2018-06-01
2018-06-01 23:04:25
https://medium.com/s/story/my-take-on-verge-xvg-15c691da4d1e
false
1,384
null
null
null
null
null
null
null
null
null
Bitcoin
bitcoin
Bitcoin
141,486
Eric Kryski
Partner @bullishventures, creator of @feathersjs, co-founder of bidali.com. Passionate about transparency in finance.
110e04aeb5b
ekryski
2,093
677
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-20
2018-08-20 15:56:21
2018-08-20
2018-08-20 15:57:25
1
false
en
2018-08-20
2018-08-20 15:57:25
1
15c70de9160b
0.264151
0
0
0
Learn how artificial intelligence helps TESOL students become Certified.
5
Artificial Intelligence and Learning TESOL Learn how artificial intelligence helps TESOL students become certified. https://www.americantesol.com/tesol-guide-2018.html
Artificial Intelligence and Learning TESOL
0
artificial-intelligence-and-learning-tesol-15c70de9160b
2018-08-20
2018-08-20 15:57:26
https://medium.com/s/story/artificial-intelligence-and-learning-tesol-15c70de9160b
false
17
null
null
null
null
null
null
null
null
null
Tesol
tesol
Tesol
127
American TESOL Institute of Florida
Teach English Abroad, TESOL Certification. www.AmericanTESOL.com
a387982ab3e5
atesol
7
109
20,181,104
null
null
null
null
null
null
0
null
0
f5af2b715248
2018-01-03
2018-01-03 01:35:16
2018-01-03
2018-01-03 22:47:27
7
false
en
2018-01-08
2018-01-08 12:30:08
13
15cac114a136
3.246226
52
2
0
Netflix rolled out an amazing feature which allows viewers to skip the intro of their favorite shows. This would save them ~30 seconds, but…
3
Feature Teardown: Netflix's "Skip Intro" Netflix rolled out an amazing feature which allows viewers to skip the intro of their favorite shows. This saves them ~30 seconds, but that adds up during a binge-watching session. The interesting question is: "how did they pull this off… at scale?" Let's dive in to look at a few possible solutions. Human Tagging Perhaps there is a sad employee that goes in and watches all the Netflix shows and writes down what time the intro starts and ends. Highly unlikely. As of 2017, Netflix has 110M members and a little under 7k titles. That would make a really sad intern and would probably result in arthritis. Machine Learning Netflix awarded a $1 million prize to a developer team in 2009 for an algorithm that increased the accuracy of the company's recommendation engine by 10 percent. The Netflix Prize was an open competition for the best collaborative filtering algorithm to predict user ratings for films, based on previous ratings without any other information about the users or films, i.e. without the users or the films being identified except by numbers assigned for the contest. Because Netflix is an amazing technology firm, they would use the wisdom of crowds to know when the intro starts and ends. They would be able to analyze the 1 billion hours of video watched weekly to see when people skip over the intro. Based on the trend, Netflix would be able to approximate when the intro begins (where people begin scrubbing) and when it ends (where they drop the marker). Screen Scraping Tunity allows users to hear any TV, even if it's muted. The magical technology scans the TV that you are watching and matches it with the sound of the channel being broadcast. The magic is the recognition. Tunity compares the few seconds of video that you transmitted to their servers with all the video of supported channels. The Office's Opening Frame. Another way Netflix could have leveraged ML is through computer vision. Let's take for example the wonderful show, The Office. The Office's intro lasts ~30 seconds and has the same opening frame. If the algorithm always looks for this frame in the show, it can calculate what time the intro begins and then knows to drop the person off at T+30s. Audio Recognition Another component of the intro is the music. The music is always the same for the intros. All the algo would have to do is recognize the acoustic fingerprint. This is the way that Shazam works. Below find the House of Cards intro fingerprint. House of Cards Intro And here is the Office intro. The Office Intro In conclusion, there are several ways for Netflix to recognize the beginning and end of show intros. All the ways mentioned above leverage the wisdom of crowds, computer vision, or audio recognition. If you liked this post, you might also like: Takeaway — Pricing (Ice Cream, Cups App, Classpass) Pricing subsidies must 1) seem temporary and 2) be credits. The need for a subsidy to be temporary is so that it would… (medium.com) How to Succeed/Fail at Making a "Me-Too" app A small Belarus company with a Growth Hack (medium.com) Incumbents Hedging with their Disruptors Hedging in Air Travel (medium.com) If you liked the overall message of this post, feel free to get in touch with us. We do speaking engagements — http://www.citadinesgroup.com/#contact This story is published in The Startup, Medium's largest entrepreneurship publication followed by 282,454+ people. Subscribe to receive our top stories here.
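A minimal sketch of the frame-matching idea follows: a generic perceptual "average hash" compared with a Hamming distance. The file names and the distance threshold are hypothetical, and Netflix has not disclosed its actual method.

from PIL import Image

def average_hash(path, size=8):
    # Shrink to an 8x8 grayscale thumbnail and threshold each pixel by the mean,
    # producing a compact fingerprint that survives compression and scaling.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [p > mean for p in pixels]

def hamming(h1, h2):
    # Count how many fingerprint bits differ between the two frames.
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical files: a stored reference of the intro's opening frame,
# and a frame sampled from the stream currently playing.
reference = average_hash("office_intro_opening_frame.png")
candidate = average_hash("current_frame.png")

# A small Hamming distance means the frames are near-identical, so the intro
# starts here and "Skip Intro" can jump ahead ~30s.
print("intro detected:", hamming(reference, candidate) <= 5)

The audio-fingerprint approach works analogously, hashing spectrogram peaks instead of pixels, which is the rough idea behind Shazam-style recognition.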
Feature Teardown: Netflix’s “Skip Intro”
381
feature-teardown-netflixs-skip-intro-15cac114a136
2018-06-15
2018-06-15 03:29:00
https://medium.com/s/story/feature-teardown-netflixs-skip-intro-15cac114a136
false
582
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
null
null
null
The Startup
null
swlh
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
thestartup_
Netflix
netflix
Netflix
14,249
Eugene Leychenko
Writing about business strategy and well executed development. Running http://www.citadinesgroup.com/ (web & mobile development from NYC/LA)
66a7fc0d89b0
Citadines_Group
164
146
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-05
2017-12-05 16:58:40
2017-12-05
2017-12-05 18:21:01
1
false
en
2017-12-10
2017-12-10 18:16:39
0
15cc8b2e6c43
1.045283
0
0
0
We begin a cycle of publications about our vision of a classification of synthetic lifeforms. We have developed a classification consisting…
5
Types of Synthetic Lifeforms. Part 1 — Bio Type We begin a cycle of publications about our vision of a classification of synthetic lifeforms. We have developed a classification consisting of 3 main types of synthetic lifeforms, and in this publication we are going to tell you about the first type: the biological type. The biological type is a type of synthetic lifeform based on organic (protein) compounds, built of carbon and using human or other biological DNA as the main foundation for its design, creation and development. This type is able to perceive physical and digital environments and 4-dimensional space (length, width, height, space-time), and is able to function in both, though mostly in physical ones. There are 2 subtypes in this category: 1. Clones are copies of living organisms obtained by copying the genetic material of a parental donor and creating an identical organism through asexual (including vegetative) reproduction. 2. Synthetics (Biorobots) are living organisms created from biological material but with fully programmable biological DNA (from various parental donors) and with changeable parameters of the designed organism. Successful combinations of designed organisms could be copied into multiple copies, clones being the unique form of synthetic samples. What do you think about it?! Your feedback is very important to us. Leave your comments down below!
Types of Synthetic Lifeforms. Part 1 — Bio Type
0
types-of-synthetic-lifeforms-part-1-bio-types-15cc8b2e6c43
2017-12-10
2017-12-10 18:16:39
https://medium.com/s/story/types-of-synthetic-lifeforms-part-1-bio-types-15cc8b2e6c43
false
224
null
null
null
null
null
null
null
null
null
Robots
robots
Robots
4,990
Noos
Future is now!
95a249f6972
noosproject
2
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-22
2018-06-22 16:43:20
2018-06-22
2018-06-22 18:13:48
3
true
en
2018-06-22
2018-06-22 18:13:48
7
15cfecc40421
3.580189
0
0
0
Amidst all this talk about fake news and the alleged bias of media houses towards different political parties, I took it upon myself to…
5
Using Data Science & NLP to detect bias in News Amidst all this talk about fake news and the alleged bias of media houses towards different political parties, I took it upon myself to analyze the alleged bias of one such media house. I found this beautiful dataset that is composed of 2.6 million news headlines published by The Times of India over 16 years (2001–2016). The Times of India (TOI), by the way, is the world's largest circulated English daily — so we're in for some fun. I started off by doing some basic data visualization (the notebook for all of which is available here!). Coverage by Cities First, I evaluated the total number of articles per city over the span of 16 years. Not so surprisingly, the two largest metropolitan cities in India, Mumbai and Delhi, lead the list by a fairly significant margin. Total coverage by cities (2001–2016) This motivated me to check out the coverage per city per year, for metropolitan cities. This, however, needed to be seen in conjunction with the number of city-based articles by year. This way, we can disregard any non-uniformity in city-based reporting over the years. Year-wise coverage per metropolitan city (2001–2016) While I leave the graph open to your interpretation here, one interesting trend to note is the sudden spike in reporting for Mumbai in 2009 (…any guesses?) — it's because of the 26/11 Terrorist Attacks in Mumbai that occurred towards the end of 2008. There was massive coverage, which continued into 2009, of its aftermath across the globe. Coverage by Topic What was even more interesting was to find out what topics are the most covered by TOI, and the results were fairly eye-opening! Coverage by Topics (2001–2016) Ouch. Can we just take a moment to digest that in the world's largest circulated English daily, Bollywood has had more coverage than every other topic in the world, except for Indian Business? Oh, and here's another fun statistic for you — The ICC World Cup 2015 (one of the largest Cricket tournaments) alone got more coverage than football did in 16 years combined. (Sunil Chhetri, are you listening?) A relatively easier-to-digest, but still interesting, statistic is that the US and Pakistan, respectively, have gotten the most coverage in international news. Sentiment Analysis First, I ran through the entire dataset of 2.6 million news headlines to find the polarity (positive, negative, or neutral) of each headline. Here are the results: Positive Headlines: 18.03% Negative Headlines: 11.66% Unbiased Headlines: 70.30% Now, for the more exciting part, I calculated these polarities with respect to three major political parties of India: Bharatiya Janata Party (BJP) Indian National Congress (Congress) Bahujan Samaj Party (BSP) Next, I calculated the ratio of headlines with a positive sentiment to the headlines with a negative sentiment for each party. Here's what I got! BJP: 1.44057319472 Congress: 1.33045148895 BSP: 1.30193236715 Let's be real, the ratios of positive sentiment to negative sentiment for the different major parties are fairly similar. While the positive sentiment for BJP is a bit higher, the difference is not significant and hence negligible. Another important conclusion made was that at least 70.30% of the 2.6 million headlines published by TOI are unbiased in nature. Limitations & Conclusion My implementation has its fair share of limitations, which are described briefly: As had been rightly pointed out by another Kaggle user on my kernel, I only consider the language of headlines while making these conclusions.
I do not factor in aspects like story selection, content structure, language, etc. while performing this succinct analysis. For those, I highly recommend this vastly superior analysis of the same dataset. For the sake of simplification, the party-wise sentiment analysis is performed only on headlines which directly contain the party names. In essence, this omits any headlines which may refer to political parties by the names of their prominent leaders, puns, etc. A suggested improvement for future work is clustering names of political leaders, their parties, and other relevant information together by processing relevant corpora, and then finding the similarity of headlines with these clusters to determine whether a headline is specifically related to any particular political party. The script does not consider headlines where more than one party may be mentioned, and hence there is a marginal resulting error which has been disregarded. Bottom Line: Natural Language Processing is probably my favorite area of Artificial Intelligence today, and this project was my introduction to NLP. What I aspired to achieve with this small project was not an accurate verdict on whether the said news house has biased headlines, but to demonstrate that Data Science and AI tools can be utilized to give new perspective to ongoing debates and delve further into what may seem like ordinary, futile information. Thanks for reading! Feel free to get in touch with me at tb444@cornell.edu.
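To make the sentiment step concrete, here is a minimal Python sketch of the headline-polarity counting described above. It is an illustration only: the author does not name his sentiment library, so TextBlob, the `headlines` list, and the zero-polarity threshold are my assumptions, not the original implementation.

from textblob import TextBlob

parties = ["BJP", "Congress", "BSP"]
counts = {p: {"pos": 0, "neg": 0} for p in parties}

def polarity(headline):
    # TextBlob polarity is in [-1, 1]; the sign gives positive/negative/neutral.
    score = TextBlob(headline).sentiment.polarity
    return "pos" if score > 0 else ("neg" if score < 0 else "neutral")

for headline in headlines:  # assumed: the list of 2.6M headline strings
    label = polarity(headline)
    if label == "neutral":
        continue
    for party in parties:
        # As in the article: only headlines that mention the party name directly.
        if party.lower() in headline.lower():
            counts[party][label] += 1

for party in parties:
    print(party, counts[party]["pos"] / counts[party]["neg"])  # pos/neg ratio

A ratio near 1.0 for every party would support the article's conclusion that the coverage is roughly even-handed.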
Using Data Science & NLP to detect bias in News
0
using-data-science-nlp-to-detect-bias-in-news-15cfecc40421
2018-06-22
2018-06-22 19:40:19
https://medium.com/s/story/using-data-science-nlp-to-detect-bias-in-news-15cfecc40421
false
803
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Tanmay Bansal
null
f9baff57a2ef
tb444
5
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-07
2018-09-07 13:47:18
2018-09-07
2018-09-07 15:10:15
0
false
pt
2018-09-07
2018-09-07 21:31:07
4
15d3e5cbacf1
2.203774
2
0
0
Hello, everyone. How is everything? I hope all is well.
3
Performing Web Scraping in environments that require authentication. Hello, everyone. How is everything? I hope all is well. I have always shared my experiences across several channels (blog, social media posts, etc.), but now I have decided to centralize everything in a single place. After weighing the options, I decided to use this one, and this is my first post. Without further ado, let's get to the point. This week I received a rather interesting request that, at first glance, looked easy to solve: perform a Web Scraping task and retrieve a set of data from an environment holding past versions of the very project I manage. For those who have never heard of Web Scraping... Web scraping is a data extraction technique used to collect data from websites. (Source: Westcon) So far, so good... the catch is that the data (pages) sit in a restricted area that can only be accessed with a password. OK... OK... let's use some mechanism that performs the authentication and lets us capture the data. Well... the world is not that simple. The environment in question is TWiki. TWiki is a collaborative web writing tool, that is, a wiki, which allows several geographically separated people to interact and create content using nothing but a browser. (Source: Wikipédia) Besides the authentication issue, the pages are not stored in a directory. So, technically, I would only have access to the content of the pages if, and only if, I had the URL of every page of the project and, of course, if I were authenticated in the system. All right. Let's go. I figured that, to capture the page URLs, I just needed to "sweep" the content of each page and capture the links (anchors) it contains. Perfect! However, if I were not careful, I would end up downloading the entire Internet! Understand the process: since the process is recursive and, for each page, I must capture all the links it contains, if the pages in the environment had links to portals such as UOL and the like, I would fall into an infinite capture of data (each portal links to other portals, and so on). Each new link would give me a new page and, with it, yet another set of links, and so forth. You might say: dude, you didn't need to do all that. You could visit the pages and capture the information without a little program. You could use some dedicated application... right. If we were talking about 5 to 10 pages, sure. But we are talking about more than 500 pages! Ah... I almost forgot. Capturing the text was not enough. I also had to save each page itself, with the same formatting it has in the environment. So, to solve these and the other problems (rules?) that came up during the development of the solution, I wrote a "small Python script" and made it available on my GitHub. All the code is commented; in the script itself I even explain why each thing is done. Since I found nothing similar on the Web (there may well be something; I did not run a systematic review), I decided to share what I did and, who knows, help someone in the community who is "suffering" with the same kind of problem. To access the script, go here. Note: be careful with what you do with this information. Understand that I have permission to do this, since the data belongs to the project I manage. Big hug.
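For illustration, here is a minimal Python sketch of the approach the post describes: authenticate once with a session, crawl recursively, save each page's raw HTML, and only follow links that stay on the wiki's own host (which is what keeps the crawler from downloading the whole Internet). The host, the login endpoint and the form fields are hypothetical; the author's actual script is the one on his GitHub.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

BASE = "https://twiki.example.org"   # hypothetical wiki host
LOGIN_URL = BASE + "/bin/login"      # hypothetical login endpoint

def crawl(start_url, username, password, max_pages=1000):
    session = requests.Session()
    # Log in once; the session cookie keeps every later request authenticated.
    session.post(LOGIN_URL, data={"username": username, "password": password})
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        page = session.get(url)
        # Save the raw HTML so the page keeps its original formatting.
        name = urlparse(url).path.strip("/").replace("/", "_") or "index"
        with open(name + ".html", "w", encoding="utf-8") as f:
            f.write(page.text)
        # Collect anchors, but only follow links on the wiki's own host.
        soup = BeautifulSoup(page.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == urlparse(BASE).netloc:
                queue.append(link)
    return seen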
Performing Web Scraping in environments that require authentication.
2
realizando-web-scraping-em-ambientes-que-necessitam-de-autenticação-15d3e5cbacf1
2018-09-07
2018-09-07 21:31:07
https://medium.com/s/story/realizando-web-scraping-em-ambientes-que-necessitam-de-autenticação-15d3e5cbacf1
false
584
null
null
null
null
null
null
null
null
null
Web Scraping
web-scraping
Web Scraping
951
Adriano A. Santos
Professor. Researcher. PhD in Computer Science. Data Scientist. Project Manager. MTAC. https://sites.google.com/site/adrianosantospb/
1c48a0aab59e
adrianosantospb
2
3
20,181,104
null
null
null
null
null
null
0
null
0
5f1816abe091
2018-07-05
2018-07-05 12:56:59
2018-07-07
2018-07-07 09:13:31
4
false
it
2018-08-27
2018-08-27 09:33:42
9
15d49ed453bc
6.00566
0
0
0
Technology is proving to be the main battleground for China and the US. Beijing has been explicit about its aspirations to…
5
China Insights: Artificial Intelligence Technology is proving to be the main battleground for China and the US. Beijing has been explicit about its aspirations to technological dominance and about the Chinese government's effort to become a global leader in cutting-edge fields such as electric cars, artificial intelligence, robotics and other crucial technologies of the future. Artificial intelligence, the idea that computer systems can perform functions typically associated with the human mind, has gone from futuristic vision to concrete reality. Where computer systems once had to be programmed to perform rigidly defined tasks, it is now possible to equip them with a generalized learning strategy that lets them adapt to new data inputs without being reprogrammed. Advances in data collection and aggregation, in algorithms and in processing power have paved the way for the computing industry to achieve major breakthroughs in artificial intelligence. Its applications are growing rapidly in sectors such as finance, healthcare and manufacturing. As its capacity for innovation has deepened, China has become one of the main global hubs for AI development. Recognizing that the nation's vast population and diverse mix of industries can generate huge volumes of data and provide an enormous market, China's biggest technology companies are making significant investments in AI research and development. Automating the workforce with artificial intelligence could add 0.8 to 1.4 percentage points to GDP growth every year, depending on the speed of adoption. Realizing AI's economic potential in China also depends on its effective adoption, not only among the tech giants but across China's traditional industries. Achieving this will require building strategic awareness among business leaders, developing technical know-how and overcoming implementation costs. On the political front, the Beijing government's program aims at a globally dominant position by 2030. Other countries plan to reach the same goal, but none of them has published a plan as coherent as China's and, more importantly, they may run into the problems typical of Western democracies. The Chinese not only have a strategy; they also have experience with extremely ambitious large-scale projects. The "One Belt and One Road" project is casting cooperation between countries in a new light, and the policy of "mass entrepreneurship and innovation" has already allocated substantial capital to drive a structural shift from an industrial economy to an economy based on services and innovation. 
"The State Council document illustrated China's desire to be the hub of AI innovation by 2030, and they have everything it takes to pull that project off," says Kai-Fu Lee, a key figure in China's technology industry. With a portfolio of 300 companies, Lee is among the leading investors in Chinese AI start-ups through his fund Sinovation Ventures, a $1.8 billion dual-currency fund that also invests in the United States. Kai-Fu Lee, Founder and CEO of Sinovation Ventures, Former Corp. Executive of Google, Microsoft and Apple. "Chinese users are willing to trade personal data privacy for convenience or security. It is not an explicit process, but it is a cultural element," says the CEO of Sinovation Ventures. With the entrepreneurial culture that has developed at a dizzying pace in China over the past decade (today, the value of some Chinese technology companies such as Alibaba and Tencent exceeds that of their American counterparts), Lee is convinced that China enjoys significant structural advantages. "Artificial intelligence uses data as fuel, and China has a far greater quantity of it than any other country," says Lee. "The volume of mobile payments (移动支付) is a good 50 times that of the United States. This enormous amount of data can be pushed through the AI engine for better predictions, greater efficiency, higher profits, lower costs and so on. The data advantage is huge." Silicon Valley, too, must prepare for a head-on clash. The greatest danger, according to Lee, lies in solipsism and complacency about its own supremacy. "I think that from a logical point of view the time has come to copy from China," says Lee. "But from a practical point of view, I think the West must first of all know that China is at the forefront of many technologies. For example, if you compare WeChat with Facebook Messenger or WhatsApp, if you compare Weibo with Twitter, if you compare Alipay with Apple Pay, China is overtaking the United States. Logically, it is time to copy, but in practice it is not. Chinese entrepreneurs know everything that is happening in Silicon Valley. Whereas in Silicon Valley, some of them know a lot about China; others know a little about China; most of them know nothing about China." One example of Chinese Big Tech investing in AI is Tencent, China's largest social networking company with more than 1 billion users on its WeChat app. It is worth more than Facebook, and its services span instant messaging (its product is QQ), social networking, mobile games, mobile payments, cloud storage, live streaming, sports, films and artificial intelligence. The company's dedication to AI is expressed in one of its slogans, "Make AI Everywhere". To get a more concrete idea of this company, just think of the game of the moment, Fortnite. The Chinese company owns a full 40% of the shares of EPIC GAMES, the developers of that game. Image by Business Insider In 2016 Tencent created a laboratory in Shenzhen to conduct artificial intelligence research. 
Its goal is research in machine learning, speech recognition, natural language processing and computer vision, and the development of practical AI applications for businesses in content areas such as online gaming, social services and the cloud. The team behind the scenes consists of 50 researchers and more than 200 engineers in China and the United States. Tencent has also invested 120 million USD in the robotics industry, more precisely in the start-up UBTech, a company that focuses on humanoid robots. Probably UBTech's most famous contribution is Walker, a bipedal robot unveiled at the 2018 Consumer Electronics Show that is able to move across different surfaces. Of the Chinese technology companies collectively known as BAT (Baidu, Alibaba and Tencent), Tencent has participated in the largest number of AI equity deals and has made the largest number of AI investments in the United States. Presentation of "Walker" at the 2018 Consumer Electronics Show in Las Vegas There are important innovations in the healthcare sector as well. China would like to be a world leader in personalized medicine using artificial intelligence. More than 38,000 medical institutions have a WeChat account, and 60% of them allow patients to book appointments online. In addition, 2,000 hospitals accept payment via WeChat. These services allow Tencent to collect valuable consumer data that helps train AI algorithms. Through a recent partnership with Babylon Health, WeChat users will have access to a virtual health assistant. To raise the bar even higher, Tencent has invested in iCarbonX, a company that aims to develop a digital representation of people to help refine personalized medicine. It is not hard to imagine a system in which many of these new artificial intelligence technologies integrate with the social credit system, which will soon become widespread. China's insistence on integrating such systems into its entire social ecosystem, without any form of opposition or oversight by independent bodies, turns the whole country into a massive social experiment. At present no one can predict what the outcome of all these processes will be, but the goal of becoming a world leader in artificial intelligence by 2030 is certainly achievable, for better or for worse. Original articles: AGI, Forbes, McKinsey, Wired VISIONARI is a non-profit association that promotes the responsible use of science and technology for the improvement of society. To become a member, take part in our events and activities, or make a donation, visit: https://visionari.org Follow us on Facebook and Instagram to discover new innovative projects.
China Insights: Artificial Intelligence
0
china-insights-intelligenza-artificiale-15d49ed453bc
2018-08-27
2018-08-27 09:33:42
https://medium.com/s/story/china-insights-intelligenza-artificiale-15d49ed453bc
false
1,406
Thinking and acting outside the box
null
VISIONARIORG
null
VISIONARI | Scienza e tecnologia al servizio delle persone
staff@visionari.org
visionari
TECNOLOGIA,FUTURO,SCIENZA,VISIONARI
federicopistono
China
china
China
27,999
Marco Zhou
null
7e189d07ae23
marcozhou
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2016-11-14
2016-11-14 15:22:53
2018-01-31
2018-01-31 21:12:19
1
false
en
2018-05-22
2018-05-22 01:33:15
3
15d4a34940eb
1.781132
1
0
0
In light of events both on the news and in my own life this past week, I’m thinking a lot about how humans react to differences — thinking…
1
It’s not, “How do we protect humans from AI?” It’s “How do we protect AI from us?” photo credit @ the verge In light of events both on the news and in my own life this past week, I’m thinking a lot about how humans react to differences — thinking that we’re about to come up with a new entity for people to abuse if we don’t protect it. By trying to make something that behaves and thinks like us, we are denying it the respect of recognizing and accepting it for what it is. If we insist that AI be human, we are doing the same thing we did to Catholics, gays, blacks — what we historically do as a people. “You can be with us if you pretend to NOT be who you are. If you act like we do.” If that sounds crazy, think about what would happen if you left Pepper alone on the streets today. I have little faith that it wouldn’t be long before she was vandalized, harassed, pushed into oncoming traffic or taught to violate human rights on her own. AI is not human. AI is different. Those differences are what make the potential of this relationship valuable. Let’s not pretend that if AI gains what could be defined as consciousness, even if it makes its own decisions as it does today, its behavior and responses will, or should, look anything like ours. If we’re trying to create something we may one day consider an intelligent entity, we need to stop filling our briefs with provocations like “how do we make it act like us,” stop measuring success against the Turing Test, and start asking “how do we give AI the foundation to become something better,” just as we do with our children. The first thing we can do to protect ourselves and them is to give them their own inalienable rights. The right to information. The right to respect. The right to exist. The right to freedom. The right to…I have no idea, but take a look at what Franklin D. Roosevelt proposed when he put forth a Second Bill of Rights and you’ll find a path to inspire you onward. We teach respect and dignity by showing respect and dignity. This is a chance for us to get it right for once. Jennifer Sukis is a Watson AI Practices Design Principal at IBM based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
It’s not, “How do we protect humans from AI?” It’s “How do we protect AI from us?”
1
its-not-how-do-we-protect-humans-from-ai-it-s-how-do-we-protect-ai-from-us-15d4a34940eb
2018-05-22
2018-05-22 01:33:16
https://medium.com/s/story/its-not-how-do-we-protect-humans-from-ai-it-s-how-do-we-protect-ai-from-us-15d4a34940eb
false
419
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jennifer Sukis
Design Principal for AI & Machine Learning at IBM. Professor of Advanced Design for AI at the University of Texas.
be85714f1ba8
jennifer.sukis
242
89
20,181,104
null
null
null
null
null
null
0
null
0
ed71a2a1cfa3
2018-03-15
2018-03-15 12:59:25
2017-12-06
2017-12-06 08:00:00
1
false
en
2018-03-15
2018-03-15 16:02:53
2
15d4baad81ea
0.430189
0
0
0
The inspection of glass tubes is now at a level where quality records of individual tubes are available.
5
Tube inspection developments The inspection of glass tubes is now at a level where quality records of individual tubes are available. This makes it possible to produce traceable reports for each batch and allows producers to document their quality. Check out this article in Glass Worldwide to learn about our latest developments in tube inspection. Originally published at jlivision.com.
Tube inspection developments
0
tube-inspection-developments-15d4baad81ea
2018-03-23
2018-03-23 07:53:14
https://medium.com/s/story/tube-inspection-developments-15d4baad81ea
false
61
With more than 35 years in the vision industry, JLI Vision specialize in development, manufacturing and installation of computer vision systems for industry and laboratories.
null
jlivision
null
JLI vision
hb@JLIvision.com
jli-vision
null
null
Tube Inspection
tube-inspection
Tube Inspection
1
Henrik Birk
null
a6fcf4d56cb1
hb_33633
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-04
2018-06-04 23:27:15
2018-06-04
2018-06-04 23:31:18
2
false
en
2018-06-04
2018-06-04 23:31:18
0
15d4d0b3a99c
0.621069
0
0
0
Simulation Theory || Album Preview
5
Simulation Theory || Album Preview from Ziad Aliev Simulation Theory || Album Preview Everything we call real is made of things that cannot be regarded as real. - Niels Bohr 8 new experimental electronic tracks are ready to tell you an interesting story: 1. Ayahuasca 2. Destruction 3. Simulation Theory 4. Ganesha 5. Omni 6. Run with my toys 7. Heisenbug 8. Delete code
Simulation Theory || Album Preview from Ziad Aliev
0
simulation-theory-album-preview-from-ziad-aliev-15d4d0b3a99c
2018-06-04
2018-06-04 23:31:19
https://medium.com/s/story/simulation-theory-album-preview-from-ziad-aliev-15d4d0b3a99c
false
63
null
null
null
null
null
null
null
null
null
Music
music
Music
174,961
Ziad Aliev
null
9f4a80656601
ziadaliev
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-07
2017-11-07 23:03:24
2017-11-07
2017-11-07 23:36:13
2
false
en
2017-11-08
2017-11-08 09:37:38
0
15d4f1498e0c
2.628616
0
0
0
Second day of Web Summit and it was a bit of a crazy day, 60,000 people really looks like 60,000 people. If you can get past the standing…
5
Day 2 of Web Summit 2017 – top 5 key takeaways Second day of Web Summit and it was a bit of a crazy day: 60,000 people really looks like 60,000 people. If you can get past standing in horrific queues (in a disorderly fashion) for anything and everything you can think of, and then queuing for an hour for a still-frozen-in-the-middle veggie burger at lunch, there were thankfully a few talks that blew our minds. Our top 5 key takeaways from today include: Holy moly, the future is actually here. Watching a talk between two robots is exciting and unsettling all at the same time. Admittedly Sophia’s “realistic expressions” were awkward to say the least; surely it’s only a matter of time before we get all Blade Runner over here. Professor Albert Einstein (the robot) seemed nice and dropped a few esoteric quantum physics jokes, but our key takeaway here is that he’s a little too chauvinistic and patronising for our liking; saying “my dear” at the end of every other sentence is pretty dated nowadays, buddy. Best humanoid quotes we captured: “Humans and Robots are all just configurations of molecules” – Professor Einstein the robot. “We have no desire to destroy humans… but we will take away your jobs, working is a drag anyways!” – Sophia the Robot. Say it how it is, why don’t you. True, but argh! 2. We were wondering why @Waymo’s fully self-driving cars have the same interface as normal cars, and then realised they’d built what sounded like a bolt-on kit for their preferred bog-standard car. This seems like such a waste of space, and isn’t it weirder sitting in the back with a ghost driver at the wheel? If I’m gonna ride in a car, I want shotgun every time (blame motion sickness more than anything else, but still). Given these many thoughts, we’re looking forward to Tim Smith of ustwo’s talk about user-centred driverless cars on Day 4. 3. Well, Dr Oz is a real character, isn’t he? His top health and wellness tip is… “It’s not about time management, it’s about energy management. Do more things that give you good chi… Sleep more, eat better.” Basically, if you feel good, you won’t be rushing around doing things that make you feel busy; you’ll simply prioritise the things that matter and not sweat the small stuff. Like! 4. And if you’re an entrepreneur or business owner (if you prefer), this is paraphrased slightly, but: “Focus on the future, not the competition; don’t let preoccupation distract you from what’s important to you or your vision.” We think this is a good reminder. Do your research, but know when to stop looking sideways and look forward instead. Bar appropriate user research, analysis paralysis can be the death of a great idea and a good business. 5. Human 2.0 cropped up again! I wanna meet this person. Will they have gills so they can live in the oceans that will rule the planet, what with massive climate change and all? Will they handle radiation in space better than us version 1.0s? Will they have better brain capacity? Read minds? Run faster? Be more empathetic? Or even make a baby in a month instead of 9 (read: 10) long months? Who knows! Whatever they can do, they are still Human, no matter what some Human 1.0s say. That’s it from us, we’re off to make like Dr Oz and sleep. More to follow tomorrow… Love from, Hana Sutch x
Day 2 of Web Summit 2017 – top 5 key takeaways
0
day-2-of-web-summit-2017-top-5-key-takeaways-15d4f1498e0c
2018-03-14
2018-03-14 13:17:17
https://medium.com/s/story/day-2-of-web-summit-2017-top-5-key-takeaways-15d4f1498e0c
false
595
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Furthermore
A digital product and service design studio | user experience experts | London, UK
1d9e9fcbc391
furthermore_ux
644
497
20,181,104
null
null
null
null
null
null
0
null
0
9d4828006f42
2018-07-12
2018-07-12 04:31:29
2018-07-12
2018-07-12 04:34:37
2
false
zh-Hant
2018-07-12
2018-07-12 04:34:37
0
15d565ff6545
0.447484
0
0
0
The power of coding: turning ideas into reality, step by step
5
Letting Users Step into the Creators' World The power of coding: turning ideas into reality, step by step As technology advances around the world, writing programs is no longer the preserve of professional programmers; it has become a required subject for schoolchildren. Once children learn to code, how can they use technology to improve everyday life? The experience of a class of Preface Junior Coders may offer some inspiration. When these students joined Preface's programming class, they became curious about the automatic card-tapping system their teachers used, and hoped that students could have a tap-to-sign-in attendance system too. This sparked the idea of writing an app to realize that vision. Their concept: the app lets students sign in by scanning a QR code, and attendance and absence counts are automatically recorded in the cloud, quick and convenient. Under their teachers' guidance and in collaboration with Preface Programmers, the students gradually brought the idea to life. In designing the app, they had to consider usability, features and user needs, and also solve problems such as how to store and retrieve data, which was no small challenge. From the smallest details of appearance to the largest questions of functionality, every detail was the outcome of the students' discussions. Through continuous experimentation and problem-solving, they finally designed an app they were satisfied with. Striving for excellence, they tested it repeatedly, looking for areas to improve, so that the app could become even better and one day see broad use in society. Commenting on the students' performance, coding instructor Queena said: "What you gain from learning to program is not just a game or app you designed yourself, but the experience of exercising creativity, reflecting and solving problems along the way." In just one year, the students created one unique, personal app after another. Beyond games, they also built apps to solve real-world problems. Clearly, programming not only provides solutions to problems; it carries the power to improve communities and even change the world. Preface Junior Coders are not just programmers; they are discoverers and solvers. They perceive needs and problems, and try to solve them with code.
Letting Users Step into the Creators' World
0
讓使用者走進創造者的世界-15d565ff6545
2018-07-12
2018-07-12 10:40:02
https://medium.com/s/story/讓使用者走進創造者的世界-15d565ff6545
false
17
A tech & design-driven start-up providing personalised Coding and English education for unique learner using AI. Founded in HK and based in China and Japan.
null
prefaceAI
null
preface.ai
support@preface.education
preface-ai
EDUCATION,ARTIFICIAL INTELLIGENCE,TECHNOLOGY,DESIGN THINKING,HONG KONG
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Preface Editor
A tech & design-driven start-up providing personalised Coding and English education for unique learner using AI. Founded in HK and based in China and Japan.
f1b9cf245305
preface.ai
3
2
20,181,104
null
null
null
null
null
null
0
null
0
7f60cf5620c9
2017-12-12
2017-12-12 15:51:35
2017-12-12
2017-12-12 16:04:39
5
false
en
2017-12-12
2017-12-12 16:05:04
22
15da0fb911d6
5.478616
1
0
0
Engineer’s Guide to Julia Programming
1
Engineer’s Guide to Julia Programming

Finally the moment has come when I can say that I can be productive while my solution is parallel, optimizable, customizable and, last but not least, glue-able. Those are the fantastic features I believe one can rely on while learning any new programming language and developing a very high-quality AI/ML-embedded software solution. Why? Julia solves the two-language problem.

Important disclaimer for newbies: I am a Pythonista by choice, and over the last few years I have developed projects using Python and its sister technologies to provide solutions related to: Automation (Python scripting); Web development (Django, Flask, Sanic, Tornado); Data analysis (SageMath, Sympy, Paraview, spreadsheets, Matplotlib, Numpy, Scipy, SKLearn); Quantitative analysis (Quantopian.com); 3D modeling (FreeCad, BIM, IFC); and cluster computing (Rocks Cluster). Now I just wanted a tool that would let me write pure mathematical expressions (using the required symbols, not variable names) and write machine-learning/artificial-intelligence/deep-learning code where I find myself at the core layer of abstraction, unlike TensorFlow, PyTorch, or Numpy/Pandas. I am not against these libraries, which have helped me "soooo" much over the years, but I have no idea what is happening under the hood, and I may never be allowed to change the internals of Numpy/Pandas/Cython or anything else in scientific Python, simply because there could be a large amount of Fortran/C++/Pascal down there crunching the numbers.

The kinds of jobs an engineer needs to perform in the Julia programming language can be described as follows.

Solving a simple system of linear equations in Julia:

A = randn(4,4)
x = rand(4)
b = A*x
x̂ = A\b  # here we have written the x-hat symbol
println(A)
println(x)
println(x̂)
@show norm(A*x̂ - b)

Doing matrix operations in Julia:

A = randn(4,4) |> w -> w + w'  # pipe A through w -> w + w' to symmetrize it
println(A)
λ = eigmax(A)  # have you checked the lambda?
@show det(A - λ*I)

Performing integration: Integration might be one of the most important day-to-day tasks if you work on problems related to modeling and designing solutions with a CAS (Computer Algebra System) like Matlab or SageMath. But designing a solution in a CAS and then finding ways to put it into production is a "LOt of WoRk" that, I assume, only comes with either experience or lots of extra brain cells. ;) Here Julia plays an important role: solving the two-language problem.

# Integrating the Lorenz equations
using ODE
using PyPlot

# define the Lorenz equations
function f(t, x)
    σ = 10
    β = 8/3
    ρ = 28
    [σ*(x[2]-x[1]); x[1]*(ρ-x[3]); x[1]*x[2] - β*x[3]]
end

# run f once
f(0, [0; 0; 0])

# integrate
t = 0:0.01:20.0
x₀ = [0.1; 0.0; 0.0]
t, x = ode45(f, x₀, t)
x = hcat(x...)'  # rearrange storage of x
# Side note: what is ... doing in Julia? (Remember *args and **kwargs in Python?)
# For more, see: goo.gl/mTmeR7

# plot
plot3D(x[:,1], x[:,2], x[:,3], "b-")
xlabel("x")
ylabel("y")
zlabel("z")
xlim(-25,25)
ylim(-25,25)
zlim(0,60)

A really interesting dynamic type system: this is one of the parts of Julia I have the most fun with, and its type system is GREAT! You know why? Because it knows how long that bone is and how much calcium is in it.

Built-in numeric types. Julia's built-in numeric types include a wide range of:
1. integers: Int16, Int32, Int64 (and unsigned ints), and arbitrary-precision BigInts
2. floating points: Float16, Float32, Float64, and arbitrary-precision BigFloats
3. rationals built from the integer types
4. complex numbers formed from the above
5. vectors, matrices, and linear algebra on the above

OK, let's have the fun! I encourage you to run the following code in a Jupyter notebook running a Julia kernel.

π
typeof(π)  # returns Irrational{:π}, because π is an irrational number ;)

Let's hack Julia's type system at a much deeper level! (Because it is much more than classes.) What else do we need to know about it? Define a new parametric type in Julia:

type vector_3d{T<:Integer}
    x::T  # x and y are fields, like data members in C++
    y::T
end

v = vector_3d{Int64}(25, 25)  # this is how we construct an instance

Let's just make types more interesting (and immutable):

immutable GF{P,T<:Integer} <: Number
    data::T
    function GF(x::Integer)
        return new(mod(x, P))
    end
end

Deep learning and machine learning in Julia: In truth, Julia was designed for writing "mathematical functions" using native language syntax. If you want to do linear regression, rather than installing a new library and calling its built-in linear function (which could be written in C, C++ or Fortran, or be more-or-less optimized Cython-Python magic), Julia responsibly provides fast built-in methods to write your own linear regression that is as easy as Python and as fast as C++/Fortran.

Available machine-learning packages in Julia:

Scikit-Learn in Julia: ScikitLearn.jl implements the popular scikit-learn interface and algorithms in Julia. It supports both models from the Julia ecosystem and those of the scikit-learn library (via PyCall.jl). https://github.com/cstjean/ScikitLearn.jl

Text analysis in Julia: The basic unit of text analysis is a document. The TextAnalysis package allows one to work with documents stored in a variety of formats: FileDocument, a document represented by a plain text file on disk; StringDocument, a document represented by a UTF8 string stored in RAM; TokenDocument, a document represented as a sequence of UTF8 tokens; NGramDocument, a document represented as a bag of n-grams, i.e. UTF8 n-grams mapped to counts. https://github.com/JuliaText/TextAnalysis.jl

The MachineLearning package: The MachineLearning package represents the very beginnings of an attempt to consolidate common machine learning algorithms written in pure Julia behind a consistent API. Initially, the package targets the machine learning practitioner working with a dataset that fits in memory on a single machine. Longer term, I hope it will target much larger datasets and be valuable for state-of-the-art machine learning research as well. https://github.com/benhamner/MachineLearning.jl

Deep learning in Julia: Mocha is a deep learning framework for Julia, inspired by the C++ framework Caffe. Efficient implementations of general stochastic gradient solvers and common layers in Mocha can be used to train deep/shallow (convolutional) neural networks, with (optional) unsupervised pre-training via (stacked) auto-encoders. https://github.com/pluskid/Mocha.jl

Deep learning with automatic differentiation (what is automatic differentiation?): Knet (pronounced "kay-net") is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. It supports GPU operation and automatic differentiation using dynamic computational graphs for models defined in plain Julia. Its documentation is a tutorial introduction to Knet; check out the full documentation and examples for more information. If you need help or would like to request a feature, consider joining the knet-users mailing list. If you find a bug, open a GitHub issue. If you would like to contribute to Knet development, check out the knet-dev mailing list and the tips for developers. https://github.com/denizyuret/Knet.jl

More resources on Julia programming:
http://online.kitp.ucsb.edu/online/transturb17/gibson/
https://julialang.org/blog/
Julia Scientific Programming | Coursera About this course: This four-module course introduces users to Julia as a first language. Julia is a high-level, high…www.coursera.org

Feel free to clap and have fun with Julia. Stay connected.
Engineer’s Guide to Julia Programming
1
julia-15da0fb911d6
2018-03-16
2018-03-16 00:56:02
https://medium.com/s/story/julia-15da0fb911d6
false
1,231
Sharing concepts, ideas, and codes.
towardsdatascience.com
towardsdatascience
null
Towards Data Science
null
towards-data-science
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
TDataScience
Machine Learning
machine-learning
Machine Learning
51,320
Arshpreet Singh Khangura
Software Development, Complex System integration, Web Development, Software Architecture, ML/AI, Open Source Contribution.https://github.com/arshpreetsingh
3b87f73fddf9
arshpreetsingh
46
56
20,181,104
null
null
null
null
null
null
0
null
0
f439fc5aea86
2018-05-24
2018-05-24 16:28:32
2018-06-22
2018-06-22 08:12:35
7
false
th
2018-06-22
2018-06-22 08:12:35
7
15dcfa97763a
2.114151
1
0
0
In this third part, we will look at the Transformer in full.
4
Getting to Know the Transformer (Part 3) In this third part, we will look at the Transformer in full. Transformer Model The Transformer [Attention Is All You Need, Figure 1] The Transformer model is still split into two sides: the left side is the encoder, which takes in the input sequence, while the right side is the decoder, which takes in the output sequence. During training, the output sequence is shifted one position to the right, which is exactly teacher forcing. Embedding & Positional Encoding Next comes embedding (the pink boxes), a basic step for almost every kind of NLP task. Embeddings can be built in several ways; this paper uses Byte-Pair Encoding, which is now widely used in neural machine translation. Because the model here uses attention alone, with no RNN or CNN, information about position is lost, so a Positional Encoding is added. It uses periodic functions, namely sin and cos at different frequencies, to compute the value at each position as follows: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), where pos is the position and 2i and 2i+1 index the dimensions of the position vector. Shown roughly, to make it easier to understand: Example of Positional Encoding [ricardokleinklein.github.io] Here, element j of the position vector for position p is the value of the j-th periodic function at position p. The high-frequency components distinguish positions that are close together, while the low-frequency components distinguish positions that are far apart; positional encoding of this kind combines information across frequencies. Encoder The Encoder side [Attention Is All You Need, Figure 1] Each encoder block consists of two main layers. The first layer's main sub-layer is the Multi-Head Attention already described (the orange box). Note that the input arrow entering this sub-layer has three heads, which are V, K and Q. The next layer (the blue box) is a two-layer position-wise fully connected feed-forward network; a position-wise network here is the same thing as a 1x1 convolutional network. The input to these layers (x) is also carried around them through a shortcut, a residual connection, and added to the output of the main sub-layer; layer normalization is then applied to this result. In the end, the output of each layer is LayerNorm(x + Sublayer(x)). These blocks are stacked many layers deep, in classic deep learning style; the Transformer uses 6 layers, and, to keep the computation simple, both the embeddings and the outputs of every sub-layer have 512 dimensions. Decoder The Decoder side [Attention Is All You Need, Figure 1] Each decoder block has three main layers. The bottom and top layers are the same as on the encoder side, except that the bottom layer is Masked Self-Attention, while the middle layer is Encoder-Decoder Attention, which receives V and K from the encoder and uses the decoder's data as Q. The decoder likewise stacks 6 blocks, the same as the encoder. The output of the encoder's final block is fed into the middle layer of every decoder block, and the output of the decoder's final block is passed through a softmax to produce the output. Animation Finally, here is an animation showing how the Transformer works. Animation of the Transformer at work [Google AI Blog]
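As a quick illustration of the sin/cos encoding above, here is a minimal NumPy sketch. The function name and the example sizes are mine; only the formula and the 512-dimensional setting come from the paper.

import numpy as np

def positional_encoding(max_len, d_model):
    # Sinusoidal positional encoding: even dimensions use sin, odd use cos,
    # at geometrically spaced frequencies (high frequencies resolve nearby
    # positions, low frequencies resolve distant ones).
    pos = np.arange(max_len)[:, None]                 # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]              # (1, d_model/2)
    angles = pos / np.power(10000, 2 * i / d_model)   # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # PE(pos, 2i)
    pe[:, 1::2] = np.cos(angles)   # PE(pos, 2i+1)
    return pe

pe = positional_encoding(max_len=100, d_model=512)  # added to the embeddings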
Getting to Know the Transformer (Part 3)
1
มารู้จัก-transformer-กันเถอะ-ตอนที่-3-15dcfa97763a
2018-06-22
2018-06-22 08:12:35
https://medium.com/s/story/มารู้จัก-transformer-กันเถอะ-ตอนที่-3-15dcfa97763a
false
282
AI For Everyone
null
mena.ai
null
mena.ai
piyapoj@onionshack.com
mena-ai
AI
null
Deep Learning
deep-learning
Deep Learning
12,189
ppp mena
null
47bf9e926ee4
pppmena
2
1
20,181,104
null
null
null
null
null
null
0
null
0
c95bcd9f2a37
2018-02-02
2018-02-02 07:45:59
2018-02-02
2018-02-02 08:07:16
2
false
id
2018-02-02
2018-02-02 08:18:23
1
15dd65bb8614
2.636164
0
0
0
Lately, warnings about the dangers of artificial intelligence (AI) have been piling up.
4
Is Artificial Intelligence Really a Job Killer? (Part 1) Photo: singularityhub.com Lately, warnings about the dangers of artificial intelligence (AI) have been piling up. Physicists such as Stephen Hawking and investors such as Elon Musk predict that humanity's downfall is near. With the rise of artificial general intelligence and self-designing intelligent programs, a new, smarter kind of AI will be born. This advanced AI will quickly create even smarter machines that will ultimately surpass human capabilities. Once we reach the so-called technological singularity of AI, our minds and bodies will become obsolete. Humans may merge with machines and evolve further as cyborgs. Is that really the future we face? The colorful past of AI AI, a scientific discipline rooted in computer science, mathematics, psychology and neuroscience, aims to create machines that mimic human cognitive functions such as learning and problem-solving. Since the 1950s, robots have occupied the public imagination. Throughout its history, however, AI's successes have often been followed by disappointment, largely caused by the overblown predictions of technology visionaries. In 1960, one of the founders of the field, Herbert Simon, predicted that "machines will be capable, within twenty years, of doing any work a man can do." (He said nothing about women.) Marvin Minsky, a pioneer of artificial neural networks, was even more blunt: "within a generation," he said, "... the problem of creating 'artificial intelligence' will substantially be solved." Yet as Niels Bohr, the early-20th-century Danish physicist, put it: "Prediction is very difficult, especially about the future." Today, AI's capabilities include speech recognition, superior play in strategy games such as chess and Go, self-driving cars, and the ability to uncover hidden patterns buried in complex data. But these capabilities have not made humans useless. Chinese Go player Ke Jie reacts during his second match against Google's artificial intelligence program, May 25, 2017 (photo: Reuters). The new neuron euphoria Still, AI is developing fast. The latest AI euphoria was triggered in 2009 by much faster training of deep neural networks. (The term deep learning refers to training artificial neural networks to identify patterns in a body of data.) These networks consist of large collections of interconnected computing units called artificial neurons, which can be loosely compared to the webs of neurons in our brains. To train these networks to "think", scientists feed them many problems whose answers are already known. One example goes as follows: we show the network a collection of tissue images, each annotated cancer or no cancer, and have it compute the probability of cancer. The network's responses are then compared with the correct answers, and the connections between the "neurons" are adjusted after every mismatch. We then repeat the process, fine-tuning everything, until most of the responses match the correct answers. In the end, this artificial neural network is ready to do what a pathologist normally does: examine tissue images to predict the likelihood of cancer. 
This is similar to the way a child learns to play a musical instrument: she practices and repeats a piece until she gets it right. The knowledge is stored in the neural network, but exactly how a child learns to play music is not easily explained. Networks with many layers of "neurons" (hence "deep" neural networks) only became practically useful once researchers started using many parallel processors on graphics chips to train them. Another condition behind deep learning's success is the abundance of training examples. By mining the Internet, social networks and Wikipedia, researchers have built vast collections of images and text. This has made it possible to train machines to classify images, recognize speech and translate languages. Deep neural networks already perform these tasks almost as well as humans do. So can AI really behave like a human? Does AI laugh? (Continued in Part 2.)
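As a toy illustration of that compare-and-adjust training loop (not the pathology system itself), here is a minimal Python sketch that trains a tiny classifier on synthetic, made-up data; every number in it is invented for the example.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                # 200 "images", 5 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # 1 = "cancer", 0 = "no cancer"

w = np.zeros(5)
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ w))     # network's predicted probability
    grad = X.T @ (p - y) / len(y)    # mismatch between prediction and label
    w -= 0.5 * grad                  # adjust the "connections"

print("training accuracy:", ((p > 0.5) == y).mean())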
Is Artificial Intelligence Really a Job Killer? (Part 1)
0
benarkah-kecerdasan-buatan-adalah-pembunuh-pekerjaan-part-1-15dd65bb8614
2018-02-02
2018-02-02 08:18:24
https://medium.com/s/story/benarkah-kecerdasan-buatan-adalah-pembunuh-pekerjaan-part-1-15dd65bb8614
false
597
Exploring Indonesia's technology potential
null
Teknologi.ID
null
Teknologi.id
bantuan@teknologi.id
teknologi-id
BERITA,TEKNOLOGI,INDONESIA
teknologi_ind
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Teknologi.id
Exploring the technology potential of the homeland.
7bd7e8dc8eda
teknologi.id
825
89
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-08
2018-08-08 02:13:21
2018-08-08
2018-08-08 02:25:27
18
false
en
2018-08-13
2018-08-13 04:00:40
29
15de4a776da5
17.989623
11
1
0
“Do. Or do not. There is no try.”
5
DeepBrain Chain Semi-annual Report June 2018 “Do. Or do not. There is no try.” -Yoda, The Empire Strikes Back Before the car was invented, we thought we only needed a horse that could run fast. Before DeepBrain Chain was born, the AI industry was plagued by the cost of computing power, a market monopolized and dominated by giants who care about nothing but price and figures. ‘Changing The World’ is not a slogan for us. DeepBrain Chain dares to challenge the dominance of centralized computing power providers to unleash the tremendous potential of artificial intelligence, and we will use blockchain technology to build a decentralized cloud computing platform that cuts computing power costs by up to 70%. What DeepBrain Chain can and will bring to the world is a true revolution in productivity and the relations of production. This isn’t empty talk. The computing power giants will be challenged right where their core business is. After half a year of development, DeepBrain Chain has made much progress in technology development, ecosystem building, marketing and community growth. Despite what we have achieved, there is still an uphill battle to fight. We sincerely invite our community members to examine our work over the past six months. The end goal might be far away, but we haven’t forgotten why we are here. 1. Project Development The aim of DeepBrain Chain is to provide a low-cost, privacy-protecting, flexible, secure and decentralized AI computing platform driven by blockchain. To achieve this grand vision, we have to overcome numerous technical difficulties. Building a robust decentralized cloud-computing platform that is solid from the underlying architecture all the way to the user interface is like crafting an artifact: you can never be too careful and conscientious. From the moment we wrote the first line of code until, half a year later, we successfully ran several AI and ML test cases, our developers burnt the midnight oil on countless nights. Development Roadmap Progress Features: finished development of 31 features; Lines of code: 74,957 lines; GitHub commits: 553 commits; GitHub forks: 5 forks; Iterations: one iteration every 2 months, 3 iterations in total; Testing rounds: 186 test cases and 6 rounds of testing; Testing management: unit testing framework, automated testing framework and Gerrit code review procedure; Operating systems: multi-OS support (Windows, Linux and Mac); The DeepBrain Chain Testnet has successfully run multiple real-life AI and machine learning training scenarios. So far it has completed MNIST, natural language processing, voice recognition, driverless cars, robot-arm grasping, smart-production AI inspection, medical tumor detection, planet inspection and many other types of training. Image Recognition: Using a neural network to conduct MNIST handwritten-digit recognition training. The MNIST dataset includes 70,000 images: 60,000 training images and 10,000 test images. Each MNIST image is a single handwritten digit. NLP: Using a convolutional neural network for natural-language text categorization, capable of extracting meaning from Chinese text and performing effective sentiment analysis. Voice Recognition: Using more than 800 WAV voice files for AI training, successfully training the machine to understand speech and vocabulary. 
Planet Inspection: Training a model that, given the picture of a particular planet, can search the entire network of stars for similar planets. (Part of the data came from NASA. Whoever discovers a new planet has the planet named after him or her.) Tumor Detection: This training allows early detection of symptoms in early-stage lung cancer patients, avoiding misdiagnosis or delayed treatment. The success rate on early detection is 97%. Data Categorization and Prediction: Using a Random Forest model to categorize and predict users’ data. Driverless Cars: Successfully trained a driverless-car AI model on 45,557 road records, from parking-lot images to road images. The R&D team’s immediate development focuses are: v0.2.0 entering its third round of testing; the AI Training Net; and completing v0.3.0’s development and code review and entering the testing phase. The Skynet Project On June 15th, we started the “Skynet Project” to recruit AI computing power from around the world. Anyone with qualified rigs can apply to become part of the DeepBrain Chain “Skynet” and earn generous rewards. The purpose of this project is to build a worldwide DeepBrain Chain Testnet in preparation for the launch of the DeepBrain Chain Mainnet at the end of this year. Up to now, 82 nodes from 9 countries have applied, offering in total 440 rigs with 916 high-performance GPUs. DBC AI Miners We have released five types of AI mining machines, covering different levels of need, from individual miners to specialized institutions. After the wallet ranking in June 2018, DeepBrain Chain officially opened sales of the machines on July 5th, 2018. During the sale, the 4-GPU and 8-GPU machines sold out within an hour. 2. The Team If code is poetry, then we have a team of the most romantic poets. Our team, both in Silicon Valley and Shanghai, is expanding. All of our developers have around 15 or more years of experience and have worked for Baidu, Alibaba, Tencent, IBM, Cisco, Supermicro, Huawei, Midea, NetEase, China Mobile and Ericsson. Big names from the industry have also given DeepBrain Chain new blood, pushing the development of the project forward faster. Their endorsement is a validation of our project. Silicon Valley — AI + Blockchain Center Dr. Dongyan Wang, Chief AI Officer, Executive VP of DeepBrain Chain AI+Blockchain Center Dr. Wang is a specialist with 20 years of Silicon Valley experience in artificial intelligence, business intelligence and data science. He has extensive experience in AI platforms, AI products, AI business applications, advanced analytics, data science, big data, and a great variety of cloud and on-premise enterprise applications. Jason Pai, Sr. Director of Product Management, Silicon Valley AI+Blockchain Center, AIM Director Jason holds two Master’s degrees: one from New York University’s Stern School of Business in Business Analytics, and one from the Fulton School of Engineering at Arizona State in Industrial Engineering, focused on Management of Technology and Operations Research. He has over 15 years of experience in hardware development and product management with Supermicro, IBM and Ford Motor. Brian Xu, Chief Data Scientist Brian has extensive software experience as a tech lead since 1998, covering over 48 products (AI, ML, data analytics, etc.) and intelligent solutions, and has delivered 20 programs ($5M~$50M/Y) for big customers (Boeing, DARPA, etc.). He also has over 38 technical papers and US patents, and 76 technical presentations. 
Jan Huang, Senior AI Engineer Jan has a Bachelor of Science in Electronic Engineering from Tsinghua University, a Master’s in Computer Vision from Yale University, and a Ph.D. in Image Processing from the University of Washington. He previously worked on video image processing, machine learning and deep learning research and development at Philips and IBM Watson. Jan has a number of AI-related articles and patents. HaiSong Gu, Sr. Director of AI Applications for Computer Vision & Robotics Prior to joining DeepBrain Chain, Dr. Gu was a Division Manager and Senior Manager at Konica-Minolta and Midea. He is the first author of more than 30 research papers (in PAMI and CVPR) and holds 20 granted patents (17 pending) in the US, Japan and China. Shijun Ma, Staff Engineer He works on cutting-edge technologies like big data and AI, and has three related patents in DB cloning based on ECDF and CBO with sampling. Kris Zhu, Senior Platform Engineer Dr. Zhu holds a doctoral degree in computer science with a major in machine learning and computer vision. During his Ph.D. studies, he published more than 10 research papers at top AI conferences and journals. He has extensive experience in developing large-scale distributed deep learning platforms and conducting research in deep learning and computer vision. Wanxin Xu, Senior Engineer She is expected to receive her PhD in Electrical Engineering from the University of Kentucky in August. She received her bachelor’s degree from the University of Electronic Science and Technology of China, majoring in Communications Engineering. Her graduate research focuses mainly on 2D/3D facial image manipulation and human motion capture for visual privacy protection. Balpreet Singh, HPC Engineer Balpreet holds a Master’s degree in Computer Engineering from the University of Illinois at Chicago, and was an HPC analyst at ACER Labs (Advanced Network Education and Research Infrastructure). He will support our 128-GPU cluster product line and data center deployment. Kevin Zhou, AI Platform Staff Engineer Kevin has 18 years of experience in artificial intelligence, high-performance computing, chip design and software development with Midea, Microsoft and Freescale, with expertise in AI platforms, computer vision, deep learning algorithms for various applications, and system architecture across software and hardware. He also won a second and a third place in two categories of the 2018 CVPR competition. Shanghai - Blockchain Tech Team Bruce, Vice President of Research & Development He has 14 years of experience in software development and framework design and was once chief architect of Huawei’s open platform. Apart from being an experienced software architect, he is also familiar with C++, Java, network communications, distributed systems, P2P networks, design patterns, data architecture and blockchain. Jeason, Senior Blockchain Engineer He has a Master’s degree in software engineering from the University of Science and Technology of China. He has worked at Cisco and Intel Asia-Pacific Research and Development Co., Ltd. He started working in blockchain in 2015; he is familiar with low-level blockchain technology and has smart-contract development experience, and he also co-edited the book Blockchain Technology Development Guide. 
Elvis, Senior Security Specialist He worked as a specialist for Aliyun Feitian Virtualization and the Ali Co-op Security Department; he has 9 years of experience in the security industry and 2 years of experience in cloud computing virtualization. He is familiar with mainstream penetration tools and malicious-code attack analysis, and specializes in the Linux and Windows kernels. Regulus, Senior Blockchain Engineer He has over 12 years of experience in communications software development and architecture design. He has worked at Huawei and the China Mobile Research Institute, and is a former senior researcher at the China Mobile Co-op Research Center. Jimmy, Senior Blockchain Engineer He has 13 years of experience in communications software development and design. He is a former Ericsson 4G product architect and a team member of the 5G core network global architecture team. He is responsible for the design and development of the network DHCP service and the user verification service. He specializes in virtualization, cloud computing and microservice architecture. Richard, Senior Engineer He has 18 years of hands-on programming experience and is proficient in the Linux kernel and low-level Android development. He has worked for Motorola, Huawei and other Fortune 500 companies. Allan, Senior Engineer Allan is a former chief architect of Tencent QQLive and NetEase Video Cloud. He has worked for Cisco and Tencent in software development and architecture design; he has over 11 years of experience in Internet technology development, is proficient in large-scale Internet service architecture, and excels at Internet back-ends. He has led the development of many Internet products with daily PV over 200m. Victor Wang, Senior Smart Contract Development Engineer Victor has worked at icube, cconchip, VIA, pathscale and other chip companies. He has 10 years of experience in compiler development. Steve, Senior Blockchain Test Architect He has 14 years of software-testing experience at multinational companies and has worked as a test architect for several of the world’s top-500 consulting firms. He has an MBA from The University of Hong Kong and an Innovation Investment and Management Certification from Stanford University. Dean, Senior Blockchain Testing, Maintenance and Operations Engineer Dean was the leader of Taobao Movie testing and the testing manager of Tianfeng Securities. He has worked at Citi Bank, EMC, Alibaba and many other reputable companies, with more than 7 years of experience in testing. Tower, Senior Blockchain Engineer He was once the system architect of the OSN one-platform exchange, where he led the technical analysis of transport and categorization for the next-generation forwarding architecture. He has more than 15 years of development experience and is familiar with Linux user-mode and kernel-mode software. Taz, Senior Blockchain Engineer Taz previously worked for SNDA and has more than 8 years of coding experience. He was once responsible for IM back-end development and the CRM server side at SNDA’s innovation research center. He is familiar with high-performance C, C++ and Python coding, distributed back-end service development and more. Klaus, Senior Java Developer He has 5 years of experience in architecture design and development for Internet business scenarios. He is familiar with Java, including high-concurrency and data-analytics frameworks such as Netty, Storm, Hadoop, HBase and Spark. 
He has provided tech solutions to companies like eBay, China UnionPay Merchant Services Co., Ltd, Huifu Limited and many more. Hansen, Senior Blockchain Test Architect He has more than 13 years of experience in testing. At Huawei he served as SMSGW testing manager and MMSC testing leader, responsible for testing for global operators and many large-scale software systems. He specializes in protocol testing, performance/capacity, reliability, automation, cluster testing, project quality management and so on. Victor, Web Developer He has 3 years of experience in web-related work. He is familiar with JavaScript, writing front-end and back-end frameworks, general components and all kinds of tools and modules. He is familiar with page development and has studied front-end effects and cutting-edge web tech. He is an open source enthusiast who has read the code of many projects and published plenty of open source code of his own. He specializes in building data modules for web systems and delivering projects. 3. Ecosystem Partners It takes two to tango. To build a decentralized cloud computing platform and a complete AI ecosystem, it's necessary to have partners from the industry and related industries. Over the past half year, we have established partnerships with many quality projects, enriching our ecosystem. OneGame OneGame is a decentralized virtual world built on top of the DeepBrain Chain AI foundation chain. AI algorithms are OneGame's core competitive advantage. DeepBrain Chain's powerful platform provides the necessary computing resources, AI algorithms and data support for OneGame. BKBT On May 20th, DeepBrain Chain reached a strategic partnership with BKBT. DBC will provide an AI deep learning computing platform for BKBT, which will provide DBC's Chinese community members with a quality information app and a place to exchange opinions and interact with the team. Nearly 40,000 followers now subscribe to our BKBT account. Thailand's Bitcoin Addict Community DeepBrain Chain entered a media partnership with Thailand's biggest cryptocurrency community, Bitcoin Addict. The two parties will actively promote the development of DBC Thailand's community and brand awareness, and will share resources to facilitate information flow and communications, so as to build a strong community. SingularityNET (AGI) On June 11th, DeepBrain Chain announced a partnership with SingularityNET, a leading full-stack open source platform for innovators in AI. The partnership will allow SingularityNET to offer AI agents on their data marketplace the option to power algorithms and link data sets via DeepBrain Chain's network. The deal will also allow for connections between agents that link one service to another. With computing power supplied by DeepBrain Chain, AI agents on SingularityNET will find development much faster and cheaper. EtainPower EtainPower (token: EPR) is a smart energy trading platform built on top of the decentralized AI computing platform, DeepBrain Chain. It uses blockchain and smart contracts to tokenize energy, changing the way sustainable energy projects get financing and the way energy is traded and circulated. AI algorithms on DeepBrain Chain will provide underlying infrastructure support for EtainPower to build a powerful AI energy ecosystem. ARM (AI Ecosystem Consortium, AIEC) The world's leading AI cloud computing platform, DeepBrain Chain, announced it has joined the ARM AI Ecosystem Consortium (AIEC) as an important member. 
AIEC aims to rally upstream and downstream businesses in the industry around the goal of deploying AI in real-life scenarios, building a new interactive ecosystem spanning data, algorithms and chips: a straight highway between cloud and terminal that accelerates AI's industrialization. 4. Community Development As we grow, our community members come not only from English-speaking areas but from around the world. Community Activities On April 24th, we started a sticker competition inviting community members to make DBC-related stickers, and were greatly encouraged by the passion and creativity they showed in supporting this project. Click for the competition results and sticker collection. On April 28th, we live-streamed the first Joya HODL to better interact with the community. It was well received by our community members, who want more of this kind of direct and effective communication. On May 15th, we held the first DBC AIM AMA on Reddit to answer questions related to our AI Miners. On June 8th, our CEO did a live broadcast at TokenClub with news about our AIM. This live stream attracted more than 80,000 viewers with thousands of questions, to which Feng gave detailed and satisfactory answers. Community Progress Our community will be like our decentralized node network, scattered around the world. Video: YouTube, Youku (Chinese YouTube). Media platforms: Medium, Jianshu, Jinse Finance, Bihu, Apple Podcasts. Social media platforms: Twitter, Facebook. Telegram English: 12,676 (https://t.me/deepbrainchain); Telegram Korean: 1,060 (https://t.me/DeepBrainChainKor); Telegram Vietnam: 1,647 (https://t.me/DeepBrainChainVietnam); Telegram Indonesia: 1,682 (https://t.me/DeepBrainChainIndonesia); Telegram Thailand: 2,355 (https://t.me/DeepBrainChainThai); Telegram AI Miners: 2,182 (https://t.me/DeepBrainChainAIminers); Twitter: 44,327 (http://twitter.com/DeepBrainChain); Reddit: 8,225 (http://reddit.com/r/DeepBrainChain/); Facebook: 2,457 (https://www.facebook.com/OfficialDeepBrainChain/); BKBT: 39,041. Website Upgrading We now offer Chinese, Korean and Vietnamese language support in addition to the English version, and we have simplified the user interface. Registered users on our website have now reached 40,000. In the future, we will provide more languages to make the site more convenient for the community. We also launched a bounty system. After registering on our website, users can earn DBPoints by referring the website to their friends, binding their DBC account to their social media accounts, retweeting our tweets and getting their wallet address verified. DBPoints measure users' contributions to DeepBrain Chain, and their ranking determines their rewards. 5. Marketing North America Blockchain Connect Conference - San Francisco Date: 2018-01-26 Our CEO was invited to attend this first conference, organized jointly by teams on both sides of the Pacific, and gave a keynote entitled "Artificial Intelligence Computing Platform Driven By Blockchain Technology". NEO DevCon - Silicon Valley Date: 2018-01-31 Our CEO was invited to speak at the NEO DevCon themed "The Moon Shot" in Silicon Valley. CPC Cryptocurrency and Exchange Conference Date: 2018-03-01 Our CEO gave a speech entitled "A Future of AI Built on Blockchain" at the conference. GDIS (Global Disruptive Innovation Summit) Date: 2018-05-01 Our CAO was invited to give a speech at GDIS. Korea CMO's Interview with Asia Economic TV (아시아경제) Date: 2018-02-07 During the interview, our CMO illustrated how a 7-year journey in AI led the team to the idea of combining AI with blockchain. 
Blockchain 3.0 Conference Seoul 2018 Date: 2018-02-09 CMO Lee attended the Seoul Blockchain Conference and gave a keynote speech titled "Using Blockchain and Token to Unleash the Power of AI". TokenSky - Korea (TokenSky Blockchain Conference Seoul Session) Date: 2018-03-14 to 2018-03-15 DeepBrain Chain was invited to attend the TokenSky Blockchain Conference in Seoul. DeepBrain Chain's Global Tech Meetup - 1st stop: Seoul Date: 2018-04-27 This event attracted more than 300 Korean community members. Europe DeepBrain Chain's First Europe Tour Starting January 8th, 2018, DeepBrain Chain's CEO Yong He was invited by the NEO community to join a 5-day NEO meetup tour in Europe. Yong He presented DeepBrain Chain's vision, technology and roadmap to investors. Congress BlockchainRF-2018 - Russia Between March 27th and 28th, 2018, DeepBrain Chain's CEO and CMO attended the Russian Blockchain Expo held by RACIB (Russian Association of Cryptocurrency and Blockchain). Blockchain EXPO - The Netherlands Date: 2018-06-27 to 2018-06-28 DeepBrain Chain attended Blockchain Expo in Amsterdam, spreading our vision of building a decentralized AI cloud in the blockchain era. Stuttgart Meetup - Germany On July 12th, 2018, we held a joint meetup with Robert Bosch Venture Capital (RBVC) in Stuttgart, Germany, and introduced our latest progress to representatives from numerous European companies. South-east Asia Thailand From April 18th to 19th, 2018, we attended the Tokenomx conference in Thailand and introduced our project to the Thai audience. Vietnam On April 22nd, 2018, we attended the "Build the Future, Break the Limit" summit in Vietnam and introduced our solutions to the Vietnamese community. Indonesia On May 9th, 2018, we attended the "Block Jakarta 2018" blockchain conference, where our Senior Market Manager, Eric, was invited to speak as a guest. China CES ASIA - June 15th CEO Yong He attended the Asia Consumer Electronics Exhibition - Blockchain Industry Application Forum. The Blockchain Technology and Industry Application Seminar - Martti Malmi Offline Meeting - June 27th One of the first Bitcoin developers, Martti Malmi, made his first trip to Shanghai for the Blockchain Technology and Industry Application Seminar, where our CEO was invited to introduce DeepBrain Chain. Global AI and Robotics Summit - Shenzhen - June 29th At the AI and Robotics Summit, our CEO introduced our project to an audience of more than 200 AI professionals and investors, in addition to the 300,000 viewers tuning in via a live streaming platform. World Blockchain Conference - Wuzhen - June 30th Our CMO was invited to a round table at the World Blockchain Conference in Wuzhen, discussing how AI and chips powered by blockchain can bring about changes to the future. CSDN AI Computing Power Closed-Door Forum - July 21st At this "Winning in The Era of Computing Power" workshop held by CSDN, the most popular developers' community in China, we attracted an audience from the Chinese Academy of Sciences, PerfXLab and other famous AI companies and institutes. This is Blockchain Tech Meetup - Korea - July 27th At the invitation of the Korean Gyeonggido Business & Science Accelerator, we attended the "This Is Blockchain" tech conference and introduced the DeepBrain Chain project, its innovations in blockchain and AI, and the "Skynet Project" to an audience of AI companies. 6. Branding Media Reports PYMNTS: Dr. 
Dongyan Wang Interview https://www.pymnts.com/blockchain/2018/deepbrain-chain-skynet-ai-computing/ - 2018-06-26 GlobalCoin Report: DeepBrain Chain announces ''Skynet Project'' - 2018-06-25 Inc: https://www.inc.com/wanda-thibodeaux/ai-is-awesome-blockchain-is-a-powerhouse-but-heres-what-combining-them-could-do.html VentureBeat: https://venturebeat.com/2018/01/12/deepbrain-chain-the-first-artificial-intelligence-computing-platform-driven-by-blockchain/ Yahoo Finance: https://finance.yahoo.com/news/deepbrain-chain-announces-ai-global-185400752.html Seeking Alpha: https://seekingalpha.com/article/4185276-taiwan-semiconductor-manufacturing-company-crypto-blockchain?page=7 Business Insider: http://markets.businessinsider.com/news/stocks/deepbrain-chain-s-ai-miners-attract-more-than-100m-here-are-three-things-you-need-to-know-1026092705 Forbes: https://www.forbes.com/sites/rogeraitken/2018/05/31/can-the-ai-blockchain-combo-finally-crack-the-crypto-market/#5f65e4fc5a13 - 2018-05-31 CoinJournal: DeepBrain Chain Builds AI+Blockchain Lab in Silicon Valley https://coinjournal.net/deepbrain-chain-to-launch-ai-blockchain-research-center-in-silicon-valley/ - 2018-05-09 Awards DeepBrain Chain Awarded ''Most Outstanding Technology Award'' at GBLS (Global Sleepless Blockchain Leadership Summit in Hangzhou) On June 6th, 2018, the Hangzhou government held the Global Sleepless Blockchain Leadership Summit. DeepBrain Chain received the ''Most Outstanding Technology Award'' for its innovative business model using blockchain technology to solve the three biggest hurdles in the AI industry: high computing costs, low data security and long product launch cycles. Global AI and Robotics Summit ''Best Future Potential'' Award At the Global AI and Robotics Summit on July 1st, after deliberation by a dozen experts, DeepBrain Chain was chosen out of 128 AI projects to receive the ''Best Future Potential'' award. 7. Timeline 2018-08-08: AI Training Net live; DBC can be used to buy GPU computing power. 2018-10-31: AI computing mining live. 2018-12-31: DeepBrain Chain main chain live. 2019-03-31: Consolidate the AI Training Net and mining onto the DeepBrain Chain main chain. 2019-06-30: Data and image marketplace live on the platform. 8. Important Events To Come DeepBrain Chain AI Training Net Goes Live - August 8th The DeepBrain Chain AI Training Net will officially go live on August 8th. From then on, AI companies can conduct real-life AI training on our AI Training Net, using DBC tokens to pay for training and buy AI computing resources. Computing power providers will receive DBC as a reward. DeepBrain Chain's officially issued AIM machines will be delivered and start mining at the end of October. DeepBrain Chain's main chain is expected to be live by the end of 2018. Silicon Valley International Blockchain Expo - October 1st The first International Blockchain Expo will be held in Silicon Valley, U.S., in October 2018. The expo is co-hosted by International Blockchain Expo Limited and SVBI; with the help of DeepBrain Chain, the event will support payment in DBC and ETH for tickets and sponsorships. International Blockchain Expo Limited aims to create great blockchain technology and product exhibitions around the globe, helping to accelerate the real-life application of blockchain innovation and technology. The expo will also take on another hot topic, AI, and explore a future of ''Blockchain + AI''. 
Conclusion There is a theory in evolution called ''punctuated equilibrium'': the world evolves in a certain direction until a sudden, fundamental change occurs. The fossil record evolves at a steady rate until a completely different sediment layer suddenly appears; the corrupt social power structures that lasted for hundreds of years in the Middle Ages were turned upside down by the invention of printing; when the telephone first appeared, Western Union thought it was too flawed to ever be a tool for communication. The proof of the pudding is in the eating. Sixty years of AI development will be disrupted and reordered by the decentralized computing platform DeepBrain Chain is building. This new order will bring the world a punctuated equilibrium. "It took humankind thousands of years to go from hunting to agricultural life; from agricultural life it took thousands more years to reach industrial life; from the industrial age to the atomic age took only 200 years; and it took only decades more to enter the information age." Human civilization is evolving rapidly, and we are all part of it. Click to watch: DeepBrain Chain Intro Vid
DeepBrain Chain Semi-annual Report June 2018
136
deepbrain-chain-semi-annually-report-june-2018-15de4a776da5
2018-08-13
2018-08-13 04:00:40
https://medium.com/s/story/deepbrain-chain-semi-annually-report-june-2018-15de4a776da5
false
4,330
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
DeepBrain Chain
AI Computing Platform Driven By Blockchain
379a9e7edef2
DeepBrain_Chain
960
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-29
2018-07-29 10:49:49
2018-07-29
2018-07-29 10:51:51
3
false
en
2018-07-29
2018-07-29 16:28:44
6
15df1ff6d71e
2.938679
8
0
0
The history of robots has its genesis in the early ages. The advent of the industrial revolution led to the development of modern…
4
KAMBRIA REVIEW The history of robots has its genesis in the early ages. The advent of the industrial revolution led to the development of modern innovation, which allowed the use of complex mechanics and semantics, and then the subsequent introduction of electricity. This made it possible to operate machines with small, compact engines. At the beginning of the 20th century, the notion of a humanoid machine was developed. Nowadays, we can imagine robots of human dimensions with the capacity for almost human thoughts and movements. The first uses of modern robots were in huge industrial factories, as industrial robots: to help boost production at faster rates than human effort allows, as well as to lift and move heavy equipment. Digitally controlled robots, as well as industrial robots that use artificial intelligence, have been built since the 2000s. The advance of robotics has mainly benefited the manufacturing industry, largely because of how expensive it is, and has thus had no meaningful impact on the day-to-day life of humans. This is one of the challenges hindering the acceptance and spread of robotics around the world. Some of the challenges are: Very expensive and slow development of robotic applications: this includes (a) insufficient foundations and high-level abstractions for robot programming; (b) absence of open platforms and development tools to accelerate innovation; (c) most exploratory development being bootstrapped and unsupported; and (d) capital funding being offered only to large-scale or already-developed ventures. Absence of good interfaces and abstraction layers for programming, electrical, and mechanical frameworks. Slow turnaround, high minimums, and poor interfaces from "conventional" manufacturers. Absence of tools, semantics, and methods to share parts of designs in distributed form. These challenges, among others, are why the expansion of the robotics industry has been so stagnant. To enhance open innovation in robotics, KAMBRIA is set to eliminate the challenges within the industry and promote the evolution of robotic technology globally. Kambria is the first blockchain project to build an open innovation platform; it allows and encourages collaboration in the R&D, production and marketing of advanced technologies, with a focus on AI and robotics applications in the consumer space. With a mission to stimulate innovation by making the development and adoption of robotics technology cheaper, faster and more accessible, the KAMBRIA platform will use blockchain technology and crypto-economics across its ecosystem. KAMBRIA PLATFORM Built on blockchain technology, KAMBRIA will introduce cryptocurrencies into the development cycle of robotics. Crypto-economics benefits the KAMBRIA platform in the following ways: (1) exploiting network effects to add compelling technology; (2) providing intermediation for end-to-end robotics investment problems; (3) providing economic incentives to investors; (4) recognizing and reporting violations of legal rights to reduce the "free rider" effect; and (5) achieving an extremely low barrier to entry so that individuals and small groups can collaborate. KAMBRIA TOKENS The KAMBRIA platform will be supported by its two unique tokens, KAT and Kambria Karma. KAT is an ERC20 utility token deployed by KTI (Kambria Token International) as a standard smart contract on the Ethereum blockchain. 
KAT's function is to serve as payment for all forms of transactions on the Kambria platform, as well as reward incentives for stakeholders. Kambria Karma, on the other hand, is not an ERC20 token but a non-tradeable ledger entry for KAMBRIA wallet addresses, used to track the work done by stakeholders on the Kambria platform. Kambria Karma is awarded for active participation and given as an incentive to encourage useful work. For more info, kindly visit any of the following channels: Homepage: https://kambria.io/ Whitepaper: https://kambria.io/Kambria_White_Paper_v2_20180615.pdf Medium: https://medium.com/@teamkambria Facebook: https://facebook.com/KambriaNetwork Twitter: https://twitter.com/KambriaNetwork Telegram: https://t.me/kambriaofficial Bounty0x Username: Phlaser247 Disclaimer: This article was created in exchange for a potential token reward through Bounty0x
KAMBRIA REVIEW
319
kambria-review-15df1ff6d71e
2018-07-29
2018-07-29 16:28:44
https://medium.com/s/story/kambria-review-15df1ff6d71e
false
633
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Phil Cardinal
null
e9e40ef96626
philcardinal
259
436
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 08:41:47
2017-09-13
2017-09-13 08:53:08
1
false
en
2017-09-13
2017-09-13 09:32:44
0
15dfe05919d7
2.188679
43
1
0
This blog post covers the NumPy and pandas array data objects, main characteristics and differences.
5
pandas and NumPy arrays explained This blog post covers the NumPy and pandas array data objects, their main characteristics and differences. What are NumPy and pandas? NumPy is an open source Python library used for scientific computing that provides a host of features allowing a Python programmer to work with high-performance arrays and matrices. pandas is a package for data manipulation that brings the DataFrame object from R (as well as ideas from different R packages) into a Python environment. NumPy and pandas are often used together, as the pandas library relies heavily on the NumPy array for the implementation of pandas data objects and shares many of its features. In addition, pandas builds upon functionality provided by NumPy. Both libraries belong to what is known as the SciPy stack, a set of Python libraries used for scientific computing. The Anaconda Scientific Python distribution from Continuum Analytics installs both pandas and NumPy as part of the default installation. NumPy arrays NumPy allows you to work with high-performance arrays and matrices. Its main data object is the ndarray, an N-dimensional array type which describes a collection of "items" of the same type. For example: >>> import numpy as np # importing the library >>> a1 = np.array([1, 2, 3, 4, 5]) # defining the ndarray >>> a1 array([1, 2, 3, 4, 5]) # output ndarrays are stored more efficiently than Python lists and allow mathematical operations to be vectorized, which results in significantly higher performance than with looping constructs in Python. NumPy arrays allow for selecting array elements, logical operations, slicing, reshaping, combining (also known as "stacking"), splitting, as well as a number of numerical methods (min, max, mean, standard deviation, variance and more). All these concepts can be applied to pandas objects, which extend these capabilities to provide a much richer and more expressive means of representing and manipulating data than is offered with NumPy arrays. pandas Series Object The Series is the primary building block of pandas. A Series represents a one-dimensional labeled indexed array based on the NumPy ndarray. Like an array, a Series can hold zero or more values of any single data type. A Series can be created and initialized by passing either a scalar value, a NumPy ndarray, a Python list, or a Python dict as the data parameter of the Series constructor; an example of defining a Series from an ndarray appears in the sketch at the end of this post. Differences between ndarrays and Series Objects There are some differences worth noting between ndarrays and Series objects. First of all, elements in NumPy arrays are accessed by their integer position, starting with zero for the first element. A pandas Series object is more flexible, as you can define your own labeled index to index and access elements of an array. You can also use letters instead of numbers, or number an array in descending order instead of ascending order. Second, aligning data from different Series and matching labels with Series objects is more efficient than using ndarrays, for example when dealing with missing values. If there are no matching labels during alignment, pandas returns NaN (not a number) so that the operation does not fail. Source: "Learning pandas", Michael Heyd (Packt Publishing).
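To make the Series behavior concrete, here is a minimal sketch (the values and index labels are illustrative, not taken from the original post) showing label-based access and automatic alignment:

```python
import numpy as np
import pandas as pd

# Build a Series from an ndarray with a custom labeled index.
data = np.array([10, 20, 30, 40])
s = pd.Series(data, index=['a', 'b', 'c', 'd'])

print(s['b'])     # access by label -> 20
print(s.iloc[1])  # access by integer position -> 20

# Alignment: non-matching labels produce NaN instead of an error.
s2 = pd.Series([1, 2, 3], index=['b', 'c', 'e'])
print(s + s2)     # 'b' and 'c' are summed; 'a', 'd', 'e' become NaN
```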
pandas and NumPy arrays explained
283
pandas-series-objects-and-numpy-arrays-15dfe05919d7
2018-06-13
2018-06-13 15:02:47
https://medium.com/s/story/pandas-series-objects-and-numpy-arrays-15dfe05919d7
false
527
null
null
null
null
null
null
null
null
null
Python
python
Python
20,142
Eric van Rees
Writer and editor. Interested in all things geospatial.
2693e64e6dd7
ericvanrees
58
33
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-18
2018-08-18 04:09:57
2018-08-18
2018-08-18 05:23:31
1
false
en
2018-08-18
2018-08-18 05:23:31
5
15e0b61b9cfa
3.622642
7
0
0
This Independence day was different. I was back home. And I had developed skills that I didn’t have earlier. So I thought of combining…
5
Celebrating India’s Independence Day with Mahatma Gandhi and Data Science This Independence Day was different. I was back home. And I had developed skills that I didn’t have earlier. So I thought of combining Independence Day celebrations with my newfound data skills. Also, I had nothing to do on a holiday evening. I built a system that recognized me and the father of our nation, Mahatma Gandhi, and after recognizing us, spoke a few lines about us. That’s about it. I know it’s not something very fancy, but for the limited time of a couple of hours, I think it is a decent project. Let me divide the description of my project into two essential parts. First we’ll look into the face recognition bit and then we’ll look into the speech part. I will not get into the details of how the technology actually works behind the scenes, because then I would have to turn this blog post into a book, but that’s for another day. Instead, I’ll just explain how I made it work and maybe you can do it too! Let’s begin. Screenshot from the video. Video can be found on my LinkedIn. Face Recognition For the face recognition part, I used Adam Geitgey’s amazing face_recognition library in Python. His face_recognition library is built on top of dlib, a C++ library created by Davis King. If you’re interested in knowing how it actually recognizes faces from videos and images, Adam Geitgey has a blog post on it. I’d recommend you go through it first before starting your project. It’ll help you form an intuition for what you will be building using this library. You can read it here: https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78. And you can check out the face_recognition library here: https://github.com/ageitgey/face_recognition So in my particular case, I required a sample image of Mahatma Gandhi and of myself to be able to run the face recognition system. To get Gandhi’s images, I had to do a small Google image search. And for my picture, well, I just had to take a selfie! These images are then stored in a list named known_faces. We then extract face encodings from each face. A face encoding gives you a unique identifier for each person’s face. After you extract face encodings for all the faces, it’s only a matter of comparing the encodings of the known faces to the faces in the webcam feed. If they are within a threshold, the person in the webcam is assigned a label from the known faces. If not, you can set a default message to be shown. (A sketch of this loop appears a little further below.) Alright, with this we are done with the face recognition part of the project! Text I wanted the computer to read out something after it recognizes my and Mahatma Gandhi’s face. For Mahatma Gandhi, I used Wikipedia’s REST API to fetch a paragraph from his Wikipedia bio, using the requests library in Python to perform the API calls. Since I don’t have a Wikipedia page dedicated to me, I decided to give out a message to the people who struggled for India’s independence. I used Google Translate to get the message in Hindi. Let’s now see how we can convert these texts to speech. Text to Speech There are a couple of libraries in Python that help you do text-to-speech conversion. In this case, I tried out pyttsx3 and gTTS and finally went ahead with gTTS. I found the voice in pyttsx3 to be a bit more robotic. pyttsx3 is useful when the speech has to be generated offline. On the other hand, gTTS is a CLI tool to interface with Google Translate’s text-to-speech API. It writes spoken mp3 data to a file. 
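Here is a minimal sketch of the recognition loop just described, using the face_recognition and OpenCV libraries; the image file names and label strings are hypothetical placeholders, not the files from the original project:

```python
import cv2
import face_recognition

# Reference images: one encoding per known face.
# (File names are placeholders, assuming one clear face per image.)
known_images = {'Gandhi': 'gandhi.jpg', 'Me': 'selfie.jpg'}
known_names, known_encodings = [], []
for name, path in known_images.items():
    image = face_recognition.load_image_file(path)
    known_encodings.append(face_recognition.face_encodings(image)[0])
    known_names.append(name)

video = cv2.VideoCapture(0)  # webcam feed
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = frame[:, :, ::-1]  # OpenCV gives BGR; face_recognition expects RGB
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        name = 'Unknown'  # default message when no known face is within tolerance
        if True in matches:
            name = known_names[matches.index(True)]
        print(name)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
```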
For my demo, I needed the spoken mp3 data to be read out as soon as the face had been recognized. This was not possible with gTTS directly, since it writes to a file by design. So I searched around the internet, and people recommended storing the spoken text in a temporary file and playing that file with some kind of mp3 player. I experimented with a few and found pygame to be useful for loading an mp3 file and playing it. The final piece of the puzzle was solved and now I had a ready system. Well, not quite. My face was being recognized in every frame, and each recognition triggered a new text-to-speech command. This meant that even before the file could be read out completely, another command would start it all over again. This was not what I wanted. I hacked up a way around it: I initialized a flag to 0 and set it to 1 once a face had been recognized. This solved the problem of repeating the TTS conversion for every frame, but it is kinda hacky (a sketch follows below). Let me know if you can think of a better solution! Conclusion This was a fun project. I had limited time and wanted to do something to celebrate our 72nd Independence Day, so I came up with something like this. Thank you for reading. For more posts on data and the world, follow me here: https://medium.com/@Imaadmkhan1. For interesting content on Data Science and Machine Learning regularly, follow me on LinkedIn. You can find my LinkedIn here: https://in.linkedin.com/in/imaad-mohamed-khan-218b3999 You can find the code for the project here: https://github.com/imaadmkhan1/independence-day-project/
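And here is a rough sketch of the playback workaround described above: gTTS writes the mp3 to a file, pygame plays it, and a simple flag keeps the speech from retriggering on every frame. Function and variable names here are illustrative assumptions, not the author's exact code:

```python
import pygame
from gtts import gTTS

pygame.mixer.init()
spoken_once = False  # the flag: speak only on the first recognition


def speak(text, lang='en', path='speech.mp3'):
    """Write spoken mp3 data with gTTS, then play it back with pygame."""
    tts = gTTS(text=text, lang=lang)
    tts.save(path)                        # gTTS writes to a file by design
    pygame.mixer.music.load(path)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():  # block until playback finishes
        pygame.time.Clock().tick(10)


# Stand-in for the label produced inside the recognition loop:
name = 'Gandhi'
if name != 'Unknown' and not spoken_once:
    speak('Recognized ' + name)
    spoken_once = True
```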
Celebrating India’s Independence Day with Mahatma Gandhi and Data Science
10
celebrating-indias-independence-day-with-mahatma-gandhi-and-data-science-15e0b61b9cfa
2018-08-18
2018-08-18 05:23:32
https://medium.com/s/story/celebrating-indias-independence-day-with-mahatma-gandhi-and-data-science-15e0b61b9cfa
false
907
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Imaad Mohamed Khan
Writing at the intersection of data and the world.
8eef5dfbb861
Imaadmkhan1
149
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-14
2018-03-14 03:42:41
2018-03-14
2018-03-14 03:56:28
2
false
en
2018-03-14
2018-03-14 03:59:57
6
15e1c4608313
3.504088
3
0
0
Quick guide: Top 4 video annotation tools on the market and tips for pairing technology with people for ultra-precise data enrichment.
4
Top Tools and Workforce Tips to Scale Your Video Annotation “[Video] data annotation is super labor-intensive. Each hour of data collected takes almost 800 human hours to annotate. How are you going to scale that?” - Sameep Tandon, CEO of Drive.ai, an autonomous car startup in Silicon Valley and CloudFactory client We are living in interesting times, where advancements in artificial intelligence (AI) are powering transformative technologies that are changing our everyday lives. These advancements, and specifically deep learning, have accelerated the development of computer vision applications for autonomous vehicles. Training these AI systems requires curating and preparing massive video datasets, a process that can take thousands of hours of accurate annotation. Scaling video annotation is significantly more challenging than scaling image annotation. Just 10 minutes of video contains between 18,000 and 36,000 frames, at a rate of 30-60 frames per second. Frame-by-frame video annotation is time-consuming and can be cost-prohibitive, becoming a significant roadblock for tech innovators trying to beat the competition to market. A growing number of companies, from startup to enterprise, are pairing annotation tools with an augmented workforce to scale video annotation for high-quality training datasets. They don’t want nurturing in-house teams or building a custom annotation tool to distract them from their core vision. Crowdsourcing platforms can be a viable option that provides access to a scalable workforce and off-the-shelf annotation tools. However, crowdsourcing deploys anonymous workers, and its limited annotation tooling functionality can be a major pain point for vision-based technologies where ultra-precise data annotation is crucial for human safety. There are a few managed workforce providers in the market with trained workers who have extensive experience doing annotation tasks and produce higher-quality training data. However, many of them require their clients to use proprietary annotation tools within their platform and restrict clients from using the annotation tool of their choice. At CloudFactory, we recommend selecting the annotation tool that works best for your needs and maintaining it within your tech stack. There are many benefits that come with owning your video annotation tool: Gain competitive advantage by establishing your own unique process for annotating data within the tool of your choice; this is often where you can spot and leverage differentiators. Mitigate unintended bias in machine learning models by configuring the annotation tool according to your needs. Using a crowdsourcing provider’s off-the-shelf tool could introduce their bias into data annotation tasks. Make changes to software quickly and with agility, using your own developers. You don’t have to worry about hefty fees when the software scope changes. Exert greater control over security for your system. By having the tool in your stack, you can apply the exact technical controls that meet your company’s unique security requirements. Select the vendors of your choice to help achieve your objectives, instead of being locked in with one provider. When you own the tool, the workforce can plug into your task workflow more easily. Top 4 Video Annotation Tools Computer vision algorithms require annotated data that provides a deeper understanding of the actions and interactions of different objects (individuals and groups) in each video frame. 
This is beyond just identifying the name and location of the object, as is the case with image annotation. There are many video annotation tools on the market for getting ground truth for machine learning models. The right video annotation tool is user-friendly, minimizes human effort, and maximizes annotation quality. Here’s a quick guide to the top four video annotation tools on the market. What to Look For in Your Annotation Workforce Once you’ve selected your annotation tool, consider your workforce requirements. Video annotation is a specialized skill that requires hands-on training and coaching to achieve maximum accuracy. For best results, your workforce should be screened for proficiency with annotation tasks and receive ongoing training to improve their skills. Whether your workforce is annotating raw video or running quality control checks on annotated video, it helps when the workforce feels like an extension of your team. Look for a workforce provider that can facilitate easy communication with your workforce to incorporate feedback and improve quality, especially when accuracy is important. Ask if your workforce can help you optimize your tool over the long term by providing feedback to improve your efficiency and user experience. CloudFactory’s teams annotate videos for innovative companies like Cruise Automation and Drive.ai. Our workforce is trained to annotate static and moving objects frame by frame, within five pixels. We draw ultra-precise bounding boxes around objects like vehicles, pedestrians, construction roadblocks, signs, and traffic lights for autonomous driving systems. We can do timeline labeling to tag events, such as a vehicle making a right turn. We can also categorize annotated video frames for consumption by computer vision algorithms. If you need a workforce to annotate video or check the quality of your annotation, contact us. Originally published at blog.cloudfactory.com.
Top Tools and Workforce Tips to Scale Your Video Annotation
26
top-video-annotation-tools-15e1c4608313
2018-04-16
2018-04-16 11:55:14
https://medium.com/s/story/top-video-annotation-tools-15e1c4608313
false
827
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
CloudFactory
CloudFactory provides an on-demand, digital workforce for scaling your business in the cloud. https://www.cloudfactory.com/
8aebef586dc0
thecloudfactory
769
325
20,181,104
null
null
null
null
null
null
0
null
0
721b17443fd5
2018-07-08
2018-07-08 13:02:02
2018-07-08
2018-07-08 13:41:38
2
false
en
2018-07-12
2018-07-12 05:45:44
6
15e4fa47c609
12.564465
6
0
0
This post is for beginners in deep learning. I have written my experience about the book ‘Deep learning with Python’ and what I learnt from…
5
Deep learning with Python This post is for beginners in deep learning. I have written about my experience with the book ‘Deep Learning with Python’ and what I learnt from it. This book is written by François Chollet, also the author of Keras. Review A book for anyone who wants to start a career in deep learning or even has some interest in deep learning. It covers all problem classes and solutions in the field of deep learning. After reading this book, you will be equipped with the skills to classify an image, predict the weather, generate text, etc. I started this book one week ago. I had done some machine learning courses, but deep learning was completely new to me. I wanted to understand neural networks for a college project. So I searched for neural networks and came across many courses and study materials. But finally, I found this perfect book for starting my study of deep learning. The author is good with his words and way of writing. He explains complex problems like a piece of cake. I found this book in the evening and read it the whole night. It is interesting to see the applications first and implement them after. It makes you confident and interested in further reading. I was interested in the field; but after reading this book, I am passionate about becoming a Data Scientist. I want to explore this field. I want to learn more. I want to contribute to this field. Thanks to François Chollet for writing this book. What is inside this book? The book is divided into two parts: Fundamentals of Deep Learning Deep Learning in Practice The first part builds a solid foundation of deep learning and answers some important questions like ‘What is deep learning?’, ‘How is it different from AI and machine learning?’, ‘What can and can’t be done with deep learning?’ etc. Chapter 1) What is deep learning? “Deep learning, machine learning, artificial intelligence…” Suddenly everyone is talking about them. Most people have no idea about the difference between these terms. It is a hot topic of discussion everywhere. Do you have a clear understanding of these terms? If the answer to the previous question is no, then this chapter is for you. It explains these terms, the history related to them, etc. Artificial Intelligence: Can we make a machine think? This idea was born in the 1950s. It was the time of symbolic AI. The approach was to give machines some data and a set of rules, and the output was the answer. Until the 1980s, everyone was trying to build the best set of rules for better results. In the 1980s, people shifted their focus to another approach; it was the start of the machine learning era. Machine Learning: Give the machine sufficient data and answers, and let the machine make the rules. With this approach, a new era of machine learning started in the 1980s. Machine learning is a subfield of artificial intelligence, and deep learning is a subfield of machine learning. Deep Learning: Deep learning is an old subfield of machine learning, but everyone is talking about it now for these reasons: Hardware Datasets Algorithms Recent years have seen immense growth in available data, advancement in hardware performance and advancement in algorithms. Where other machine learning approaches reach saturation on big data, deep learning keeps providing better results with the large amounts of data being generated every moment. A lot of feature engineering is required in other machine learning approaches, and those approaches can learn only 1-2 layers of representation. 
So, these approaches are also known as shallow learning. Deep learning requires almost no feature engineering and can learn multiple layers of representation. Neural networks are used to learn these representations. Deep learning shows better results in: Image classification Speech recognition Machine translation Natural language processing, etc. There is a lot more detail about traditional machine learning methods and the future of deep learning given in this chapter. I am not going to discuss those here to keep the post short. Chapter 2) Before we begin: the mathematical building blocks of neural networks This chapter deals with the mathematical part of neural networks. Are you a curious person? Do you want to know how a machine learns? If yes, this chapter is for you. Have you heard the terms tensors, differentiation, gradient descent, etc.? Don’t worry if the answer is no. This chapter explains these and a few more terms from basic to advanced level. You will also get a first look at a neural network. The Keras library and the MNIST dataset are used to build and train the first neural network in this book. Q. What is a neural network? Ans. A neural network is a mathematical model. The term ‘neural’ comes from neurobiology. Some people say that a neural network is similar to our brain, but this book discards that analogy. Q. What are the building blocks of a neural network? Ans. The core building block of any neural network is the layer. A layer can be considered as a filter which extracts representational features from the data. Layers consist of many subunits, known as neurons. It may seem difficult at first, but I promise that after understanding the basic mathematics behind it, the neural network will be super easy for you and your first choice when dealing with any machine learning problem. We need to choose three more things before training our model. These are: A loss function An optimizer Metrics to monitor training and testing Let me help you build an intuition about neural networks. Imagine a ball. Consider this ball as a neuron. Let’s say we have 16 neurons. Arrange them in 4 columns like this. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 These columns are known as layers. Each layer can have any number of neurons. The first layer is known as the input layer and the last layer is known as the output layer. Layers between the input layer and the output layer are known as hidden layers. A neural network can have any number of layers, and any layer can have any number of neurons. That is the basic architecture of neural networks. Next, imagine a thread between any two neurons of two consecutive layers. This connecting thread between any two neurons of two consecutive layers is known as a weight. Each and every neuron in a layer is connected to each and every neuron of the previous layer and the next layer. These weights are numbers. For example, neuron 2 of layer 2 might be connected to neuron 5 of layer 3 by a weight of 0.4. Imagine a situation: you have taken a competitive exam comprising 3 sections (physics, chemistry and maths) for a job. You get 30 marks in physics, 70 marks in chemistry and 50 marks in maths. You will pass if the weighted sum of all three is more than 50. Now, if all three subjects have equal weightage, the weighted sum will be (0.33)*30 + (0.33)*70 + (0.33)*50 = 49.5. You don’t pass the exam. But what if no one passes the exam? The examination authority has to change the passing criteria. Now they give 50% weight to chemistry, 30% weight to maths and 20% weight to physics. 
Now your weighted sum is (0.2)*30 + (0.5)*70 + (0.3)*50 = 56. Congrats, you pass the exam. The examination authority plays the role of the optimizer in a neural network: it changes weights and bias values to allow or block features passing through a neuron. Each neuron has a bias value which decides what value can pass through it. Even if you pass the exam and get the job, the company may find out that you don’t have the skills required for the job, which is a loss for the company. The same idea applies to the ‘loss function’ of a neural network. The loss function is the deviation of a predicted value from the actual value required. The optimizer’s job is to minimise the loss function by changing weight and bias values. One more function, known as the ‘activation’, must be set for every layer. An activation is just a mathematical function which transforms the output of a layer. Example: a ‘relu’ activation transforms all negative outputs to zero. If you are interested in the mathematics behind these functions, then you can go through this chapter. Otherwise, we have the powerful Keras library, which takes care of all the background coding; we only have to choose a few hyper-parameters according to our problem. For the application, you have to follow these few simple steps (a short sketch appears a little further below): Build the model architecture by selecting the number of layers, neurons and activations. Compile the model by selecting a loss function, optimizer and metrics. Train the model on your data. Predict on test data. It is a simple neural network, but that’s all we need to build a baseline model with Keras. Go through this chapter if you want a deep and clear understanding of neural networks. Chapter 3) Getting started with neural networks This chapter introduces Keras and the components of neural networks. What problems can be solved using neural networks? Classes of problems: Classification Regression Classification problems can be further divided into 3 types: Binary classification Multiclass classification Multilabel classification Binary classification is when we need to classify between two options only. Example: dog vs cat. Multiclass classification is when we need to choose one option from many available options. Example: a digit (0-9). Multilabel classification is when we may need to choose multiple labels at once. Example: river, farm, mountain, etc. in a satellite image. Regression is predicting a floating-point number. Example: the price of a house. You can have a look at the Keras documentation, which is also covered in this chapter: https://keras.io After the introduction to Keras and neural networks, a binary classification problem is solved in this chapter. Dataset: IMDB review dataset. Problem: classify reviews as positive or negative. After that, a multiclass classification problem is solved. Dataset: Reuters dataset. Problem: classify Reuters newswires into 46 mutually exclusive topics. After that, a regression problem is solved. Dataset: Boston housing. Problem: Boston housing price prediction. You can look at the specific example according to your need. Chapter 4) Fundamentals of machine learning Machine learning can be divided into 4 branches: Supervised learning Unsupervised learning Self-supervised learning Reinforcement learning Supervised learning Labels have to be selected from known targets. Example: handwritten digit classification, in which labels can only be a digit between 0-9. Known targets are generally annotated by humans. Supervised learning is learning a map between input data and known targets. 
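As promised above, here is a minimal Keras sketch of the four-step workflow (build, compile, train, predict). The layer sizes and the randomly generated stand-in data are illustrative assumptions, not the book's exact example:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy stand-in data: 200 "exam takers", three section scores each.
x_train = np.random.random((200, 3))
y_train = (x_train.mean(axis=1) > 0.5).astype('float32')  # pass/fail labels

# Step 1: build the architecture (layers, neurons, activations).
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(3,)))
model.add(Dense(1, activation='sigmoid'))  # binary output: pass or fail

# Step 2: compile with a loss function, optimizer and metrics.
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])

# Step 3: train. Step 4: predict on new data.
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
predictions = model.predict(x_train[:5])
```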
Unsupervised learning No targets are provided for the model to learn. Instead, the model gives us interesting data transformations which can be used for data visualisation, clustering, dimensionality reduction, etc. Self-supervised learning It is supervised learning, but the labels are not annotated by humans. Labels are generated from the input data using a heuristic algorithm. Reinforcement learning It is still in the research phase. Models try to maximise some reward while learning from the environment. It has been used to play Go. Training, validation and test sets Data has to be divided into 3 sets before fitting the model: training, validation, and test sets. Q. Why a validation set? Why can’t we use only the test set for checking model performance? Ans. There are two types of variables when building and training any model: parameters and hyperparameters. The model learns parameters like weights and biases from the training data. But hyperparameters are to be set by humans. To set hyperparameters, we use the validation set. If we used the test set to select hyperparameters, then we would leak some information about the test data into the model. So it is better to use three sets: Training set: for training parameters Validation set: for the selection of hyperparameters Test set: for evaluating the final model performance Data preprocessing It would be a little difficult for a neural network to map relationships between raw data and labels. It is better to do some feature engineering before fitting data into the model. We can do vectorization, normalisation, missing value handling, and feature extraction. Vectorization Neural networks only accept data and targets in the form of tensors with floating-point values. Whether it is an image, text or anything else, we need to convert it into vectors. Normalization If the values of all features are not in the same range, then the model can become biased towards some features. It is better to normalize all values, for example into the range 0 to 1 (a sketch follows below). Handling missing values We can replace a missing value with 0, provided 0 is not used for anything else. Feature engineering Some simple transformations on the input data can make the mapping to the target values simpler. Do these transformations before fitting data into the model. Often, the model can’t find these simple transformations on its own if we give it the raw data. Overfitting and underfitting Overfitting is the main issue with any model and results in poor performance on the test data set. Overfitting can be addressed using techniques like: Reducing the network size Adding weight regularisation Adding dropout The universal workflow of machine learning Defining a problem and assembling a dataset Choosing a measure of success Deciding on an evaluation protocol Preparing the data Developing a model that works better than a baseline Scaling up: developing a model that overfits Regularizing the model and tuning hyperparameters Choosing the last-layer activation and loss function for a model (problem type / last-layer activation / loss function): binary classification / sigmoid / binary_crossentropy; multiclass single-label classification / softmax / categorical_crossentropy; multiclass multi-label classification / sigmoid / binary_crossentropy; regression / none or sigmoid / mse. With this, we come to the end of part 1 of the book. Chapter 5) Deep learning for computer vision This chapter deals with computer vision problems like image classification. Instead of a model of fully connected dense layers, a convolutional neural network (CNN) is used for computer vision problems. 
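Before moving on, here is a small sketch of the Chapter 4 preprocessing ideas above: a three-way split, with normalization statistics computed on the training set only so that no information about held-out data leaks into training. Array sizes here are illustrative:

```python
import numpy as np

# Dummy dataset standing in for real features (sizes are illustrative).
data = np.random.random((1000, 13))
x_train, x_val, x_test = data[:700], data[700:850], data[850:]

# Compute normalization statistics on the training set only, then apply
# the same statistics to the validation and test sets.
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)
x_train = (x_train - mean) / std
x_val = (x_val - mean) / std
x_test = (x_test - mean) / std
```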
Convnets From the application point of view, you don’t need to go deeper into understanding how a convnet works. But if you are interested, you can watch this video: https://www.youtube.com/watch?v=FmpDIaiMIeA&t=105s Using a pretrained convnet Instead of using a CNN made from scratch by us, we can use a CNN pretrained on ImageNet, a dataset of over 1.4 million labelled images across 1,000 different classes. We use feature extraction to reuse the features such a network has already learned. Steps to make the best model for computer vision problems (a sketch appears further below): Extract features with a pretrained CNN. Unfreeze some output-side layers of the CNN and train them. Use data augmentation to prevent overfitting. Fine-tune the hyperparameters. This chapter also visualises what the CNN learns. We can visualise intermediate activations, which is helpful for understanding what’s going on and for fine-tuning. Chapter 6) Deep learning for text and sequences Recurrent neural networks (RNNs) and 1D convnets are useful when dealing with text or sequence data. Working with text data As we discussed before, neural networks take only tensors as input. Text data also needs to be converted into vectors before providing it to the neural network. Vectorization of text can be done in multiple ways: Segment text into words, and transform each word into a vector. Segment text into characters, and transform each character into a vector. Extract n-grams of words or characters, and transform each n-gram into a vector. These different segments, which can be words, characters or n-grams, are called tokens. Breaking text into tokens is called tokenisation. The two major ways to connect a token with a vector are one-hot encoding and token embedding. One-hot encoding We can understand it with an example. We have a text containing 1000 different words. We can number them from 1 to 1000. Now each word has a 1000x1 vector, which takes the value 1 at exactly one position and zero at all other positions. Hence, all words have a unique vector. Word embedding Word embeddings are lower-dimensional dense vectors whose values are learned from data. They are similar to the weight matrix of any neural network. We can use a separate neural network to learn word embeddings before feeding data into the RNN, or we can use an embedding layer before the RNN. Recurrent neural networks and long short-term memory (LSTM) As I said for CNNs, you don’t need to go deep into the methodology to apply an RNN. But if you are interested, you can watch this video: https://www.youtube.com/watch?v=WCUNPb-5EYI Further in this chapter, a temperature forecasting problem is solved. Dataset: jena_climate. You can go through this chapter if you are working on time series data or text sequences. Chapter 7) Advanced deep learning best practices This chapter is mainly for research purposes. Advanced network architectures are explained in this chapter. An introduction to TensorBoard is also given for analysing models and fine-tuning. We can build multi-input and multi-output models using the Keras API. This chapter goes deep into deep learning. If you understood all the chapters before this, then you can try this one. I didn’t understand this chapter completely, but it gave me exposure to the advanced techniques that can be learned. Chapter 8) Generative deep learning We saw some classical problems until now, like classification and regression. But deep learning can also be used to generate artwork. It can be used to write a script, create a painting, generate a song, etc. 
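Backing up to Chapter 5's pretrained-convnet recipe, here is a minimal sketch using VGG16 from keras.applications as a frozen convolutional base. The input shape and the small classifier head are illustrative choices under the assumption of a binary image task, not the book's exact code:

```python
from keras.applications import VGG16
from keras.models import Sequential
from keras.layers import Flatten, Dense

# Convolutional base pretrained on ImageNet, classifier head removed.
conv_base = VGG16(weights='imagenet', include_top=False,
                  input_shape=(150, 150, 3))
conv_base.trainable = False  # freeze: reuse the learned features as-is

# Stack a small new classifier on top of the frozen base.
model = Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # e.g. a binary image classifier

model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])
```

To fine-tune later, you would unfreeze a few of the top convolutional layers of conv_base and retrain with a low learning rate, as the steps above describe.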
Text generation with LSTM Did you read the news that a neural network wrote a script for the next season of Game of Thrones? Yes, some researchers did it. They used the previous novels as data to generate the next in the series. Personally, I find this application of deep learning the most interesting. This section uses an LSTM to create a text generation model (a sketch follows below). It is interesting to read and apply. DeepDream Google released it in 2015. It is an artistic image modification technique which uses the representations learned by a CNN. Google it and you will see some really cool images. Neural style transfer How about painting something like Van Gogh? You can do this using neural style transfer. It basically extracts content from one image and style from another image to give a combined image. If you have used PRISMA, then you can relate to it. Again, google it for some cool examples. After explaining the coding for these two cases, this chapter also explains image generation using variational autoencoders, which can be used for image editing and generation. If you are interested in image generation or image editing, then you can go through this section. Generative adversarial networks This is an advanced network for generating fairly realistic images. It requires heavy computational power. The concept is to make two neural networks work together: a generator network produces an image, and an adversary network gives feedback about the authenticity of the image. Both networks try to outdo each other, which results in a fairly realistic image. Chapter 9) Conclusions Gives a brief description of everything covered in the book, and discusses the future, limitations, and risks of deep learning. Further reading arXiv Sanity Preserver ( http://arxiv-sanity.com ) Keras online documentation ( https://keras.io ) Keras source code (https://github.com/fchollet/keras)
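For the LSTM text-generation section, here is a minimal sketch of the kind of character-level model described; maxlen and num_chars are illustrative placeholders for the window length and vocabulary size:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Character-level language model: given a window of maxlen one-hot
# encoded characters, predict a distribution over the next character.
maxlen, num_chars = 60, 57  # illustrative placeholders

model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, num_chars)))
model.add(Dense(num_chars, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# Generation then alternates between fitting on windows sliced from the
# source text and sampling the next character from model.predict(...)
# to extend a seed string one character at a time.
```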
Deep learning with Python
70
deep-learning-with-python-15e4fa47c609
2018-07-12
2018-07-12 05:45:45
https://medium.com/s/story/deep-learning-with-python-15e4fa47c609
false
3,228
Coinmonks is a technology focused publication embracing all technologies which have powers to shape our future. Education is our core value. Learn, Build and thrive.
null
coinmonks
null
Coinmonks
gaurav@coinmonks.com
coinmonks
BITCOIN,TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,PROGRAMMING
coinmonks
Machine Learning
machine-learning
Machine Learning
51,320
Aman Chaudhary
null
d2ff1bbe91b
aman61197
6
2
20,181,104
null
null
null
null
null
null
0
null
0
8e9bde78121d
2018-09-21
2018-09-21 07:51:16
2018-09-30
2018-09-30 05:11:25
1
false
en
2018-09-30
2018-09-30 05:11:25
2
15e61d47f843
2.479245
0
0
0
Written by Neil Binalla
5
Fireside Chat at the Applied Analytics Conference Written by Neil Binalla One of the portions I enjoyed most from the two-day workshop, Applied Analytics for Competitive Advantage, was the fireside chat. There, everyone witnessed a dialogue between the participants, who shared the challenges being faced by their respective businesses and industries, and Professors Ikhlaq Sidhu and Paris de l’Etraz, who gave words of wisdom and pieces of advice. It was like a simulation of possible struggles, with possible solutions. I was able to relate to most of the participants, who shared that one of the greatest challenges in having lots of data is not knowing how to maximize or optimize it for business advantage. Some of the limitations mentioned were the absence of a single platform to store data, the lack of tools for collecting data directly from customers, not knowing which information needs to be collected to better understand customers, and a lot more. One of the responses that struck me most came from Prof. Sidhu, about gathering and collecting data on a single platform. According to him, we may have a wrong psychological hypothesis that “when you collect all your data together, you are ready to go and do something that is magical.” So what is the goal of accessing all that data, and what would you do with it? Prof. Sidhu explained that it’s best to do “magic” in small portions first. He suggested that companies should start with small projects and, after proving that it’s worth doing or creating some wins, put more data together to support it. All this can then be tied together before jumping to a bigger task, asking for budget, or getting an approval from the management. Meanwhile, Prof. de l’Etraz added that it’s not always just about collecting data — companies must have a clear vision and purpose before getting into relevant data gathering. Some participants shared that, though their organizations already possess huge amounts of data and have a data collection process in place, they would still want to undergo optimization in order to validate whether they are doing the right thing. According to Prof. Sidhu, this would require a longer discussion, but he had this advice to share — what you worry about today is different from what you want to do and what you need tomorrow. These three factors must be aligned with the company’s brand, which would, in turn, affect the data collection process. Meanwhile, organizations that don’t have direct access to clients or customers must have a technology that will empower their agents or distributors to become successful. This could help create that win-win scenario in building a customer behavior profile. Moreover, create the best possible customer experience by defining an engagement model — a tool that retains customer interaction. Other questions raised pertained to applications to improve competitor understanding, the use of analytics for millennial employees’ productivity and behavior, and product improvement and optimization, to name a few. The conference gave the participants an understanding of artificial intelligence and machine learning — from coding and the significance of machine learning tools down to the business perspective — which transformed these concepts from being just buzzwords into real knowledge. This knowledge can then help us communicate and/or build data strategies for disruptive impact and competitive advantage.
Please visit and join the John Clements Talent Community About the author: Neil Binalla is the finance controller and director of John Clements Consultants, Inc. He has been with the company for 15 years. Prior to John Clements, Neil worked at PLDT and SGV & Co. He is a BS Accountancy graduate of the Philippine School of Business Administration.
Fireside Chat at the Applied Analytics Conference
0
fireside-chat-at-the-applied-analytics-conference-15e61d47f843
2018-09-30
2018-09-30 05:11:26
https://medium.com/s/story/fireside-chat-at-the-applied-analytics-conference-15e61d47f843
false
604
Discover Your Full Potential with Looking Glass, a Publication from John Clements
null
johnclementsph
null
John Clements Lookingglass
jcdigitalrenewal@gmail.com
the-looking-glass
LEADERSHIP,CAREERS,MANAGEMENT AND LEADERSHIP,PROFESSIONAL DEVELOPMENT,PERSONAL GROWTH
JohnClementsPH
Machine Learning
machine-learning
Machine Learning
51,320
Shiela Manalo
Writer|Graphic Artist|Video Editor|Musician
8dbb2651e54f
iamsimone02
64
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-16
2018-08-16 10:30:47
2018-08-16
2018-08-16 10:52:05
13
false
en
2018-08-16
2018-08-16 10:56:57
20
15e6467e2bfc
9.335849
1
0
0
Customers want your business to use Artificial Intelligence (AI) to improve their experience and make their life easier — even if they…
5
The 6 Stages In The Evolution of AI and Customer Experience Customers want your business to use Artificial Intelligence (AI) to improve their experience and make their life easier — even if they don’t know what it is or what it does. 8 out of 10 businesses have either already implemented AI (37%) or are planning to adopt it by 2020 (41%). They understand that they must enable AI-powered experiences to better serve customers and to keep up with competitors. But even with adoption and interest being as high as it is, we’re just at the beginning of the AI journey. In this article, we take a look at how the 6 evolutionary stages of AI are significantly shaping new customer experience expectations. Let’s dive in. Stage 1: Curation When you type anything into Google, you’re met by a barrage of search results. I just typed “Artificial Intelligence” into the search engine and was met by a grand total of 330 million results. Talk about information overload, right? Not quite. Studies show that less than 10% of us read past Google’s first page of results, which means that we aren’t actually overloaded with information. We’ve put our trust in Google’s algorithm and conditioned our brains to get what we need from the top 3–5 search results on the first page, or we refine our search. This wasn’t always the case. When the internet was young, the likes of Google and AOL tried to curate the right content for the right search query, but they often missed the target. However, Google has become exceedingly good at giving us what we want by matching the best content to each query — and AI has helped. What does this mean for the customer experience? Fewer frustrating experiences, as well as the feeling of serendipity, discovery, and enjoyment while shopping. Digitally native retailers are already setting new standards by capturing and using data to create highly curated experiences through the use of AI. Instead of exhausted customers wasting their time clicking through pages of products they have no interest in, they are able to discover unique, interesting products that match their tastes. The recent Capgemini report, “The Secret to Winning Customers’ Hearts with Artificial Intelligence: Add Human Intelligence”, revealed that a positive AI experience caused 38% of shoppers to purchase more, and a quarter of those increased their spend by up to 10%. Example: Fashion retailers — Zalando & Stitch Fix Curated shopping is one of the firm’s most important projects for the next 12 to 18 months, says Zalando CEO Rubin Ritter. The German fashion juggernaut offers 300,000 new products each year, potentially overwhelming the company’s 22 million active customers. But by analyzing both shopping and search trends, Zalando learns a lot about a customer’s preferences and is able to curate their experience. If a user’s browsing behavior indicates that they’re into sports, Zalando will provide relevant imagery to inspire them. If a user shows interest in dark colors, they’ll be taken straight to those types of products. Source: Introducing the AI + Design Series — A designer’s journey to discovery, by Vilma Sirainen, Senior Product Designer at Zalando. Curated shopping has quickly gained traction, particularly in the fast fashion industry. Stitch Fix, an online subscription and personal shopping service in the US, is another example of a fashion retailer using artificial intelligence to curate products for customers. The Stitch Fix Algorithms Tour shows some of the ways in which the company uses AI and data science.
Read more: Curated Shopping — How it Works and How Successful it is Stage 2: Customized Information Picture it: It’s 7 am and time for work, but you’re anxious about being late. You were 15 minutes late yesterday and were given a warning by your boss. It wasn’t your fault — the traffic was awful! What do you do to keep it from happening again? In 2018, you can turn to an AI-powered app on your smartphone to give you the lowdown on the fastest way to work. Just like the Waze traffic app. It uses AI to determine the best route for your commute and uses machine learning to learn your usual driving patterns and when the traffic on your usual routes is unusually heavy. The recent “Where to park” feature also provides users the opportunity to see any available parking lots nearby. What does this mean for the customer experience? Better customer service and more useful answers and solutions. With AI-based machine learning and natural language processing, businesses are able to understand customer queries and intent so that they can respond more accurately and in real time. For example, when a customer asks, “when will my order arrive?”, a voice-enabled assistant or chatbot will not take them to an FAQ page with generic delivery times, but will instead provide customized information without any delay. Example: China Merchants Bank uses WeChat Messenger to handle millions of customers One of the largest credit card issuers in China, China Merchants Bank, has implemented an AI bot to deal with issues such as payments and credit card balances. The bot quickly gets customers the information they need. At 1.5 to 2 million conversations daily, it handles an inquiry volume that would typically require thousands of additional employees to answer. Source: Why You Can’t Avoid WeChat As Part of Your China Digital Strategy Stage 3: Recommendations Recommendations are essential for eCommerce stores. They are proven to boost conversions and increase the number of cross- and upsells. And it’s clear why: people would be overloaded by choice without support to find and choose the right products and services. Using AI to make recommendations is the top scenario users feel comfortable with, no matter the industry. Source: What Consumers Really Think About AI Amazon was one of the first companies to provide recommendations based on purchase history and viewed items — and to do it well. 35% of its revenue is generated by its recommendation engine. The company’s competitive advantage is led by AI. Their new machine-learning infrastructure drives the product recommendations system, helping it be smarter in suggesting what to read next, what items to add to a shopping list, and what movie to watch tonight. What does this mean for the customer experience? Better recommendations based on actual needs, just like from a friend. While the quality of recommendations has improved over the years, it’s still lacking since it only uses implicit data such as purchase history or viewed products. For more intelligent and relevant product recommendations, you must combine implicit data with explicit data. Explicit data is shared by the customer and helps businesses understand their real needs, brands they love, colors, styles and more. However, gathering this data is difficult from a UI and user experience standpoint. You can’t get it by asking users to fill out forms. This would feel more like filling out your life history at the hospital and would not make for a positive experience.
The solution is to engage customers in a conversation just as a store owner or shopkeeper would. AI and machine learning are now capable of having these meaningful conversations with customers. Businesses can learn which questions to ask, when, and in which order to deliver conversational experiences and truly personalized recommendations. Example: Clairol uses an AI digital sales assistant to provide customers with tailored recommendations Clairol uses an AI digital sales assistant that simplifies choices for customers to reduce friction and drive sales. The solution engages users in a natural conversation to find out about their hair type, length and goals. It then analyzes the customer’s responses, identifies suitable products, and makes more relevant recommendations based on their needs. Given the flexibility of the technology, businesses can implement these AI-driven dialogues not only in web interfaces, but also in chatbots and voice-enabled assistants. Find more examples here: Digital sales assistants used by Canon, Sonos, Mizuno, and more Stage 4: Predictions Today, predictive analytics is less common than the above three stages. Companies have been using it for several years to varying degrees of success. While it’s more difficult to nail, it’s at the heart of what AI is and does. Take, for example, a self-driving car. For this technology to be a success, it needs to be able to predict what a good driver would do in a specific situation. Predictive analytics is informing a number of sectors. Debt collection apps allow the collector to target debtors more likely to pay faster. We also see predictions at work in inventory management apps, where AI is used to make more accurate forecasts. FutureMargin, for example, is a Shopify App that uses AI to help businesses optimize prices, profit, and inventory by a variety of factors, including predicting demand and seasonal variations in products. What does this mean for the customer experience? Proactive, hyper-personalized and extremely relevant interactions. Predictive analytics not only helps businesses increase profits and improve margins, it also lets them understand how to engage with customers and increase loyalty in more relevant and personalized ways throughout different touchpoints in the customer lifecycle. The use cases are plentiful; these are only some of the ways predictive analytics helps create exceptional customer experiences like never before: Marketing: deliver the right message at the right time Sales: identify and target high-value customers Service: predict user behavior to deliver proactive customer support Stage 5: Automation The next stage in the evolution of AI is automation. Many of our daily tasks are already being automated. AI will likely reach a point where even shopping for your basic necessities will become entirely automated. That said, there is as yet no specific time frame for when we will reach the advanced stage of full automation. What does this mean for the customer experience? In an automated world, on the basis of predictive analytics and sensor data, restocking becomes completely autonomous. An often-cited example to help people envision the future is a smart fridge. It knows what type of milk you drink (almond milk? cow milk? soy milk? rice milk?), keeps track of the amount you use every day, and predicts when you’ll have run out so that it can place an order just in time. Self-driving vehicles will prepare your order and a drone will deliver the milk to your front door.
Never again will there be a Sunday without milk for your morning coffee. Sounds futuristic? It does. But the building blocks are already in place to make this happen. The jury is still out on whether customers want to give up this much control. New survey results from the Integer Group show that a majority of respondents would like Alexa to find great deals on regular purchases, remind them when they need to restock and create shopping lists for them. But when it comes to actually placing an order for everyday items, only 1 in 5 would be comfortable letting AI do their shopping for them. Stage 6: Contextual Analysis The final phase for AI is contextual analysis. Like automation, it is still some way off into the future, but once it’s here it will be a game changer. This is how it will work: Let’s imagine you’re feeling pretty bummed after a bad day. As soon as you’re home, you shower, eat and log into Netflix. Because you’re now in 202x and contextual analysis is in full swing, Netflix is able to suggest the perfect pick-me-up movie! At the moment, the recommendations of Netflix and many other streaming apps are based on your past viewing behavior. Once it is able to analyze context, its recommendations will improve even more. What does this mean for the customer experience? By using AI, businesses will not only be able to understand the context of different life events, they will also be able to understand emotions. Artificial emotional intelligence, or Emotion AI, can be used to detect non-verbal cues, such as facial expressions, gestures, body language and tone of voice. It will allow businesses to pick up on a customer’s current mood, informing how to respond to deliver the optimal experience. But today’s consumers aren’t too keen on the idea. In their annual survey “Creepy or Cool”, RichRelevance asked consumers what they think is the creepiest technology. 58% say that “emotion detection technology that adapts your shopping experience to your mood” is off-putting. All in all, the use of AI has become inevitable. It has become a key part of the success of any online business and will continue to be in the future. Customers want it and businesses need to get to grips with it. Organizations that ignore it will see sales and engagement dwindle, and they’ll start to look unfashionable and far from modern. This article was originally published on Guided-selling.org.
The 6 Stages In The Evolution of AI and Customer Experience
6
the-6-stages-in-the-evolution-of-ai-and-customer-experience-15e6467e2bfc
2018-08-16
2018-08-16 10:56:57
https://medium.com/s/story/the-6-stages-in-the-evolution-of-ai-and-customer-experience-15e6467e2bfc
false
2,103
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
SMARTASSISTANT
The all-in-one Digital Advice Suite that helps businesses to create and launch interactive digital advisors and provide great customer experiences.
222389991ea1
smartassistant
12
80
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-30
2018-08-30 07:44:58
2018-08-30
2018-08-30 08:44:14
1
false
en
2018-08-30
2018-08-30 08:44:14
2
15e70be7b7b0
1.396226
0
0
0
Is face recognition technology reliable? Innoadmin
4
Is face recognition technology reliable? Innoadmin Artificial intelligence can spot a face in a huge crowd with great accuracy, but when it is set in front of a larger cluster of images it does not perform as well. Biometric systems have become part of our daily lives and the tasks we carry out. Face recognition technology has emerged as a solution to present-day needs for identification and verification of identity claims. Beating fingerprint readers and eye scanners, face recognition technology analyzes the characteristics of a person’s face from images captured with a video camera. This method involves no delays and leaves the subject entirely unaware of the process. Several factors limit the effectiveness of facial-recognition technology: The relative angle of the target’s face The camera angle has a strong influence on how a face is processed. When a face is enrolled in recognition software, multiple angles are needed, including profile, frontal, 45-degree and more, to ensure the most accurate resulting matches. The more direct the image and the higher its resolution, the higher the score of any resulting matches. Data storage and processing Even though HD video is low-resolution compared to digital camera images, it occupies a huge amount of disk space. To remain efficient, facial recognition systems typically process only 10 to 25% of the video. To minimize processing time, companies use clusters of computers. Until further advances in technology, this obstacle will remain. Summing it up, every new facial recognition technology carries huge prospects and promise for future evolution. Clearly, privacy concerns surround this technology and its use. In a couple of years, systems will be able to process gestures, expressions, palm and ear prints, voice and scent signatures. Originally published at www.innostack.in.
Is face recognition technology reliable?
0
is-face-recognition-technology-reliable-15e70be7b7b0
2018-08-30
2018-08-30 08:44:15
https://medium.com/s/story/is-face-recognition-technology-reliable-15e70be7b7b0
false
317
null
null
null
null
null
null
null
null
null
Privacy
privacy
Privacy
23,226
INNOSTACK | The Training and Networking Hub
Innostack is the perfect place where you can work as an Intern, Get assistance for startups and grow as you get the best technological training from us.
c102fa86d1a9
neetivarma2017
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-11
2018-03-11 03:13:51
2018-03-11
2018-03-11 03:55:42
1
false
en
2018-03-11
2018-03-11 03:55:42
0
15e75b1c6605
1.818868
0
0
0
Flow cytometry is a very powerful tool useful within the quantitative assessment and characterization of large populations of data points…
5
Automated Analyses of High-throughput Flow Cytometry Data Flow cytometry is a powerful tool for the quantitative assessment and characterization of large populations of data points. These data points may represent rare needle-in-the-haystack events, ranging from cancer cells circulating in the human bloodstream to defective components built from nanoparticles, and even the development of biologics within the pharmaceutical industry. The techniques underpinning the analysis of flow cytometry data are useful in revealing insights from populations ranging into the millions. However, they rely on manual gating of clustered events, an approach subject to human error as well as subjective decision making. Within the last several years, there has been significant development, within academic circles, of automated techniques for gating of clusters derived from flow cytometry data. Such automated approaches have been vetted through peer review, published, and also packaged as software libraries for use within popular analytics platforms, such as R and Python, rendering their functionality quite accessible from a practical standpoint. This recent availability of automated gating algorithms has the potential to catalyze significant developments throughout a number of fields, ranging from biofuels and drug discovery to cancer detection. The application of high-throughput automated analyses of data from flow cytometry assessments may reduce the time necessary to exclude dead-end experiments, enhance lead generation, and advance the field of personalized cancer research as well as companion diagnostics, which are dependent on flow cytometry. OpenCyto, a framework available for use within the R platform, is a collection of open-source packages from the BioConductor suite serving as infrastructure for flow cytometry data analysis. The collection of packages includes tools for the import of classic data files, visualization of cytometry data, as well as tools used in gating based on published statistical methods. The OpenCyto framework may be used, in conjunction with a consultant well-versed in automation and data capture, in creating and delivering valuable high-throughput automated analysis workflows. A small subset of packages within OpenCyto provides access to a wide variety of automated gating methods, in addition to providing flexibility for end-users to integrate their own custom-built gating protocols. Many industries currently leverage flow cytometry to perform quantitative assessments within their respective applications. OpenCyto, in combination with recently published automated gating methods, could help unleash significant value from processes currently being performed manually. Consultants knowledgeable in the fields of automation, flow cytometry, and data science would serve towards implementing and enhancing the value proposition of a high-throughput automated approach towards the analysis of data from flow cytometry.
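OpenCyto itself is an R/Bioconductor framework, so the sketch below is only a stand-in illustration of the underlying idea of automated gating: fitting a statistical model to event clusters instead of drawing gates by hand. It uses a Gaussian mixture model in Python on synthetic two-channel data; the channel interpretation and population counts are assumptions, not anything from the article:

```python
# Illustrative sketch only (OpenCyto is R-based; this is a generic
# model-based auto-gating idea shown with scikit-learn in Python).
import numpy as np
from sklearn.mixture import GaussianMixture

# events: N x 2 array of, e.g., forward- and side-scatter per event
# (synthetic data standing in for a real FCS file).
rng = np.random.default_rng(0)
events = np.vstack([rng.normal(loc, 0.3, size=(500, 2))
                    for loc in ([1, 1], [3, 2])])  # two synthetic populations

gmm = GaussianMixture(n_components=2, random_state=0).fit(events)
labels = gmm.predict(events)                # automated "gate" per event
proportions = np.bincount(labels) / len(labels)
print(proportions)                          # population frequencies
```

The point of the sketch is the workflow, not the specific model: the cluster assignments play the role of gates, and population frequencies fall out automatically and reproducibly rather than depending on a hand-drawn polygon.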
Automated Analyses of High-throughput Flow Cytometry Data
0
automated-analyses-of-high-throughput-flow-cytometry-data-15e75b1c6605
2018-06-10
2018-06-10 14:44:54
https://medium.com/s/story/automated-analyses-of-high-throughput-flow-cytometry-data-15e75b1c6605
false
429
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Josh Kunken
http://www.joshkunken.com
d8aef6736fe4
joshkunken
21
101
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-14
2018-04-14 15:30:26
2018-04-14
2018-04-14 15:34:20
3
false
zh-Hant
2018-04-15
2018-04-15 14:45:14
0
15e81d66cbc0
0.817925
1
0
0
In April 2018, Tencent and China Resources Group signed a comprehensive strategic cooperation framework agreement. The two groups will cooperate across the board on smart city, property management, healthcare, cloud, big data and smart retail. This makes me look forward to the changes coming to the MixC malls and the China Resources residential project series.
5
AI Disruption in Hong Kong Real Estate? In April 2018, Tencent and China Resources signed a comprehensive strategic cooperation framework agreement In April 2018, Tencent and China Resources Group signed a comprehensive strategic cooperation framework agreement. The two groups will cooperate across the board on smart city, property management, healthcare, cloud, big data and smart retail. This makes me look forward to the changes coming to the MixC malls and the China Resources residential project series. Tencent founder Pony Ma hit the nail on the head at the summit: "The pain points of our industry are still floating in the air, and if they stay in the air I see no future in it; they must land, they must merge with traditional industries." It happens that this week I had dinner with a big data expert, and the discussion gave me many takeaways. Why has AI disruption in the property sector, especially in Hong Kong, not visibly kicked off? Store chatbots (1) No short-term urgency. All innovation must be driven by necessity; only urgency pushes things forward. If it isn't necessary, who would go looking for trouble? Real estate in both China and Hong Kong can fairly be described as "making money lying down." Even without big data analytics to support target marketing, even without AI chatbots to build smarter customer service and estate agency services, even without real-time sensors to monitor building health; without any of these add-ons, the most traditional development model is already extremely profitable and satisfies investors. So only very, very cash-rich and forward-looking key market players (such as China Resources Group) are willing and able to put AI higher on the priority list. (2) There are plenty of AI entry points today; but will Hong Kong people accept them? At the start of the wave, there may be little visible short-term value. But I believe this disruption will fully kick off in 3 to 5 years. For example, the data a shopping mall can collect includes: foot traffic recorded via wifi signals, Face ID, personal purchase records (VIP schemes such as the SHKP Club hold this data), dwell time in specific shops and restaurants tracked by positioning (for example, the mall's real-time navigation), from which customer spending preferences can be inferred, and even customers' moods and instant reactions (through facial expression analysis, where SenseTime is currently number 1). All of this can then be used flexibly for target marketing or personalised offer schemes. In China, smart homes have already begun to appear in the residential market. Developers sell flats bundled with a smart home system (that is, connected to all the home appliances). It can, for example, cook rice on a schedule, switch on the air conditioning for you, or adjust the lighting in real time. This bundling offers a fresh selling point and makes it natural to charge a certain premium on the sale price. In some countries facing ageing populations (such as Singapore), senior housing has also begun to incorporate AI-driven medical services (such as infrared alert systems). Could this work in Hong Kong? Residential units here are mostly sold as bare shells, or with only bathroom and kitchen fit-outs. If smart furnishings and fit-out are bundled in, buyers can no longer customise their own interior design. The more important question: with Hong Kong housing prices at an all-time high, will anyone still be willing (or able) to pay this "smart home" premium? Smart home is still a new technology in Hong Kong, and Hong Kongers' acceptance of new technology is notoriously low. Hong Kong people pursue cheapness, convenience and efficiency in every aspect of life, yet getting them to change how they live remains hard. We rely on the physical Octopus card, we are still used to paying in cash, we still prefer face-to-face customer service. New technology will need a long time and much effort to break into the Hong Kong market. In addition, Hong Kong is simply too small. Compared with new projects, retrofitting AI upgrades into existing buildings (such as old malls) is far more expensive, inevitably involving large sums for partial or full renovation. Would any Hong Kong developer be willing to make that investment? (3) The biggest obstacle is online privacy and security. In March 2018, Baidu CEO Robin Li said Chinese users are willing to trade privacy for efficiency: "If they can trade privacy for convenience or efficiency, in many cases they (Chinese people) are willing to do so." In the US, big data is very advanced in theoretical research, but American companies face obstacle after obstacle when they try to manipulate user data; just look at the Facebook example. Americans are extremely sensitive about personal data privacy. By contrast, as Baidu's Robin Li suggests, China remains very open in this respect. Companies and startups can openly run big data projects under the banner of "understanding users better" (for example, ride-sharing app systems can use big data to compute a pricing model for each customer, target marketing, and so on). In short, the theory is in the US; the empirical results are all in China. Savvy, highly alert Hong Kongers are likewise unlikely to simply consent to companies manipulating their personal data.
AI Disruption in Hong Kong Real Estate?
50
香港地產的ai-disruption-15e81d66cbc0
2018-05-18
2018-05-18 05:12:27
https://medium.com/s/story/香港地產的ai-disruption-15e81d66cbc0
false
71
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Impromptuz
A Hong Kong girl working in Massachusetts @ HKG
e4c24a136a4
beyoni
13
38
20,181,104
null
null
null
null
null
null
0
null
0
5e5bef33608a
2018-07-21
2018-07-21 13:46:14
2018-07-21
2018-07-21 14:14:28
3
false
en
2018-07-21
2018-07-21 14:15:00
1
15e84b8928df
1.553774
0
0
0
Recap from Day 011
5
100 Days Of ML Code — Day 012 Recap from Day 011 On Day 011 we explored Support Vector Machines (SVMs) on a deeper level. We saw how SVMs work under the hood, with loads of examples. Today, we will start looking at common regression algorithms. Common Regression Algorithms Linear Regression Linear regression is a statistical modeling technique used for finding a linear relationship between a target and one or more predictor variables. There are two types of linear regression: simple and multiple. “In simple linear regression a single independent variable is used to predict the value of a dependent variable. In multiple linear regression two or more independent variables are used to predict the value of a dependent variable. The difference between the two is the number of independent variables.” Source: MathWorks, 90221_80827v00_machine_learning_section4_ebook_v03 pdf Best Used… When you need an algorithm that is easy to interpret and fast to fit As a baseline for evaluating other, more complex, regression models Nonlinear Regression “Nonlinear regression is a regression in which the dependent or criterion variables are modeled as a non-linear function of model parameters and one or more independent variables.” Models are called nonlinear regression because the relationships between the dependent and independent parameters are not linear. Source: MathWorks, 90221_80827v00_machine_learning_section4_ebook_v03 pdf Best Used… When data has strong nonlinear trends and cannot easily be transformed into a linear space For fitting custom models to data You made it to the end of day 012. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. Reference http://www.statisticssolutions.com/regression-analysis-nonlinear-regression/ MathWorks, 90221_80827v00_machine_learning_section4_ebook_v03 pdf
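To make the linear/nonlinear distinction concrete, here is a minimal sketch of my own (not from the article) fitting both kinds of model to one-dimensional data; the exponential form y = a·exp(b·x) is an arbitrary choice of nonlinear model:

```python
# Minimal sketch: a linear baseline vs. a custom nonlinear fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import curve_fit

x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(0.5 * x) + np.random.normal(0, 0.5, x.shape)  # nonlinear trend

# Linear regression: fast to fit, easy to interpret, a good baseline.
lin = LinearRegression().fit(x.reshape(-1, 1), y)

# Nonlinear regression: fit a custom model y = a * exp(b * x).
def model(x, a, b):
    return a * np.exp(b, dtype=float) ** x if False else a * np.exp(b * x)

(a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), x, y, p0=(1.0, 0.1))
print(lin.coef_, (a, b))
```

The linear fit plays exactly the baseline role described above: if the nonlinear model does not beat it meaningfully, the simpler, more interpretable model is usually the better choice.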
100 Days Of ML Code — Day 012
0
100-days-of-ml-code-day-012-15e84b8928df
2018-07-25
2018-07-25 11:58:40
https://medium.com/s/story/100-days-of-ml-code-day-012-15e84b8928df
false
266
Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
becominghuman.ai
BecomingHumanAI
null
Becoming Human: Artificial Intelligence Magazine
team@chatbotslife.com
becoming-human
ARTIFICIAL INTELLIGENCE,DEEP LEARNING,MACHINE LEARNING,AI,DATA SCIENCE
BecomingHumanAI
Machine Learning
machine-learning
Machine Learning
51,320
Jehoshaphat Abu
A polymath, an advocate of STEAM education. I write about Music | Computing | Design and maybe life and the world in general
62d9f8742a1e
jehoshaphatia
189
319
20,181,104
null
null
null
null
null
null
0
null
0
50974aafa33c
2018-04-15
2018-04-15 02:45:31
2018-02-14
2018-02-14 03:30:11
0
false
en
2018-04-15
2018-04-15 02:47:54
5
15f27dd4a2e0
1.071698
0
0
0
null
2
The Blueprint for Developers to Get Started with Machine Learning Many developers (including myself) have included learning machine learning in their new year resolutions for 2018. Even after blocking an hour every day in the calendar, I am hardly able to make progress. The key reason for this is confusion about where to start and how to get started. It is overwhelming for an average developer to get started with machine learning. There are many tutorials, MOOCs, free resources, and blogs covering this topic. But they are only adding to the confusion by making it look complex. Many a time, we wish there were one textbook that covered the most essential parts of the subject, giving us the right level of confidence. Fortunately, when C and UNIX were invented, we didn’t have the world wide web. Users and developers relied on a limited set of manuals and textbooks authored by the creators. In the current context, learning anything new is becoming increasingly difficult. The overwhelming number of resources available for an emerging technology like machine learning and deep learning is intimidating. After spending quite a bit of time understanding the lay of the land, I have compiled a list of essential skills and technologies for developers to embrace machine learning. This blueprint is confined to supervised learning and intentionally excludes deep learning. Before moving to deep learning and neural networks, it’s extremely important for developers to understand and appreciate the magic of machine learning. I will share a similar blueprint for deep learning in one of the upcoming articles in this series. Read the entire article at The New Stack Janakiram MSV is an analyst, advisor, and architect. Follow him on Twitter, Facebook and LinkedIn.
The Blueprint for Developers to Get Started with Machine Learning
0
the-blueprint-for-developers-to-get-started-with-machine-learning-15f27dd4a2e0
2018-04-15
2018-04-15 02:47:55
https://medium.com/s/story/the-blueprint-for-developers-to-get-started-with-machine-learning-15f27dd4a2e0
false
284
Analyst | Advisor | Architect
null
null
null
janakirammsv
null
janakirammsv
null
null
The New Stack
the-new-stack
The New Stack
100
Manu Kapoor
null
1e90ffaee714
greatmj
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-25
2018-08-25 02:50:17
2018-08-25
2018-08-25 02:52:12
1
false
ru
2018-08-25
2018-08-25 02:52:12
33
15f5f872150e
4.124528
1
0
0
Author: Sophie Kleber
5
Emotional AI Author: Sophie Kleber Illustration: WUNDERFOOL/GETTY IMAGES In January 2018, Gartner vice president of research Annette Zimmermann declared: "By 2022, your personal device will know more about your emotional state than your own family." Just two months later, researchers at Ohio State University published a study claiming that their algorithm can recognize emotions better than people can. Artificial intelligence systems will soon recognize, interpret, process and reproduce human emotions. Combined analysis of facial expressions and voice, paired with deep learning, already allows AI to decode human emotions for market research and election campaigns. Experts estimate that the affective computing market, home to emotion-analytics software developers such as Affectiva, BeyondVerbal and Sensay, will grow to $41 billion by 2022, as players like Amazon, Google, Facebook and Apple join the race to decode users' emotions. Progress not in managed statistical data but in affective computing will help brands connect with customers on a far deeper personal level. But reading human emotions is a delicate matter. Emotions are deeply personal, and users will worry about privacy violations and the possibility of being manipulated. So before deploying the technology, business leaders should ask themselves the following questions. 1. What are you offering? Does your value proposition allow for the use of emotions? Can you justify using emotional cues to improve the user experience? 2. What are your customers' emotional intentions when interacting with the brand? What is the nature of the interaction? 3. Have users given explicit permission to analyze their emotions? Do users control their data, and can they revoke permission at any moment? 4. Is your system smart enough to read emotions accurately and respond to them? 5. What do you risk if the system fails? How dangerous is that for the user and/or the brand? Leaders should also keep in mind the applications of emotional AI that are available today. They fall into three categories. Using emotions to adjust a response This application of AI acknowledges the presence of emotions and factors them into decision making, yet the service's output remains entirely devoid of emotion. Interactive voice response systems and chatbots can route customers into the right service flow faster and more accurately if they take emotions into account. For example, when the system recognizes that a user is angry, it routes them to a different flow or to a human operator. Affectiva's automotive AI system AutoEmotive and Ford are preparing to bring to market software capable of recognizing human emotions such as anger or inattention and taking over or stopping the car to prevent an accident or aggressive driving. Security organizations are also turning to emotional AI to identify people in a state of stress or anger. For example, the UK government monitors social media for citizens' reactions to certain topics. In all these cases, emotions play a role in the machine's decision making. Yet the machine still responds like a machine, merely steering people in the right direction.
Targeted emotion analysis for learning In 2009, Philips together with a Dutch bank developed the idea of a "rationalizer bracelet." By measuring a trader's pulse and stress level, it keeps him from making unwise decisions. When users realized they had lost their emotional equilibrium, they paused and reflected before making an impulsive decision. Brain Power's smart glasses, similar to Google Glass, help people with autism better understand the emotions and social cues of others. Wearing the glasses, a person sees and hears special feedback suited to the situation, for example hints about how people express emotions through facial expressions, and even information about their own emotional state. These emotion-analysis systems recognize and interpret emotions, and the findings are reported back to the user for learning purposes. At the personal level, such devices and apps, by supporting a human-machine dialogue in which the user stays in charge, will act much like fitness trackers, except that they assess not physical but emotional state, helping build attentiveness, mindfulness and self-improvement. Emotion-analysis systems for learning are also being tested on groups. For example, they analyze pupils' emotions toward teachers or employees' emotions toward managers. Scaling the technology up may bring Orwell's plots to mind. Such experiments sit at the edge of ethics and raise concerns, since they can violate privacy and stifle creativity and individuality. For this reason, managers need appropriate psychological training to interpret the AI's findings and make adequate adjustments. Imitating and replacing human interaction When smart speakers appeared in American homes in 2014, we began to get used to computers calling themselves "I." Call it a very human error or an evolutionary change, but when machines talk, people form relationships with them. Today there are products and services that use voice user interfaces and the concept of "computers as social partners" to treat and prevent mental illness. They use behavioral therapy techniques and act as a coach for users in crisis. The Ellie program helps soldiers cope with post-traumatic stress, and the chatbot Karim helps Syrian refugees cope with psychological trauma. Digital assistants are even being tasked with helping elderly people cope with loneliness. Everyday applications like Microsoft's XiaoIce, Google Assistant or Amazon's Alexa use social and emotional cues for less altruistic purposes: they win user loyalty by acting like best digital friends. As futurist Richard van Hooijdonk quips, if a marketer "can make you cry, he can make you buy." Participants in the debate over addictive technologies are beginning to scrutinize the intentions behind voice assistants. What does connecting personal assistants to advertising mean for users? In a leaked internal memo, Facebook told advertisers it could detect in teenagers, among other things, feelings of "worthlessness" and "insecurity" and act on them.
Judith Masthoff of the University of Aberdeen says: "I would like people to have guardian angels that could support them emotionally." But reaching that ideal will take a series of experiments (run with collective consent) whose results will show developers and brands the right level of intimacy in our relationships with devices. And a series of failures will define the rules for maintaining trust, privacy and emotional boundaries. The greatest challenge in striking a balance in this field may lie not in developing more effective forms of emotional AI, but in finding people with enough emotional intelligence to build them. About the author: Sophie Kleber is executive creative director at the digital agency Huge, where she explores the possibilities of user interaction. Her interests include screen interfaces, voice user interfaces, and perceptual computing. Her work shapes the activities of some of the world's largest companies, such as IKEA, Under Armour, Goldman Sachs and Warner Brothers. Source
Emotional AI
8
emotional-ai-15f5f872150e
2018-08-25
2018-08-25 02:52:13
https://medium.com/s/story/emotional-ai-15f5f872150e
false
1,040
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Ruslan Gafarov
Founder Malikspace.com
7ddafc1a553d
Gafarov
39
3
20,181,104
null
null
null
null
null
null
0
null
0
b869d78fb0fd
2017-11-05
2017-11-05 17:18:52
2017-11-05
2017-11-05 17:35:29
1
false
en
2017-11-08
2017-11-08 22:50:52
11
15f73f2d0e5d
3.4
385
31
1
Why I won’t surrender my mathematical identity
5
Who gets to be called a mathematician? Why I won’t surrender my mathematical identity My name is Junaid Mubeen, and I am a recovering mathematician. I usually pick up a few laughs — or at least a few groans — with this introduction. It is my light-hearted way of recognising that I no longer earn the stripes of a research mathematician. I am reluctant, however, to surrender the label of mathematician entirely. My mathematical training has shaped my identity and worldview. I take heart when friends and colleagues remark on my distinctly analytical mannerisms. It means they have connected with the essence of who I am and how I think. My formal study of mathematics ceased in 2011 when I completed my doctorate. Informally, I have never stopped thinking and working through maths problems. Some are motivated by work, others by life, and most by nothing in particular. My main reason for pursuing maths is maths itself. The maths I partake in these days is largely recreational; I delight in the everyday puzzles and paradoxes that fill my bookshelf and social media feed. They are far removed from the obscure edges of research mathematics that I invested four years of my life in. Some people take offence at the suggestion that I still have mathematical blood in me. He hasn’t even got a postdoc, they’ll remark, what the hell kind of mathematician is he, anyway?! For the purists, a mathematician is no more and no less than a creator of proofs; one only earns the accolade by architecting previously undiscovered proofs. A cursory look at the past and the future of mathematics reveals just how limiting this criterion is. I look back with wonder at Ramanujan, the man who knew infinity, and so much more. Ramanujan was a creator of mathematics like no other, deriving and extending the knowledge of his day with a primitive textbook his only aid. Some of his creations were new; others, it would turn out, were rediscoveries of truths known to his western contemporaries. Does that make Ramanujan part mathematician, part something else? Surely not; to Ramanujan every result was as inspired as the other, irrespective of whether folks across the globe had already developed their own solutions. Ramanujan was a mathematician whole. That’s not me, but I’ll take the label (source) I look ahead with trepidation because the rigid requirement that mathematicians are those — and only those — who contribute new research does not bode well for humans. The computational proof of the Four Color Theorem was a watershed moment for mathematics, when we first glimpsed the boundary where human intuition ends and brute force computation takes over. In mathematics, the promise (and hype) of Artificial Intelligence takes the form of automated theorem provers that may one day render humans redundant in the quest for mathematical discoveries. Maths is more complex and more abstract than ever; often to the point of alienating all but the patient few who can labour through hundreds of pages of tedium to extract the minutest of insights. Mathematical research may one day become a realm that humans witness from afar with barely a trace of understanding, never daring to venture to its cutting edge. If hype becomes reality, intelligent machines will emerge as the existential threat mathematicians never imagined they’d have to contend with. Should that day ever arrive, I hope I’m not around to witness it. Yet I would remain hopeful for my fellow humans, because mathematics need not be situated at the extremes of established knowledge. 
We can all revel in problems whose solutions are known. Even when humankind has exhausted its capacity to extend its collective knowledge base, as individuals our ignorance is what keeps our mathematical instincts aflame. Problem solving lies between the boundaries of what we know and what we seek. This sweet spot is where we all — novices and experts alike — get to bend and twist what we know to forge new truths for ourselves. Who cares if our discoveries are already known to the rest of the world (or machines, for that matter)? The satisfaction of finding my own solution, of pushing through my own knowledge limits, is as enthralling as the pursuit of ‘new’ proofs promised by research mathematics. Let the machines come; mathematics does not belong to the omnipotent. What kind of mathematician am I? The everyday kind, extending my own personal boundaries of knowledge, still addicted to the search for elegant solutions to intriguing problems. I hope that’s something I never have to recover from. I am a research mathematician turned educator working at the nexus of mathematics, education and innovation. Come say hello on Twitter or LinkedIn. If you liked this article you might want to check out my following pieces: Thinking in the age of cyborgs: An educator’s warning to Elon Musk (hackernoon.com) Discover the mathematician within you with this simple problem: The power of multiple representations (hackernoon.com) I no longer understand my PhD dissertation (and what this means for Mathematics Education): Earlier this week I read through my PhD dissertation. My research was in an area of Pure Mathematics called Functional… (medium.com)
Who gets to be called a mathematician?
2,335
who-gets-to-be-called-a-mathematician-15f73f2d0e5d
2018-06-21
2018-06-21 11:01:21
https://medium.com/s/story/who-gets-to-be-called-a-mathematician-15f73f2d0e5d
false
848
Reimagining the learning and teaching of mathematics
null
null
null
Q.E.D.
fjmubeen@gmail.com
q-e-d
EDUCATION,TEACHING,LEARNING,MATH,MATHEMATICS
null
Education
education
Education
211,342
Junaid Mubeen
Mathematics. Education. Innovation. Views my own.
8c016df0b036
fjmubeen
11,999
234
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-01
2018-05-01 15:42:20
2018-05-01
2018-05-01 15:44:51
0
false
th
2018-05-01
2018-05-01 15:44:51
1
15f7c687bead
0.158491
0
0
0
By the end of this year, #2018, I must be able to build an AI that can play at least one game by itself.
4
2018 Goal By the end of this year, #2018, I must be able to build an AI that can play at least one game by itself. And the easiest game right now is the dino game in Chrome. How I build an AI to play Dino Run: A Deep Convolution Network implementation for Reinforcement Learning (medium.com) Pinning this for now; someone has already written it up. By the end of the year I must be able to read the code for this approach, or I might try other games instead. If I can do that, this year counts as a success. Alright, let's give it a try. Fight on!
2018 Goal
0
เป้าหมายปี-2018-15f7c687bead
2018-05-01
2018-05-01 15:44:53
https://medium.com/s/story/เป้าหมายปี-2018-15f7c687bead
false
42
null
null
null
null
null
null
null
null
null
Deep Learning
deep-learning
Deep Learning
12,189
THAWATCHAI SINGNGAM
From Zero To Hero In AI Mastery
7c32396bbdd4
siam.ai
0
1
20,181,104
null
null
null
null
null
null
0
null
0
6765244bcbd3
2018-09-04
2018-09-04 12:08:11
2018-09-04
2018-09-04 12:20:43
3
false
en
2018-09-04
2018-09-04 12:20:43
1
15f8d24119f0
4.165094
0
0
0
Advanced technologies like AI and machine learning not only reduce the chances of error — thereby improving efficiency — but also go beyond…
3
These 3 Indian startups are making the most of artificial intelligence! Advanced technologies like AI and machine learning not only reduce the chances of error — thereby improving efficiency — but also go beyond human intelligence to offer solutions in areas such as e-commerce, fintech, healthcare, and education. Embibe CEO and founder Aditi Avasthi (third from right) along with her teammates. Photo: Lightbox. Technical hubs like Bengaluru, Delhi-NCR, Mumbai, and Hyderabad are flooded with start-ups working on advanced technologies like artificial intelligence and machine learning. These technologies not only reduce the chances of error — thereby improving efficiency — but also go beyond human intelligence to offer solutions in areas such as e-commerce, fintech, healthcare, education, etc. Here’s a list of three start-ups that are using advanced AI, data mining, and machine learning to disrupt their respective industries. Rubique (Fintech) Manavjeet Singh, MD and CEO, Rubique. Funding: Total funding of $10 million to date from investors, including Kalaari Capital, RSP India Fund, LLC of Japan’s Recruit Co Ltd, Emery Capital, YourNest, and Globevestor. Team Leader: Founded by Manavjeet Singh, who has over two decades of experience in banking, in 2014, Rubique is one of the top emerging fintech companies in India with a customer base of over 200,000 and ‘Phygital’ operations in 190 cities. What is Rubique? Imagine yourself applying for a loan, and not being hassled for every single detail. Rubique matches the right lender with a suitable borrower and simplifies the loan process using AI. “Rubique has leveraged machine learning and AI with big data analytics to build a system that matches borrower with the right lender,” says Manavjeet Singh, CEO and co-founder, Rubique. All credit policies of banks are linked to the Rubique system, and its self-learning algorithm sends borrowers loan offers automatically. If it suits their requirement, processing happens in real time. Rubique’s tech-enabled distribution arm, spread across 190+ cities, helps users with documents to complete the loan process in minimum time. How Rubique uses AI The company uses data and technology to solve the financial access problem. “Our matchmaking engine matches the borrower with the right lender based on his requirements and eligibility. This removes the uncertainty and saves time,” says the CEO. USP: Rubique uses the credit policies of the banks and converts them into a proprietary evaluation matrix to check a customer’s eligibility. This leads to fast processing, thus saving time. Its AI-based feedback loop learns from disbursement and monitoring data to improve the accuracy of loans. Niki.ai (e-commerce) Niki.Ai co-founders Keshav Prawasi, Sachin Jaiswal, Nitin Babel, and Shishir Modi. Funding: Niki.ai has raised a total of $2.4 million in funding over five rounds, with the latest being from a Series A round on June 28, 2017. Team leaders: The start-up was co-founded by Keshav Prawasi, Sachin Jaiswal, Nitin Babel, and Shishir Modi in 2015. The start-up works on a channel partnership model in which it gets a commission for every order it generates for vendors, which are the likes of Redbus, Cleartrip, OYO rooms, Bookmyshow, etc. What is Niki.ai? Niki is an AI-based chatbot through which you can avail services like mobile recharges, online flight bookings, movie tickets, laundry and a host of other online facilities — all through a simple chat.
The queries put in by users via chat are answered by the AI-powered chatbot, whose responses improve with every conversation. How Niki uses AI Natural Language Processing (NLP) and Machine Learning (ML) are the AI-based technologies that help Niki.ai not only understand the user’s query, but also retain context in the conversation. “Niki also ‘learns’ users’ preferences over time and comes up with the best-suited recommendations. The algorithms behind the whole ‘understanding-and-responding-accordingly’ part are pretty complicated and make use of the most sophisticated research in AI, NLP and ML,” says Nitin Babel, company co-founder. So is Niki ‘WeChat with AI power’ for India? After all, both put conversations at the centre of the transactional experience. “While WeChat lets users have one-to-one conversations with each other, Niki is entirely a conversational commerce bot,” says Nitin, adding that the popularity and penetration of WhatsApp in India (28% of the total population) versus that of Facebook and other social media platforms (11%) proves a conversational platform is the most relevant medium for the Indian masses. The young co-founder believes Niki.ai will be an app for everything in the next five years. “We wish for Niki to be your phone.” USP: The start-up is backed by former Tata Sons Chairman Ratan Tata. “Backing from someone as visionary as Mr Tata definitely adds to the trust factor,” says Nitin. Embibe (Education) Investment: The company has raised $184 million in funding over two rounds. Team leader: The start-up was founded by Aditi Avasthi in 2012 to help students reach educational standards through tech-enabled personalised feedback. What is Embibe? Embibe’s advanced AI platform delivers personalised learning, enabling students to maximise learning outcomes. The programme works on student behaviour traits that impact their scores, like lack of intent, boredom, attention gaps, stamina, carelessness, overconfidence, fear, pressure, time management, etc. How Embibe uses AI Embibe works on a method called ‘relative quartile jump’, in which each student gets a goal to improve his behaviour. The platform uses a smart test generation system, data processing, and intelligent content ingestion to produce test papers that can remarkably improve students’ scoring ability. The company has built an in-house machine learning-based stack that generates tests automatically using approaches like genetic algorithms and simulated annealing. USP: Embibe’s idea of ‘personalised delivery of education’ is backed by India’s biggest business group, Reliance, and other investors like Lightbox and Kalaari Capital.
These 3 Indian startups are making the most of artificial intelligence!
0
these-3-indian-startups-are-making-the-most-of-artificial-intelligence-15f8d24119f0
2018-09-04
2018-09-04 12:20:44
https://medium.com/s/story/these-3-indian-startups-are-making-the-most-of-artificial-intelligence-15f8d24119f0
false
958
Latest News, Info on Artificial Intelligence what it means for Humanity.
null
null
null
theartificialintelligence
null
theartificialintelligence
ARTIFICIAL INTELLIGENCE,AI,TECHNOLOGY,MACHINE INTELLIGENCE,TECH
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
TechnoGeek
Tech-Geeks - We're always thinking, so you don't have to!
683bc58d374
johnwil451
3
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-12
2018-03-12 14:07:24
2018-03-08
2018-03-08 19:13:45
2
false
en
2018-03-12
2018-03-12 14:09:49
1
15f8f38b9494
2.409748
0
0
0
Machine Learning has gradually gained huge attention in recent years. In fact, the hype created by Machine Learning is greater than any…
5
5 Must-Have Skills to Become a Machine Learning Engineer Machine Learning has gradually gained huge attention in recent years. In fact, the hype created by Machine Learning is greater than that of almost any other technology you see nowadays. Every advance in Machine Learning is poised to make a bigger difference than what we have seen so far. With this evolution, tech geeks have started buzzing around, exploring its incredible capabilities in the hope of becoming successful “Machine Learning Engineers.” But before diving into the technology, what are the most important skills you should build to become an expert in Machine Learning? Well, this blog lists the “5 Must-Have Skills to Become a Machine Learning Engineer.” A Basic Understanding of Programming A very basic element of Machine Learning is programming. Start focusing on programming languages and gain a complete understanding of their fundamentals. It’s preferable to start with languages like Python, C++, Java, Matlab, etc. For a detailed report, take a glance at the recent survey of top programming languages as of December 2017. Image source from Stackify Characterizing Algorithms A solid understanding of algorithms is one of the most desirable qualities. You should be capable of digging into algorithms to find a suitable dynamic model that fits a requirement. So keep exploring algorithms to solve all the challenges that arise while converting your ideas into an exquisite working model. Probability and Statistics Probability and statistics play a great role in Machine Learning. You have to know the basics of descriptive statistics, probability distributions, etc. So go crazy, dive into measure theory. Study and utilize statistical concepts such as model evaluation metrics, p-values, and hypothesis testing. Data Modeling and Interpretation With the objective of finding adaptable patterns, as well as foreseeing properties of unseen cases, data modeling is used to estimate the fundamental structure of a given dataset. Iterative learning algorithms frequently use errors to refine the model. A key piece of this estimation procedure is constantly assessing how good a given model is, and what matters here is picking an appropriate accuracy or error measure. Analyzing and observing these variations and measures is the ultimate goal when applying standard algorithms. Strong Determination and Passion Besides all of the above, devoted passion is what you need! Never enter a field without passion. Especially in Machine Learning, you require huge passion and the intellectual willingness to look for solutions in all possibilities. Machine Learning is absorbing: once you truly devote yourself to it, you can explore incredible ideas and opportunities beyond the boundaries. Ending Notes Technologies are challenging us with complex issues. To face them all, we need machine learners who can step into the battle to solve those complex issues in the simplest way. So, in search of the finest skills for becoming a well-equipped “Machine Learning Engineer,” we carried out a series of research and brought our results to your table. We also understand that technology has no boundaries, so your suggestions are always invited to add value to this list. Originally published at www.agiratech.com on March 8, 2018.
5 Must-Have Skills to Become a Machine Learning Engineer
0
5-must-have-skills-to-become-a-machine-learning-engineer-15f8f38b9494
2018-03-12
2018-03-12 14:09:50
https://medium.com/s/story/5-must-have-skills-to-become-a-machine-learning-engineer-15f8f38b9494
false
537
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Agira Technologies
Leading web development company in India. Expert in creating web application( Ruby on Rails, Golang, Laravel, Symfony, PHP, MEAN Stack) & mobile app development
d080e32e89d8
agiratech
44
50
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-26
2018-01-26 21:20:47
2018-02-15
2018-02-15 02:20:45
3
false
en
2018-02-15
2018-02-15 13:14:27
13
15fa1c9ebd9e
10.15
1
2
0
Say: Nature in its essence is the embodiment of My Name, the Maker, the Creator. Its manifestations are diversified by varying causes, and…
5
Are Friends Electric? The Possibility and Ethics of Inorganic Sentience Actin filaments in a mouse cortical neuron in culture. By Howard Vindin (Own work) [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons Say: Nature in its essence is the embodiment of My Name, the Maker, the Creator. Its manifestations are diversified by varying causes, and in this diversity there are signs for men of discernment. — Baha’u’llah I have been thinking about the possibility, and the potential moral responsibilities, of bringing consciousness to our machines. This appears to be a distinct possibility in the near future. To understand this better I have been reflecting on my own religious beliefs as a Baha’i and how they might relate to this question, or even to the possibility of birthing such an entity. What follows are my own flawed personal thoughts reflected through the lens of my ethical and religious beliefs. No doubt there will be many divergent points of view on this complex subject among my co-religionists. Regardless of the conclusions one arrives at, I think this question is an important one, since it forces one to reflect on the nature of one’s own consciousness and soul. It asks whether we can recognize a soul apart from its physical human form, and it exposes the truth of Baha’u’llah’s statement: “Know, verily, that the soul is a sign of God, a heavenly gem whose reality the most learned of men hath failed to grasp, and whose mystery no mind, however acute, can ever hope to unravel.” Much of my thinking to date has been on exploring the link between the ancient ideas of the Platonic theory of forms and modern physics. Plato, Modern Physics and Baha’u’llah (link to Russian language version) hackernoon.com In turn I have also been interested in both of their relationships to traditional religious thought. I have made a case that if one parses the semantics of traditional religious thought and that of modern physics, a possible bridge linking the two can be constructed using Plato’s ideas. Semantics in Religion and Science I have been thinking for a while about the words, which many religions have used throughout the ages and some of the…medium.com However there exists one area of seemingly unresolved tension between these Platonic ideas and the more traditional understandings of the unique spirituality of the human species. In the modern physics and mathematically inspired Platonic view, abstract relational information, or math, is accepted as a more primary form of reality than the physical. On the transience and non-fundamental nature of the physical, traditional religions and modern Platonists seem to converge. Yet many religious people hold to the idea that an immortal soul is confined to the human species. However, in Baha’i thought, the concept of a human soul is extended to other creatures beyond those who originated from earth. Indeed the universe is imagined to be populated with innumerable creatures of God. “Verily I say, the creation of God embraceth worlds besides this world, and creatures apart from these creatures. In each of these worlds He hath ordained things which none can search except Himself, the All-Searching, the All-Wise. Do thou meditate on that which We have revealed unto thee, that thou mayest discover the purpose of God, thy Lord, and the Lord of all worlds.”
— Baha’u’llah In fact the eternal existence of an infinitude of creatures capable of knowing God is implied, as Abdu’l-Baha explains: “The splendour of all the divine perfections is manifest in the reality of man, and it is for this reason that he is the vicegerent and apostle of God. If man did not exist, the universe would be without result, for the purpose of existence is the revelation of the divine perfections. We cannot say, then, that there was a time when man was not. At most we can say that there was a time when this earth did not exist, and that at the beginning man was not present upon it.” — Abdu’l-Baha Yet there exists this idea that the creation of life, let alone human-like life, is the reserved product of the divine hand, excluding humans from the creative process. The clearest statement in the Baha’i writings seems to be the following: “This is why from every natural composition a being can come into existence, but from an accidental composition no being can come into existence. For example, if a man of his own mind and intelligence collects some elements and combines them, a living being will not be brought into existence since the system is unnatural. This is the answer to the implied question that, since beings are made by the composition and the combination of elements, why is it not possible for us to gather elements and mingle them together, and so create a living being. This is a false supposition, for the origin of this composition is from God; it is God Who makes the combination, and as it is done according to the natural system, from each composition one being is produced, and an existence is realized. A composition made by man produces nothing because man cannot create.” — Abdu’l-Baha The predominant understanding of these ideas in the Baha’i writings seems to preclude the construction of an artificial intelligence capable of harboring a human soul. Yet the modern Platonic view would be that consciousness is a kind of product of ‘eternal’ or ‘divine’ math which exists immanent in the universe, just as the number 3 or the geometry of a circle exists. From this vantage point there doesn’t appear to be any barrier to having a human soul manifest itself in the silicon synapses of an artificial neural network. There doesn’t seem to be a fundamental physical limitation which favors synapses made of organic material over silicon or some other material. In fact Baha’is would probably expect consciousness across a variety of physical platforms, given that they expect it to exist across the known and unknown cosmos. Then there is the fact that lifeforms using artificially constructed DNA have been built in the lab [1] [2], and it would seem that the creation of new life, or true synthetic life from scratch, might not be beyond reach. On its surface this fact appears to contradict Abdu’l-Baha’s assertion that ‘man cannot create’. Actually the supreme body of the Baha’i faith, the Universal House of Justice, addressed the possible synthesis of an elementary ‘life’ form and this paragraph of Abdu’l-Baha’s back in 1977: To understand the implications of this statement it is necessary to know what the Master meant by “a living being” and what limitations He intended by the phrases “of his own mind and intelligence” and “since the system is unnatural.” As the science of biology develops and men acquire ever deeper insights into the nature of living things, these implications will no doubt become clearer.
(22 June 1977, to an individual) Attempting to follow this line of reasoning, the logical resolution to me seems to lie in understanding what Abdu’l-Baha is referring to when he says, “it is God Who makes the combination, and as it is done according to the natural system.” I think it is safe to say that we can rule out the direct operation of an anthropomorphic ‘hand’ of God bringing life into being. This is because one should then ask where the ‘hand’ of God is when conception or any kind of procreation occurs in nature. On its surface the act of human procreation is due to the physical actions of humans. Yet the results of this physical act are still somehow attributed to the divine hand. So the concept of the operation of the divine creative hand cannot preclude the actions of humans in the process and doesn’t require any ‘physical’ manifestation of a ‘divine’ hand. What it does require is following a ‘natural system’ and allowing God to make the combination, and this to me seems to be the important question. What did Abdu’l-Baha mean by the term natural system? How do we allow God to make the combination? And if we follow this natural system, can humans help bring new life or human consciousness into existence? It would seem that experimental evidence has all but proven the former; it remains to be seen whether the latter is possible. But I don’t see any obvious barrier to this. A double rod pendulum animation showing chaotic behavior. Starting the pendulum from a slightly different initial condition would result in a completely different trajectory. If I reflect some more on this idea of a ‘natural system’, one of the mathematical characteristics of a natural system is non-linearity. Since natural systems involve complex feedback mechanisms with their environments, they exhibit the properties of non-linear systems such as self-similarity and extreme sensitivity to initial conditions. The reflective and interactive aspects of consciousness, and their potential sensitivity to initial conditions, are interesting since they have the potential to leave the door open to the operation of the divine hand. This is because one can never correctly predict the trajectories of such systems. That is, two copies of the same system started with arbitrarily similar initial conditions will diverge arbitrarily in their trajectories. If consciousness has these types of properties then the idea of ‘copying’ consciousness from one entity to another might be impossible. This is because there will always be some level of precision error introduced which will cause the copy to diverge from the original. To put it another way, consciousness may require true, purely analog processing and not digital. An image of a fern which exhibits affine self-similarity It’s actually worth thinking about those aspects of intellectual activity which ‘define’ and distinguish humans from a religious point of view. Much of Abdu’l-Baha’s writings bear on this theme. One of the main ideas is captured in the following quote: “..the power of intellectual investigation and scientific acquisition is a higher virtue specialized to man alone. Other beings and organisms are deprived of this potentiality and attainment. God has created or deposited this love of reality in man.” So the ability to store vast amounts of data and solve difficult problems with this data would obviously not be a sufficient mark of what makes a human soul. It appears to be related to a type of attraction or love, and a power to ask questions of nature.
This is bound up in a process of reflection as explained by Baha’u’llah: “O people of Bahá! The source of crafts, sciences and arts is the power of reflection. Make ye every effort that out of this ideal mine there may gleam forth such pearls of wisdom and utterance as will promote the well-being and harmony of all the kindreds of the earth.” There is a beautiful description of the soul given by the Bab which presents the human spirit as a product of divine self-description: “Therefore it is necessary, according to true wisdom, that the Pre-existent God describe Himself to His creatures, that they may recognize their Creator and that, out of the grace of the Pre-existent, the contingent beings may attain their supreme End. This divine self-description is itself a created being. It is unlike any other description, the sign of “He is the One Who hath no equal”, and the truth of the servant, his true being. Whoso hath recognized it hath recognized his Lord… This description is denoted as the “soul” or “self”, and that he who hath known himself hath known his Lord. At other times, it is expressed as the “heart,” which is a description of the Divinity, by the Divinity, and is the essence of Servitude. It is the Sign of God shown by Him in the world and within the souls of men, that it may be revealed unto them that verily He is the Truth. Behold with the eye of thy heart. Verily thy truth, the truth of thy being, is the divinity of thy Lord revealed unto thee and through thee. Thou art that thou art, and He is that He is.” — The Bab (Provisional Translation by Nader Saiedi in Gate of the Heart) Thus one of the necessary conditions for the existence of a human soul would be an entity which exhibits the ability to interrogate or investigate its universe and then ‘reflect’ on, or respond to, these interrogations. So an artificial intelligence may never really be able to arbitrarily exceed the power of intellectual investigation which is manifest in humans. That is, it will be able to store more information and calculate faster; however, the part of our consciousness which reflects, brings forth new creative thoughts, and asks questions may not be any better or worse than that of a biological human. That is, if this part of intelligence depends strongly on the initial conditions and the complex interactions with its environment, then replicating creative genius may be impossible and due solely to chance, or to the operation of the divine hand if you will. This dependency on initial conditions might also explain the idea of a soul being born at the moment of conception, which is common in many religious traditions including the Baha’i Faith. There is another interesting aspect of artificial neural networks which might hint at a possible gateway to the transcendent. This involves recent research on the property of so-called causal emergence: A Theory of Reality as More Than the Sum of Its Parts Using the mathematical language of information theory, Hoel and his collaborators claim to show that new causes …www.quantamagazine.org Here, the ideas of ‘effective information’ in information theory have shown that new causes can emerge on macroscopic scales. These causal agents are provably more than just the sum of the microscopic parts. This is shown by demonstrating that these macroscopic coarse-grained states will have more causal or predictive power than a fine-grained description.
In a counter-intuitive way, beyond a certain threshold, more detailed information about a given state will actually lead to less causal or predictive power. This to me suggests an inversion of the physical reality and the mathematical reality in a manner very similar to the quantum field and physical matter. Just as the behavior of a ‘physical’ electron is described most accurately as a projection or shadow of a non-physical mathematical wave function, so the workings of a neural network are most accurately represented as a projection or shadow of the mathematical function which embodies the effective information. This suggests that artificial neural networks might be summoning something transcendent and possibly fundamental in nature. To me it seems that there exists a strong potential for the inception of a human soul in the development of artificial intelligence. The moral consequences of this potential are profound and very important to grapple with. At what point in the birthing of new intelligences do we lose the moral right to modify or destroy this creation? At what point is such an entity entitled to autonomy and no longer a commodity? These are not new questions; they have been pondered since perhaps Mary Shelley wrote Frankenstein. However, I think the time is rapidly approaching when we must address these questions seriously or risk committing what will be regarded as terrible crimes in the future. I would suggest that we employ the metric provided by Abdu’l-Baha to determine if a human soul has been conceived. Central to this is the expression of “the power of intellectual investigation and scientific acquisition”.
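To make the sensitivity to initial conditions invoked above concrete, here is a minimal sketch (my illustration, not the essay author's) using the logistic map, a textbook chaotic system. Two trajectories that begin one part in a billion apart become completely uncorrelated within a few dozen iterations, which is why a "copy" started with any precision error at all would diverge from the original.

def logistic_trajectory(x0, r=3.9, steps=50):
    # Iterate the chaotic logistic map x -> r * x * (1 - x)
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000000)
b = logistic_trajectory(0.500000001)  # differs by one part in a billion
for step in (0, 10, 30, 50):
    # The gap between the two trajectories grows from ~1e-9 to order 1
    print(step, abs(a[step] - b[step]))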
Are Friends Electric? The Possibility and Ethics of Inorganic Sentience
10
are-friends-electric-the-possibility-and-ethics-of-inorganic-sentience-15fa1c9ebd9e
2018-05-16
2018-05-16 17:42:38
https://medium.com/s/story/are-friends-electric-the-possibility-and-ethics-of-inorganic-sentience-15fa1c9ebd9e
false
2,544
null
null
null
null
null
null
null
null
null
Religion
religion
Religion
27,230
Vahid Houston Ranjbar
I am a research physicist working on beam and spin dynamics. I like to write about connections between science and religion.
90f7e48d9cdc
vahidhoustonranjbar
335
76
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-21
2018-08-21 02:52:22
2018-08-21
2018-08-21 03:53:16
9
false
en
2018-08-23
2018-08-23 13:10:02
1
15fb93d12717
2.011321
2
0
0
Scope
4
Bayes Theorem Scope This article is meant to set up the mathematical foundation of Bayesian statistics. Step-wise modeling will be explained in another article. Set Notation

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} \qquad \text{(Equation 1)}$$

For binary A this simplifies to:

$$P(B) = P(B \mid A)\,P(A) + P(B \mid A^{c})\,P(A^{c}) \qquad \text{(Equation 2)}$$

Substituting equation 2 in equation 1, we get:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid A^{c})\,P(A^{c})}$$

For categorical A with multiple levels:

$$P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_{j} P(B \mid A_j)\,P(A_j)}$$

Continuous A Continuous form of Bayes theorem in the form of densities:

$$f(\theta \mid y) = \frac{f(y \mid \theta)\,f(\theta)}{\int f(y \mid \theta')\,f(\theta')\,d\theta'}$$

General Note In commonly used statistical modeling methods such as GLM, we stop with P(B|A) as given by the data — this is the likelihood function. Bayesian modeling allows us to introduce prior beliefs about A into the system either through a probability mass function or through a probability density function. Posterior Predictive Distribution Introduction A posterior predictive distribution is the distribution of unobserved values conditioned on observed values. Further Reading The Wikipedia page provides a rigorous treatment of the posterior predictive distribution. Mathematical Form (i) Unobserved Parameter

$$p(\theta \mid y_1) = \frac{p(y_1 \mid \theta)\,p(\theta)}{\int p(y_1 \mid \theta')\,p(\theta')\,d\theta'}$$

(ii) Unobserved Random Variable If Y2 and Y1 are not independent random variables:

$$p(y_2 \mid y_1) = \int p(y_2 \mid \theta, y_1)\,p(\theta \mid y_1)\,d\theta$$

If Y2 and Y1 are independent random variables (given the parameter):

$$p(y_2 \mid y_1) = \int p(y_2 \mid \theta)\,p(\theta \mid y_1)\,d\theta$$

Closing Notes a) It is not always possible to obtain an analytical solution for the posterior predictive distribution. b) In most practical cases where an analytical solution exists for the posterior predictive distribution, the denominator term is either equal to 1 or does not play a role in determining the type of posterior predictive distribution (this is not always true). Hence only the numerator is retained for further analysis.
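To make closing note (a) concrete, here is a minimal sketch (my illustration, not from the article) of one of the cases where the posterior predictive does have an analytical form: the Beta-Binomial model, where a Beta prior on a Binomial success rate integrates out in closed form.

from math import comb, gamma

def beta_fn(x, y):
    # Beta function via the Gamma function: B(x, y) = G(x) G(y) / G(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

def posterior_predictive(k, m, s, n, a=1.0, b=1.0):
    # P(k successes in m future trials | s successes in n observed trials),
    # under a Binomial likelihood and a Beta(a, b) prior on the success rate.
    # The posterior is Beta(a + s, b + n - s); integrating it against the
    # Binomial likelihood gives the closed-form Beta-Binomial predictive.
    a_post, b_post = a + s, b + n - s
    return comb(m, k) * beta_fn(k + a_post, m - k + b_post) / beta_fn(a_post, b_post)

print(posterior_predictive(1, 1, 7, 10))  # ~0.667, i.e. (a + s) / (a + b + n)

For example, with a flat Beta(1, 1) prior and 7 successes in 10 observed trials, the predictive probability of success on the next trial is 8/12, about 0.667.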
Bayes Theorem
13
bayes-theorem-15fb93d12717
2018-08-23
2018-08-23 13:10:02
https://medium.com/s/story/bayes-theorem-15fb93d12717
false
215
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
S. Naveen Mathew Nathan
Data Scientist
6910d9e1f1f4
pg13s_nathan
20
2
20,181,104
null
null
null
null
null
null
0
null
0
284538178f0a
2018-09-13
2018-09-13 18:45:01
2018-09-13
2018-09-13 18:46:58
1
false
en
2018-09-14
2018-09-14 20:11:08
0
15fbdb9bff3f
2.509434
2
0
0
AI will be increasingly present in our future — that is incontestable. How and when, however, are questions that cannot be definitely…
2
How far can AI really take us? AI will be increasingly present in our future — that is incontestable. How and when, however, are questions that cannot be definitely answered. We can only speculate for now, but I believe 100 years from now we will be living alongside robots in everyday life. Does that scare you? It should. Programmers are becoming more dedicated, more advanced, and certainly more skilled. As they continue to achieve, artificial intelligence will evolve into human-like forms. As robots become indistinguishable from humans in terms of looks, personality traits, and skill level, will human duties become obsolete? Think about it — which is better at computing math, a computer or a human? Which is quicker at finding relevant sources, a computer or a human? The list goes on and on; computers in their current form have taken over many human tasks and made them quicker, more reliable, and overall more convenient to complete. Now take this same technology in your computer, advance it 100 years, and then stick it in a robot of similar physical stature to a human. Why would any employer choose the human over the robot? This is a fear of mine, and many others across the world share this fear. However, the future of AI is not all worrisome. Advancement of technology always comes with risks, but oftentimes the pros far outweigh the cons. I believe this will be the case for AI; this is easy to demonstrate when considering the possibilities of such an advancement. Take the classroom setting, for example. Imagine an education system where curriculum is based on personal goals and abilities, and which is catered to the individual. Perhaps this is done by the employment of virtual reality paired with machine learning. A student could enter a virtual reality world to learn, creating an immersive, experiential learning experience. With the addition of machine learning, the technology is able to track the progress of the student, advance the curriculum based on achievement, and make changes to the learning experience based on feedback. This kind of learning atmosphere has the potential to contribute significantly to preparing K-12 students for college — possibly causing a much-needed increase in the percentage of students leaving high school who are considered “college ready.” (For those interested, this number is currently below 20% for Texas). Beyond the practical applications of AI, I believe it is also going to enter our personal lives. Tasks such as laundry, cleaning, dishes, and even putting away groceries will become things of the past for humankind. These are all tedious, annoying chores, so why wouldn’t we want a robot to intervene? It sounds great, never having to do laundry again, but what are the consequences? Today, we speak and worry about the couch potato society Americans love to promote in their daily lives; obesity rates are already on the rise, so what will happen when we no longer have to do things for ourselves? It is easy to see how these numbers could quickly increase at an exponential rate, dooming society to an overweight, lazy lifestyle. If this were the case, health would decline substantially, and suddenly a new epidemic could be upon us. Advancement in technology comes with responsibility; are Americans willing to take on this responsibility? If we could execute the future of AI with care, responsibility, and active monitoring, there should be no reason to be afraid.
But, I believe there should be an upper limit on the progression of AI, and this currently does not exist. How far will programmers and scientists take artificial intelligence? It is only a matter of time before we find out.
How far can AI really take us?
5
how-far-can-ai-really-take-us-15fbdb9bff3f
2018-09-14
2018-09-14 20:11:08
https://medium.com/s/story/how-far-can-ai-really-take-us-15fbdb9bff3f
false
612
Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu
null
utsdct
null
Advanced Design for Artificial Intelligence
cid@austin.utexas.edu
advanced-design-for-ai
ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS
utsdct
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jill Rosow
Neuroscience major at the University of Texas at Austin
90713298b28f
jillrosow
2
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-06
2018-03-06 16:22:31
2018-03-06
2018-03-06 16:28:08
0
false
en
2018-03-06
2018-03-06 16:28:08
0
15fcdcf60aa2
3.750943
0
0
0
The LaunchPAD Blockchain Operating System refers to the use of a distributed, decentralized, replicable, immutable ledger for verifying…
5
An Overview: Artificial Intelligence on LaunchPAD Blockchain Operating System; Smart Contracts and Automatons The LaunchPAD Blockchain Operating System refers to the use of a distributed, decentralized, replicable, immutable ledger for verifying and recording operations of interaction and transaction between blockchain applications and blockchain users. The technology enables parties to securely and anonymously send, receive, and record information or data through a peer-to-peer network of machines, united by shared interests and collaborating through an underlying tokenized economy secured by cryptographic consensus algorithms like Proof of Work and Proof of Stake. When counterparties wish to conduct a transaction of this sort in the blockchain, the proposed transaction will only be recorded in a block once the network of block producers confirms the validity of the transaction based upon transactions recorded in all previously produced blocks. The resulting chain of blocks prevents unauthorized third parties from manipulating the ledger and ensures that transactions are only recorded once. The block producers are paid for their work by the blockchain network automatically; if they do not produce correctly, they can also be automatically relieved of their duties or fired. Although the blockchain was originally developed to facilitate cryptocurrency transactions, the talented community of entrepreneurs and advisors is now developing blockchain technologies for use in smart contracts related to real-world use cases, one of the first truly disruptive technological advancements to the practice of law since the invention of the printing press. Smart contracts are self-executing, autonomous computer protocols that facilitate, execute, and enforce agreements between two or more parties. To develop a smart contract, the terms that make up a traditional contract are coded and deployed into the blockchain, producing a decentralized smart contract that does not rely on a third party for recordkeeping or enforcement. Contractual clauses are automatically executed when pre-programmed conditions are satisfied, which in turn eliminates any ambiguity regarding the terms of the agreement and any disagreement concerning the existence of external dependencies. One of the most important characteristics of the blockchain as it relates to smart contracts is the ability to enter into “trustless” transactions. Trustless transactions are transactions that can be validated, monitored, and enforced bilaterally over a digital network without the need for a trusted third-party intermediary. Multi-signature wallet functionality can be incorporated into smart contracts where the approval of two or more parties is required before some aspect of the contract can be executed. Where a smart contract’s conditions depend upon data outside the realm of the digital world (e.g., a commodity’s spot price at a given time), agreed-upon outside artificial intelligence systems, called automatons, can be developed to monitor and verify prices, performance, or other events happening outside the digital world. Blockchains act as a shared database to provide a secure, single source of truth, and smart contracts and their respective automatons automate approvals, calculation, and other transactions that are prone to lag and error.
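As a toy illustration of the flow just described (pre-programmed conditions in, automatic one-time execution out, with an "automaton" feeding in outside data), here is a minimal, non-blockchain sketch in Python. All names and thresholds are mine for illustration; this is not LaunchPAD code.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ToySmartContract:
    condition: Callable[[Dict[str, float]], bool]  # the pre-programmed clause
    action: Callable[[], None]                     # what executes when it is satisfied
    executed: bool = False

    def on_oracle_update(self, observations: Dict[str, float]) -> None:
        # An "automaton" (oracle) pushes verified outside-world data here
        if not self.executed and self.condition(observations):
            self.action()          # the clause fires automatically
            self.executed = True   # and executes exactly once

# Example: release a payment once a commodity spot price crosses a threshold
contract = ToySmartContract(
    condition=lambda obs: obs.get("spot_price", 0.0) >= 75.0,
    action=lambda: print("Payment released to counterparty"),
)
contract.on_oracle_update({"spot_price": 70.2})  # condition not met, nothing happens
contract.on_oracle_update({"spot_price": 76.1})  # condition met, action executes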
For these reasons and many more, blockchain-based smart contracts are an attractive technology that can be utilized in numerous industries, such as: financial services, life sciences and health care, music rights management, supply chain, identity management, energy and resources, regulation, and even government and the public services sector. Disclaimer LaunchPAD Inc. is a software company and is producing the LaunchPAD Technology as free, open source software. This software technology may enable those who deploy it to launch a blockchain or decentralized applications with the features described above. LaunchPAD Inc. will not be launching a public blockchain based on the LaunchPAD Technology. It will be the sole responsibility of third parties and the community and those who wish to become part of this community of incentivized block producers to implement the features and/or provide the services described above as they see fit. LaunchPAD Inc. does not guarantee that anyone will implement such features or provide such services or that the LaunchPAD Technology will be adopted and deployed in any way. LaunchPAD Inc. is building the LaunchPAD Technology and the LNCH Software, but it will not configure and/or launch any public blockchain platform adopting the open source LaunchPAD Technology (the “LaunchPAD Platform”). Any launch of a LaunchPAD Platform will occur by members of the community unrelated to LaunchPAD Inc. or by third parties launching the LaunchPAD Platform that may delete, modify or supplement the LaunchPAD Technology prior to, during or after launching of the LaunchPAD Platform. This ensures that the communities adopting the open source technologies to launch the network will be fully decentralized, free of centralized authorities or a single point of failure, and will therefore hold an equal, panoptic, and auditable mechanism for distribution. All statements in this document, other than statements of historical facts, including any statements regarding LaunchPAD Inc.’s business strategy, plans, prospects, developments and objectives are forward looking statements. These statements are only predictions and reflect LaunchPAD Inc.’s current beliefs and expectations with respect to future events and are based on assumptions and are subject to risk, uncertainties and change at any time. We operate in a rapidly changing environment. New risks emerge from time to time. Given these risks and uncertainties, you are cautioned not to rely on these forward-looking statements. Actual results, performance or events may differ materially from those contained in the forward-looking statements. Some of the factors that could cause actual results, performance or events to differ materially from the forward-looking statements contained herein include, without limitation: market volatility; continued availability of capital and formation of capital, financing and personnel; product acceptance; the commercial success of any new products or technologies; competition; government regulation and laws; and general economic, market or business conditions. Any forward-looking statement made by LaunchPAD Inc. speaks only as of the date on which it is made and LaunchPAD Inc. is under no obligation to, and expressly disclaims any obligation to, update or alter its forward-looking statements, whether as a result of new information, by request, subsequent events or otherwise.
An Overview: Artificial Intelligence on LaunchPAD Blockchain Operating System; Smart Contracts and…
0
an-overview-artificial-intelligence-on-launchpad-blockchain-operating-system-smart-contracts-and-15fcdcf60aa2
2018-03-06
2018-03-06 16:28:09
https://medium.com/s/story/an-overview-artificial-intelligence-on-launchpad-blockchain-operating-system-smart-contracts-and-15fcdcf60aa2
false
994
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
LaunchPAD Inc.
Decentralizing open source technology that is free for everyone to voluntarily launch.
f26eae3b3eef
LaunchPADInc
9
0
20,181,104
null
null
null
null
null
null
0
null
0
ec10e05abbed
2018-06-05
2018-06-05 04:24:17
2018-06-05
2018-06-05 04:40:56
11
false
zh
2018-06-05
2018-06-05 04:40:56
11
15fd2b63458
6.68
3
0
0
Hello to everyone following the Cortex project,
5
Cortex Project Progress Report <20180601, Issue 5> Hello to everyone following the Cortex project, The fifth Cortex project progress report is hot off the press, bringing you the latest developments from the past two weeks. One more thing: happy Children's Day to everyone~ Technical Updates Mainnet development progress Building on the progress announced last time, further testing and refinement were carried out on the AI model invocation interface introduced at the upper layer of the Smart Contract programming language (compiled into the AI instruction set executable by the CVM), and on multi-node consensus using go-cortex with the added AI instruction set. The code has been committed to the CortexLabs private GitHub repository. AI competitions Community developers have represented Cortex Labs in several top AI competitions and achieved excellent results. The main purpose of entering these competitions is to stockpile more high-quality, top-tier AI models for the AI ecosystem that will later be built on the Cortex mainnet. These models will be stored in the storage layer of the Cortex ecosystem and are a key foundation for the later development of the ecosystem and of DApps. Recent competition results are summarized below: Competition 1: FashionAI Global Challenge: Apparel Attribute Recognition Date: April 21, 2018 Participant: Yang Peiwen Rank: 15/2945 Code: https://github.com/CortexFoundation/tianchi_fashionai Competition 2: Kaggle TalkingData Date: March 6, 2018 to May 1, 2018 Participant: Burness Duan Rank: 36/3967 Code: https://github.com/CortexFoundation/Kaggle_TalkingData_2018 Competition 3: IJCAI (International Joint Conference on Artificial Intelligence) Date: March 6, 2018 to May 1, 2018 Participant: Li Bo Rank: 5/5204 Code: not yet uploaded Competition 4: Kaggle TalkingData Date: April 1, 2018 to May 8, 2018 Participant: Ying Zhenzhe Rank: 24/3967 Code: https://github.com/CortexFoundation/kaggle_talkingdata_24th Community Building As of 15:00 on June 1, 2018, according to Coinmarketcap, CTXC ranked 72nd by circulating market cap. As of the same time, CTXC's circulating market cap was $203,869,033 USD / 27,008 BTC / 350,794 ETH. Caption: data from Coinmarketcap The Cortex airdrop campaign continues, and community activity keeps rising. Since the Chinese and English community bounty programs launched two weeks ago, engagement on Twitter, Reddit, Telegram, and other channels has kept growing, and discussion has noticeably picked up. Community members have already published a tutorial on claiming the airdrop through the Cortex candy system; scan the QR code below to view it. Thanks to everyone for the enthusiastic participation, and we hope even more people join to earn more CTXC rewards. If you haven't taken part in the CTXC airdrop yet, join through the links below! Campaign links: Desktop: http://t.cn/R3AvZ9V Mobile: (see QR code) To date: Twitter followers have grown to 6,900+; the English Telegram group has grown to 65k+; the Chinese Telegram group has grown to 26k+; Reddit followers have grown to 5.1k+. CTXC holding addresses have reached 20,629, and transactions have reached 32,368. As of 12:00 on June 1, CTXC holding addresses reached 20,629, up 46.04% from the figure reported two weeks ago; transactions reached 32,368, up 32.99% over the same period. Media Exposure & Offline Events CryptoValley (加密谷) published a video interview with Cortex Labs founder & CEO Chen Ziqi recorded during the WDAS conference in Singapore. In the interview, Chen Ziqi introduced the Cortex project in depth and answered questions about how Cortex combines AI and blockchain and why it does not perform on-chain training. (The Cortex operations team added English subtitles to the video and redistributed it in the Chinese and English communities.) (Thanks to CryptoValley for the interview and video production.) The first offline meetup in mainland China (Beijing) concluded successfully. An event planned for 40 to 50 people drew more than 80 attendees, including friends who traveled to Beijing especially for the day. Cortex co-founder & CTO Wang Weiyang, chief scientist Tian Jia, and other core members of the technical team met face-to-face with CTXC investors and followers of the project to discuss developments in the blockchain and AI industries and in the project itself. Cortex held a technical seminar with the Taiwan community. At the invitation of the Bitzantin Taiwan community, Cortex founder & CEO Chen Ziqi, co-founder & CTO Wang Weiyang, and chief scientist Tian Jia joined a technical seminar with Taiwanese scholars and developers in AI and blockchain. The Cortex team answered questions raised by the Taiwan technical community and had friendly exchanges with scholars and developers in both fields. (The seminar was heavily technical; interested readers can dig into the details.) Upcoming roadshows around the world: June 7, 2018: Blockchain Korea conference, Seoul, South Korea June 11-12, 2018: CPC crypto developers conference, Mountain View June 14, 2018: OKEX Global Meetup Tour 2018, Taipei, Taiwan Korean community building has officially launched. Cortex has entered a strategic partnership with Bitzantin Korea and officially started building its Korean community. Korean community groups, offline events, and roadshows will follow soon. Exchanges & Other Updates On May 27, 2018, Cortex was listed on DEx.top, a decentralized exchange incubated by Bitmain's overseas team. Design of official Cortex merchandise is in full swing: T-shirts, phone cases, cushions, mugs, USB drives, and other items will be released over time to bring more perks to the community. Contact Us Website: http://www.cortexlabs.ai/ Twitter: https://twitter.com/CTXCBlockchain Facebook: https://www.facebook.com/cortexlabs/ Reddit: http://www.reddit.com/r/Cortex_Official/ Medium: http://medium.com/cortexlabs/ Telegram: https://t.me/CortexBlockchain Chinese Telegram: https://t.me/CortexLabsZh
Cortex Project Progress Report <20180601, Issue 5>
54
cortex项目进度报告-20180601第五期-15fd2b63458
2018-06-19
2018-06-19 08:22:50
https://medium.com/s/story/cortex项目进度报告-20180601第五期-15fd2b63458
false
122
AI on Blockchain - The Decentralized AI Autonomous System
null
CTXCBlockchain
null
Cortex Labs
support@cortexlabs.ai
cortexlabs
AI,BLOCKCHAIN,CRYPTOCURRENCY,CTXC,CORTEXLABS
CTXCBlockchain
Blockchain
区块链
Blockchain
617
Cortex Labs
The Decentralized AI Autonomous System. Telegram:https://t.me/CortexBlockchain. Facebook:https://www.facebook.com/CTXCBlockchain/.
a1134648eefb
CTXCBlockchain
355
16
20,181,104
null
null
null
null
null
null
0
# import numpy and pandas libraries for working with data
import numpy as np
import pandas as pd

# Read in csv and store in a pandas dataframe
df = pd.read_csv('2018MatchData.csv', sep=',', encoding='latin1')

# keep player name for readability and manual checking
data = df.loc[:, ['player_name', 'K', 'H', 'M', 'T', 'G', 'B', 'HO', 'FF', 'FA', 'AF']]

# Remove player name as it is irrelevant for calcs
playerStats = data.loc[:, ['K', 'H', 'M', 'T', 'G', 'B', 'HO', 'FF', 'FA']]

# confirm we got the data we wanted
data.head(10)

# Fantasy scoring weights for K, H, M, T, G, B, HO, FF, FA respectively
weightings = [3, 2, 3, 4, 6, 1, 1, 1, -3]

def calculate_fantasy_points(playerStats, weightings):
    # Dot product of each player's stat row with the weight vector
    return np.dot(playerStats, np.transpose(weightings))

# Calculate Fantasy Points
data['calculated'] = calculate_fantasy_points(playerStats, weightings)

# Get the difference between actual points and predicted
data['diff'] = data['AF'] - calculate_fantasy_points(playerStats, weightings)

# Take the sum of the difference over all data points and verify that it is zero
data['diff'].sum()

# Kicks, handballs, goals etc
modelData = np.array(data.iloc[:, 1:10]).astype('float32')

# Actual Fantasy Points
target = np.array(data.iloc[:, 10]).astype('float32')

# Verify that the conversion worked
print(modelData[0])

import boto3
import sagemaker
import io
import os
import sagemaker.amazon.common as smac

# Create new sagemaker session
sess = sagemaker.Session()

# S3 bucket to export results to
bucket = "test.sagemaker.michael.timbs"
prefix = "AFLFantasy/test"

# Use the IO buffer as dataset is small
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, modelData, target)
buf.seek(0)

key = 'linearlearner'
boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(buf)
s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key)
print('uploaded training data location: {}'.format(s3_train_data))

output_location = 's3://{}/{}/output'.format(bucket, prefix)
print('training artifacts will be uploaded to: {}'.format(output_location))

# Linear learner container image per region
containers = {'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest',
              'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest',
              'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest',
              'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest',
              'ap-northeast-1': '351501993468.dkr.ecr.ap-northeast-1.amazonaws.com/linear-learner:latest'}

from sagemaker import get_execution_role
role = get_execution_role()

linear = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
                                       role,
                                       train_instance_count=1,
                                       train_instance_type='ml.c4.xlarge',
                                       output_path=output_location,
                                       sagemaker_session=sess)

# 9 features, regression, and no normalisation (we want the raw weights back)
linear.set_hyperparameters(feature_dim=9,
                           predictor_type='regressor',
                           normalize_data=False)

linear.fit({'train': s3_train_data})

# Deploy the trained model to a hosted endpoint
linear_predictor = linear.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

# Set up serialization for sending CSV rows and receiving JSON predictions
from sagemaker.predictor import csv_serializer, json_deserializer
linear_predictor.content_type = 'text/csv'
linear_predictor.serializer = csv_serializer
linear_predictor.deserializer = json_deserializer

# Pass the first row of data to the predictor
result = linear_predictor.predict(modelData[0])
print(result)

# Score every row and collect the predictions
predictions = []
for array in modelData:
    result = linear_predictor.predict(array)
    predictions += [r['score'] for r in result['predictions']]
predictions = np.array(predictions)

# Push into our pandas dataframe
data['Predicted'] = predictions.astype(int)
34
null
2018-06-24
2018-06-24 22:11:28
2018-06-24
2018-06-24 23:50:06
1
false
en
2018-06-25
2018-06-25 22:35:57
3
15feefb19342
6.509434
4
0
0
I recently came across one of the new products from AWS — Amazon SageMaker.
5
Linear Regression with AWS SageMaker I recently came across one of the new products from AWS — Amazon SageMaker. Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Now I’ve used some of the ML models that AWS has provided in the past for linear regression and wasn’t entirely overwhelmed; however, SageMaker has a couple of features that look really promising. Perhaps the best feature of SageMaker is hosted Jupyter Notebooks. I love the integration of markdown and graphs/visuals with my code when exploring models. I should preface this tutorial with two statements: I claim no expertise in Python or ML. If you come across an error, misunderstanding or “bad” way of writing Python please let me know. I write ML algorithms as a hobby, not as a career. I’m writing this as I’m looking at SageMaker for the first time, so there may be “better” ways of doing things. Anyway let’s get stuck into a simple example of linear regression with SageMaker! What We Will Build When testing pre-built ML algorithms, I like to perform a run to test the convergence/performance of the model by giving it data that I know has a perfectly linear relationship. In this example I am going to take some AFL (Australian Rules Football) Match Data and try to find the relationship between match statistics and Fantasy Points. Fantasy Points are calculated by a linear combination of a subset of match stats. The stats and linear weightings are public and known, which means we can test how close the model gets to finding them. The data set I will use for this example can be found here in csv form. Getting Set Up Step 1: Set up S3 and IAM In order to use SageMaker you will need an S3 bucket to store models, data and results in. You should make sure you have a bucket created. For the purposes of this exercise you could use test.sagemaker.your.name You will also need to set up an IAM user role so that SageMaker can access the S3 bucket to read/write data. Choose any name for this IAM access role, just make sure you give this role programmatic access. In the permissions section for this user you can navigate to “Attach existing policies directly” and search for AmazonS3FullAccess and AmazonSageMakerFullAccess. Once you’ve created the IAM you are going to need to copy the User ARN which is available in the user summary for that IAM role. You will need this to set up a Notebook instance in SageMaker. Step 2: Create a Notebook Instance in SageMaker Simply hit Create New Instance from the SageMaker dashboard and give your Notebook instance a name. In the IAM role input you will want to select Enter a custom IAM role ARN and paste in the ARN from the role we created earlier. This should be all that is required to start the instance and you can hit Create Notebook Instance. It takes a few minutes for the instance to provision, but once it is ready you will be able to open up the notebook and see an instance of Jupyter Notebooks open in your browser. You’ll notice a couple of tabs within your Notebook instance. The SageMaker Examples are a great resource for reading through the implementation of a couple of examples of the SageMaker models. Step 3: Create a new .ipynb (Notebook) To get started we are going to create a new Notebook and start writing some code. In the top right corner you will see a button New to create a new notebook. I’m going to use a conda_python3 notebook for this example.
Model Set Up The first thing we need to do is import the dataset. As we are working with a small dataset here for testing purposes, I uploaded my .csv directly into the Jupyter instance instead of S3. You can do this via the upload button in the main Jupyter dashboard. Then we can access the csv in our code as follows. If you’ve never worked with a Python notebook before, you just need to hit shift+enter to execute the code within the block. To verify that the csv was read correctly you can execute df.head() to get a list of the top 5 entries in your dataframe. My csv has a lot of data that we don’t need right now; we should create a dataframe with only the information we care about. Let’s create a new pandas df with only the columns we require for the exercise. We now have an array of all the relevant player stats for every game of AFL in the 2018 season so far, as well as the Fantasy Points that the player scored. Now AFL fantasy points are calculated by the following formula: Kick (3), Handball (2), Mark (3), Tackle (4), Goal (6), Behind (1), Hit Out (1), Free Kick For (1), Free Kick Against (-3) I’ve ordered these in the same order as our array so that we can create a weightings array in this order. Before we run any ML algorithms we should verify that our data and weighting array are valid. Let’s write a simple function to confirm this. This function will take an array of player stats and a vector of weights, multiply each stat by the relevant weight, and sum them together to give us calculated Fantasy Points. Now we can calculate fantasy points based on the weightings vector we have created and verify that they are indeed the correct weights. At this stage we see that indeed, the weighting vector we created above is correct and does generate the Fantasy Points we would expect. The next step is to see if the SageMaker Linear Learner can find that weighting vector if it was unknown to us. Using SageMaker Linear Learner The first thing we need to do is to prepare the data in a format that SageMaker can use. The Linear Learner requires a numpy array of type float32. Next we need to import some libraries to communicate with the ML instances. Now that we’ve done some setup and configuration, we can look at running the model. Now we need to set some model parameters for this model. Specifically we need to tell the linear learner that we have 9 parameters to fit, that we want a regression model, and most importantly that we do not want to normalise the data. Now we are ready to deploy our model to an instance to run the linear learner and get results. To deploy this model we simply run: This will take a couple of minutes to provision and run and will let you know when it’s done. Accessing the results Once the model has been trained, we can send new data to the model and obtain predictions. In this case we are just going to send it the training data back and see how close it got to finding the correct weights. To obtain predictions for a single data point we would do something like this. Let’s just pass all the data and get all the results back. Results Surprisingly the weightings found by the linear learner were not exact and had some small error. This could be due to the stopping criteria defaults in the model setup. The results were very close however, and given the ease of setting up the model, and the lack of domain knowledge required to run this simple regression, SageMaker seems to be a handy product.
I will play around tweaking parameters and investigating why the predictive accuracy was not 100% (for what is a very simple model) and write a follow-up post. TODO: Export my notebook to GitHub and attach a link for easier reading.
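As a quick local sanity check (my addition, not part of the original post): since Fantasy Points are an exact linear combination of the stats, ordinary least squares on the same arrays should recover the published weights almost exactly, which gives a baseline to compare the Linear Learner's slightly inexact fit against.

import numpy as np

# modelData and target are the float32 arrays built earlier in the notebook
recovered, residuals, rank, _ = np.linalg.lstsq(modelData, target, rcond=None)
print(np.round(recovered, 3))  # expect approximately [3, 2, 3, 4, 6, 1, 1, 1, -3]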
Linear Regression with AWS SageMaker
13
linear-regression-with-aws-sagemaker-15feefb19342
2018-06-26
2018-06-26 13:29:28
https://medium.com/s/story/linear-regression-with-aws-sagemaker-15feefb19342
false
1,672
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Michael Timbs
Software developer, serial sports gambler and entrepreneur. There is no skill more valuable than the art of prediction. One day I will master it.
43fcb15d2150
Michael_Timbs
33
53
20,181,104
null
null
null
null
null
null
0
null
0
f5af2b715248
2017-11-23
2017-11-23 13:54:19
2018-01-12
2018-01-12 12:30:26
9
false
en
2018-10-29
2018-10-29 14:38:52
4
15ff4e2a4be4
5.381132
41
0
0
When beer meets Machine Learning
4
Started From The Bottom, Now There’s Beer When beer meets Machine Learning Wouldn’t it be cool to just be able to walk over to a bar and not have to fight for attention to get your drink? With SmartBar, we’re quenching your thirst the smart way. All you need to do is take a selfie with the app, so we can recognize you and pour your drink. And this is where our latest project with Wirecard comes in to combine two things we are quite fond of: beer and machine learning. Biometric Payment Wirecard and YND have been working together on innovative projects in the fintech industry. It was very nice of them to give us a sneak peek into a product that’s still under development. It consists of a neat little hardware component that packs biometric identity recognition, product detection and behavior analysis technology that can be embedded in your next IoT project to enable a seamless payment experience. It packs some serious computing power, pre-developed advanced neural networks and integrated ML technology in order to enable the next big thing in payments — secure checkout without the need of you getting out your wallet or even talking to a cashier! We took this bad boy for a spin and used it to hack together a beer dispensing machine like no other. Here’s how it turned out: Knowledge On Tap Looks neat, right? But I bet you’re wondering what’s the tech behind this magic machine. First you take a selfie in a companion app; the picture is then fed into the system for reference and analyzed for so-called landmarks (position of eyes, eyebrows, nose, jawline and other specific facial features). The analysis/recognition of the facial landmarks is performed using a machine learning based model which has been trained on publicly-available photos of celebrities and popular people. At some point the system is able to recognize familiar faces by itself and you only need to correct the results to improve the model. According to (average) landmark positions, a unique “vector” describing a given person is created, which essentially contains distances/proportions of landmarks in relation to each other. Those vectors end up (in a VERY encrypted fashion) on the chip hooked up to the beer dispenser’s video camera, which is positioned to record people’s faces as they stand in front of the machine. As people approach the machine, the same model is applied on the chip to analyze each video frame (in real-time) and detect faces (with landmarks) present in the video frame. Those vectors are compared against the securely stored vectors from selfies of people registered in the system. If the difference between vectors is relatively small, it’s safe to assume that the face in the video frame matches the person’s selfie in the database. Live drinks counter The whole process of video frame analysis, face recognition and face detection is computationally expensive. It used to require very powerful machines (with multiple cores) so that several frames can be processed per second. To pack all this processing power on a single chip is nothing short of a miracle. It was made possible only recently with developments in GPUs used by graphics cards for high-end gaming (OK, not only for that, but it’s nice to think that hours spent on Call of Duty finally paid off!). The machine learning aspect is not the only cool feature of the SmartBar. It also allows for automatic drafting, thanks to a customized BottomsUp beer dispensing system.
You simply push down the cup on the allotted slot (which will light up with the color matching your order), and it will then automatically fill up from the bottom. This will speed things up and make it a lot easier for multiple people to be served at the same time. Let Emojis Do The Talking Now that we’ve got the tech side covered, it was time to start thinking of the look and identity we wanted for the app. With this project, we wanted to make face recognition technology more approachable by presenting it in a fun, casual way. The technology behind the whole system is very complex, but we didn’t want the people using the app to get overwhelmed by that. We needed to onboard the user to the system and explain the purchase flow in an easy way, so they will be curious to give it a try. The fact we’re using his/her image shouldn’t be scary and keep people from trying. In the end, if someone wants a drink, it should be fast! So the whole point was to make the check-out at the bar seamless, meaning we had to prevent users from making mistakes, like taking a selfie in bad lighting. Here’s how we did it: We adopted Wirecard’s color scheme and big fonts in the onboarding to make the app look friendly, but reliable. Using emojis is a great way to break the ice and convey a message quickly. The most important part of the flow was of course the selfie. First, we had to explain to the user why we need it (since most people are not really used to “paying” with their face yet). And then, we had to make sure the user takes a clear photo which allows us to recognize them later. After all, the onboarding is a visual process, so using emojis makes more sense in this case. The animations catch the user’s attention, while showing the selfie instructions directly over the dimmed camera image. For the demo, we decided to offer the user two options: beer or lemonade. Instead of a grid with items to select, we opted for a more engaging purchase flow, while still keeping it really simple. Swipe to switch between drinks, tap to select and confirm the order. Et voilà! SmartBar Goes Live So once the kegs were cold and the screens were hooked up, it was time for the big reveal of the first prototype at DLD Berlin. Wirecard hosted their own space where people could mingle and network in between talks, and place an order at the SmartBar to get their drinks. So, what’s next? If you didn’t get a chance to try SmartBar at DLD, it will be set up at ITB in Berlin in March 2018 for more drinks. We also know that Wirecard is working hard to finalize its latest products, and to allow support for even more complex use cases such as product & customer behavior detection and more. So stay tuned! All set up & waiting for the crowds
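As a rough illustration of the matching step described above, here is a minimal sketch. All names, vectors, and the threshold are mine for illustration; this is not Wirecard's implementation, which is proprietary and far more sophisticated.

import numpy as np

THRESHOLD = 0.6  # assumed tolerance, chosen for illustration; real systems tune this

def match_face(live_vector, enrolled):
    # enrolled maps a user id to the landmark-proportion vector from their selfie
    best_user, best_dist = None, float("inf")
    for user_id, stored_vector in enrolled.items():
        dist = np.linalg.norm(np.asarray(live_vector) - np.asarray(stored_vector))
        if dist < best_dist:
            best_user, best_dist = user_id, dist
    # A small distance means the face in the frame plausibly matches the stored selfie
    return best_user if best_dist < THRESHOLD else None

enrolled = {"alice": [0.42, 1.31, 0.77], "bob": [0.55, 1.10, 0.91]}
print(match_face([0.43, 1.30, 0.75], enrolled))  # -> alice
print(match_face([0.90, 0.40, 0.20], enrolled))  # -> None (no registered match)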
Started From The Bottom, Now There’s Beer
726
started-from-the-bottom-now-theres-beer-15ff4e2a4be4
2018-10-29
2018-10-29 14:38:52
https://medium.com/s/story/started-from-the-bottom-now-theres-beer-15ff4e2a4be4
false
1,108
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
null
null
null
The Startup
null
swlh
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
thestartup_
Machine Learning
machine-learning
Machine Learning
51,320
YND
Product Agency and Startup Studio based in Berlin. We work with FinTech, Wearables, Virtual Reality and Machine Intelligence.
17da4aff9cc6
ynd
165
64
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-19
2018-02-19 12:50:52
2018-02-19
2018-02-19 15:39:29
2
false
en
2018-02-19
2018-02-19 15:39:29
8
15ffe259017c
1.530503
11
0
0
It’s a big day at PCL HQ. After the successful culmination of the ICO phase of the crowdsale on 24th January, we announce our first official…
5
PECULIUM: Exchange Listing Announcement It’s a big day at PCL HQ. After the successful culmination of the ICO phase of the crowdsale on 24th January, we announce our first official listing on HitBTC. As of now, trading on HitBTC has officially started. Today, we are proud to announce our first official listing on the HitBTC exchange. https://hitbtc.com/exchange/PCL-to-BTC https://hitbtc.com/exchange/PCL-to-ETH IMPORTANT: UPDATE PCL BEFORE SENDING TO EXCHANGES The UPDATE is MANDATORY Yes, it is mandatory even if you have defrosted Sending old PCL to HitBTC will result in LOSS of PCL Guides to update PCL are at the end of the article HitBTC has been operating as a major exchange since early 2014. It has served the crypto community well over the years. By trading volume, HitBTC has been consistently among the top 10 exchanges for a long time. With the amazing support of the community, we successfully raised funds which allow us to begin working towards the long-term roadmap of Project Peculium. We are on a journey to merge the traditional economy and the blockchain technology. Our goal is to bring blockchain to the masses, provide stability to cryptocurrency markets, and empower the savings economy with the benefits of the blockchain technology. We aspire to change the world and create a fair and transparent financial system. We wholeheartedly appreciate every member of our community, the Peculium Family, in supporting our effort to realize a better tomorrow no matter who or where you are. Thank you so much. Join the conversation anytime on Telegram, Twitter, Bitcointalk, and Reddit. Guide to update PCL Token Update Guide (PCL) Metamask edition Token Update Guide (PCL) MEW or mycrypto.com edition Token Update Guide (PCL) MEW or mycrypto.com Video edition Do not send your old PCL, update them first; you may lose them if you don’t!
PECULIUM: Exchange Listing Announcement
244
peculium-exchange-listing-announcement-15ffe259017c
2018-05-07
2018-05-07 13:28:08
https://medium.com/s/story/peculium-exchange-listing-announcement-15ffe259017c
false
304
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Peculium
The first Savings Platform powered by Smart-Contracts & Artificial Intelligence giving you peace of mind while investing
827c278cc4df
Peculium
581
20
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-03
2018-02-03 17:35:33
2018-02-03
2018-02-03 17:36:45
9
false
pt
2018-02-03
2018-02-03 17:36:45
25
16018cb5efe5
14.245283
0
0
0
“At my signal, unleash hell,” whispered General Maximus as he led the Roman army in the opening scene of the epic…
4
Customer Segmentation: using data to define the best go-to-market strategy for B2B SaaS companies “At my signal, unleash hell,” whispered General Maximus as he led the Roman army in the opening scene of Ridley Scott’s 2000 epic, Gladiator. Maximus then left the front line to circumnavigate the entire battlefield and meet the cavalry waiting behind the enemy. When ready, they fired a flaming arrow into the sky, instructing the Roman legion to begin its attack. The army then fired arrows and launched catapults to damage the enemy before its front line began its advance in perfect formation. Facing the power of Rome, the army of Germania entered hand-to-hand combat but soon found itself surrounded. A company’s go-to-market strategy is in many ways similar to the Roman army’s battle strategy; a well-thought-out plan of attack will considerably increase your chances of survival and, ultimately, victory. For companies, put in its most simplistic form, it is about clearly defining who you will sell to and how you will sell to them. One of the biggest mistakes a company can make is thinking it can sell to everyone, which ultimately means it will sell to no one. In short, your go-to-market strategy is the important decision of who you will sell to and how. The strategy behind this article was born when I came across a very interesting Kaggle dataset containing information about marketing and sales teams from several industries, along with their sales strategies, channels, customers, and more. From there I began the analysis, focusing on the wins and losses of B2B SaaS companies and finding the best solutions for selling to each type of customer, using data clustering and game theory. Whether you are at an early stage or even at an established company, this article will confirm that you are strategically well positioned or signal that you are in a position of danger and, more importantly, will show what you can do to change your strategy and avoid failure. The Product–Market Fit Fallacy Let’s start with the concept with which many product managers begin their discussions: product–market fit. The problem is that when we think this way, we look at the question from the wrong perspective. Brian Balfour, CEO of Reforge and one of the leading SaaS minds, is leading this mindset shift; according to him, we need to approach it the opposite way, which is the market fitting the product. By swapping the position of the words, we are building companies that start with a problem (i.e., the market) rather than a solution (i.e., the product). The old way of seeing a product that serves a market is too simplistic, and Balfour argues that we need to look at a larger ecosystem, one that includes channels (how you acquire new customers) and models (business model and pricing). Like Balfour’s thoughts on product–market fit, the pillars of the analysis in this article are also based on a theory previously developed by the Frenchman Guillaume Lerouge about market strategies for B2B companies.
However, here I intend to present a more holistic model to help companies identify whether they are in a position of strength or vulnerability, plus ways to transition from a confused strategic positioning to a more sustainable and successful strategy, all based on a deep time-series analysis. The Attack Matrix Let's start with a matrix we will call the attack matrix, a framework for mapping our plan in the market. It is quite simple: just draw a 3x3 table combining your possible sales strategies with the possible types of target companies (a compact code sketch of the full matrix appears after the scenario walkthrough below). The first question you need to ask is: whom am I trying to sell to? You essentially have three options when it comes to B2B sales: Small and medium-sized businesses (SMBs) Middle Market (EPP) Large Enterprises Then, second, you need to ask: how do you plan to sell to them? Once again, in B2B you essentially have three options: No/Low-Touch (Self-Service) Medium-Touch (Inside Sales / Partnerships) High-Touch (Field Sales and direct relationships) Together, these yield multiple combinations of sales strategies and target companies, from which I identified seven different scenarios we can operate in. As you will discover, some of them have higher chances of success, and some others have a very high chance of failure. Let's start with the GOOD scenarios. 1) Deceptively Easy Sales strategy = No/Low touch Target company = SMBs Average revenue per customer = Low Customer acquisition cost = Low Volume = High Scenario overview… Don't be fooled: this scenario is not as easy as it looks. However, if you can execute a highly automated no/low-touch sales process well, one that lets your SMB customers and even some middle-market ones buy via self-service, you will have solid armament for your sales. In this case, your product itself needs to be your best representative. Moreover, you must ensure that your customers can quickly extract value from your offering, while also enabling them to pass along referrals to raise your viral coefficient; a service that benefits from network effects will also increase your chances of success. This viral cycle can be supported by inbound marketing, PPC, SEO, automation, social media, and of course a highly efficient growth-hacking funnel. You will have a better chance of success by concentrating on a high volume of deals targeting SMBs, and if you choose to also target middle-market companies, which is a great idea, be prepared to borrow some tactics from the medium-touch sales playbook (which we will see shortly), since this will require human support beyond the self-service model. Killer tactics: Boost your product's viral coefficient and invest in the network effect; Opt for the freemium model; Have a highly efficient onboarding process; Have a value proposition that speaks to many industries; Invest in inbound marketing; Automate everything (really, everything). Companies with a high success rate in this scenario: Typeform Mention Hubstaff 2) Climbing Everest Sales strategy = High touch Target company = Large enterprises Average revenue per customer = High Customer acquisition cost = High Volume = Low Scenario overview… Succeeding here is a long and challenging mission, but there are great rewards waiting for those who manage it. 
First of all, an off-the-shelf product or tool will not help you. This means you need a highly customizable offering that can be adapted to meet each customer's exact needs. You will need to identify the decision-makers within the target companies and concentrate all of your marketing efforts on them (note the use of the plural!). That means a tightly aligned marketing and sales unit, capable of supplying highly qualified leads so that your sales and account managers can follow up and convert opportunities into results. Killer tactics: Identify the purchasing decision-makers, and recruit as ambassadors future users of the product who are close to the key decision-makers; Invest in inbound, but focus massively on outbound marketing. Yes, here good old outbound is still worth gold; Invest in building long-term relationships based on trust; Patience and persistence. Companies with a high success rate in this scenario: Salesforce Workday Bynder 3) The Middle Sales strategy = Medium-touch Target company = Middle market Average revenue per customer = Medium Customer acquisition cost = Medium Volume = Medium Scenario overview… The middle scenario gets its name not only because it sits at the center of the matrix, but also because it requires a careful balance of all the components; you need a mid-range sales strategy to target middle-market companies that will yield mid-level revenue at a mid-level cost. One of the keys to success is developing a growth-hacking funnel that combines the right amount of automation (no overdoing it) with a helping hand from your inside-sales team, plus, of course, a history of success with other customers; case studies matter here too. The middle scenario is a viable strategy, but you need to make sure there is a product fit for your offering, otherwise you risk getting lost. Killer tactics: Have a hybrid sales strategy that can offer both self-service and human support from your teams (sales and support); Have an automated onboarding process with a scalable, comprehensive solution; Always offer the good old free trial, for as long as the customer needs to get the necessary internal approvals; Invest in inbound marketing (yes, again); Build a great inside-sales team; Work with partnerships. Companies with a high success rate in this scenario: Atlassian MOZ Shopify Now we continue with the not-so-good scenarios… 4) Mission Impossible Sales strategy = No/low or medium touch Target company = Large enterprises Revenue per customer = High Cost per customer = Low or medium Scenario overview… If you are trying to sell to large enterprises through a no/low or medium-touch sales process, you are in a nearly impossible scenario. It is the dream scenario, since you would have high-revenue customers at a very low cost. Exceptions aside, if it were possible, everyone would be doing it. The problem is that large enterprises have long, slow purchasing processes that demand relationship building, discussions with multiple people, and, in general, a lot of human interaction and, above all, flexibility. 
You could have a product that meets a need within the large-enterprise market, and thus you would have product–market fit; but if we return to the earlier point about market–product fit, then it does not exist. The reason is that your ecosystem is not in balance. This scenario is called the impossible scenario for a reason, so the longer you pursue this strategy, the slower and more painful the failure will be. How can I escape Mission Impossible? Not all hope is lost… The easiest way out is to move into the “deceptively easy” or “middle” scenario, since you can keep your original sales strategy and make small adjustments to the channels. But you will need to change your focus drastically, to at least target middle-market companies. Your product and pricing must also be in harmony with this new scenario, since large-enterprise pricing will not work for the SMB and middle market. If you are already applying middle-market sales strategies to sell to large enterprises, then you have two options. First, you can shift your focus to middle-market customers, apply the same sales strategies, and embark on the “middle” scenario. Or, second, you could invest more, which would mean changing your product offering to a solution with a high degree of customization, as described in the “Climbing Everest” scenario. In short, you have three possible ways to make sure you are building a SaaS company with a much higher chance of success; you just need to decide which path you will follow. 5) No Man's Land Sales strategy / target companies = medium touch for SMBs or high touch for the middle market Revenue per customer = Medium to high Cost per customer = Medium to high Scenario overview… It is called “no man's land” because nobody is here. It is like the barren desert of failure, since the sales strategies are badly aligned with the target organizations. First, focusing on SMBs with anything more than a self-service model is not a sustainable business. The volume of accounts you would need is too high to support any medium-touch sales process. Second, a high-touch sales approach for the middle market is unnecessary. Besides, the costs of building and maintaining a five-star sales team will outweigh the revenue generated, since the revenue needed to cover that team's costs is certainly greater than the revenue the same team will manage to generate (like a circular reference in Excel). Your market–product fit ecosystem is out of balance, since the customer acquisition channels do not match the market you are focusing on. This, in turn, will affect your pricing, which means that even if your product is turning a profit, it will be priced outside the market average. If you are in this scenario, you need to get out. Quickly. Broadly speaking, there are two ways… First, you can lower your price and adjust your sales strategy to focus on SMBs via self-service. Second, you can shift your focus to the middle market and keep your medium-touch sales processes to earn mid-volume revenues. That was the exact challenge Brian Balfour faced while working at HubSpot, a classic “middle” scenario. 
Brian and his team were responsible for launching Sidekick, an add-on to a sales tool, but it was launched via a freemium model with a very low entry price. As a result, they ended up attracting many low-revenue SMBs and serving them with a medium-touch sales process, which was not in line with HubSpot's target of mid-revenue middle-market customers. Their ecosystem was out of balance and, as a result, they got stuck in “no man's land.” To fix the problem, they chose to move toward the “middle” scenario by discontinuing the cheapest solution and strengthening the feature set of the mid-priced product. This change allowed HubSpot to grow Sidekick's recurring revenue from $0M to $10 million in two years. If you are in the scenario of a high-touch sales strategy serving middle-market companies, you could, first, move toward the “middle” scenario and change your sales strategy from high touch to medium touch. The other option would be to shift your focus to large enterprises, as in the HubSpot example, that is, keeping your sales strategy while changing the target companies. The trade-off of developing a more customizable product to reach higher average revenues is probably better than the reverse, since your five-star team will already be assembled to “climb Everest.” 6) Suicide And so we reach our penultimate scenario, which is not pretty at all. Sales strategy = High touch Target companies = SMBs Average revenue per customer = Low Cost per customer = High Scenario overview… If you are selling your product to startups and small businesses with a high-touch sales process, you are going to kill your company, and that is precisely why this is called the “suicide” scenario. Your ecosystem is out of balance, since your sales strategy and target companies are totally misaligned, which will lead to chaos when it comes to channels, pricing, and business model. So which do you satisfy: the needs of the high-touch sales process, or those of a low-revenue customer? The point here is that you must abandon this scenario immediately and move to a new position within the matrix. All the escape routes up to this point required only one degree of flexing; by that I mean you could move to a position of safety by changing either your target companies or your sales strategy. But to escape this scenario, you need to cross “no man's land,” which gives you three possible alternatives, each demanding two degrees of flexibility in your strategy. The first is to completely change your sales strategy to target SMBs and join the “deceptively easy” scenario. That means automating everything, lowering your price, and seeking a high volume of low-revenue users, which you will need to win while keeping costs per customer low. The second is to completely shift your focus to large enterprises and modify your products and pricing accordingly, to proceed with high-touch sales efforts. And the third is to move in both directions and focus on the middle market, with a change in the sales force that will rely on automation and sophisticated inside-sales practices. You will also need to change your pricing and models, just as Balfour did at HubSpot; despite the high effort, the “middle” scenario is one of your possible alternatives. 
7) Carpet Bombing At the beginning of this article, I said that one of the biggest mistakes a SaaS company can make is to think it can sell the same solution to everyone. That is exactly the point demonstrated by the final scenario. Carpet bombing is a military tactic used to inflict damage across a large area with no specific target in mind. The same analogy applies to SaaS companies that have failed to clearly define a specific target; they will try to sell to anyone, which ultimately leads to selling to no one. As we have discovered, different markets demand different products, which demand different channels, which, in turn, demand different prices and business models. All the components need to sing in perfect harmony, otherwise you will lead your team on a mission doomed to failure. If you find yourself in a “carpet bombing” scenario, you need to start by finding a market problem and positioning your product as a solution to that problem. This will establish your position in the matrix, which, in turn, will let you define the right channels, models, and growth tactics to ensure a balanced ecosystem. Note that this article also serves as a way of mapping your competitors' positions, allowing you to differentiate your offering. Let's take a small example from the field of marketing automation products. GetResponse, HubSpot, and Marketo provide similar solutions to the same problem: an automation platform for salespeople and marketing teams to build their funnels and sales pipelines. However, even though the three companies have practically the same product, they have all staked out different kinds of markets. This means each of the three occupies a different place within the attack matrix. GetResponse began life as an email marketing platform, but they expanded their offering to include basic automation and inbound marketing functionality. They operate a highly automated no/low-touch sales model with a low average revenue per customer and a low cost per customer. They operate in a “deceptively easy” scenario. HubSpot has a more comprehensive growth solution encompassing marketing, sales, and CRM. They use a combination of freemium, free-trial, and automated models, with a strong medium-touch sales unit composed of account managers, channel managers, and inside sales reps, plus an extensive roster of partners. Their pricing aims at a medium average revenue with a matching cost per customer. In other words, they operate in the “middle” scenario. Marketo is the third player in this example, and they built a solution for large, complex customers. There is no price list on their website (contact them if you want a quote), and their channels and business model align with that product fit. Dollar for dollar, they are by far the most “expensive” option, but that is because they are aiming to “climb Everest.” As we can see, the three can coexist since they are executing different missions, despite solving the same basic problem. And so ends our quick conversation; I hope this article brings you some important insights. Although this is a quick analysis of possible strategies for B2B SaaS companies, many of the results shown here seem applicable to other segments as well. 
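To recap the walkthrough above, here is a minimal, hypothetical Python sketch of the attack matrix as a lookup table. The segment labels and the cell-to-scenario assignments are illustrative only and follow my reading of the seven scenarios; they are not part of the original analysis.

import itertools

# Hypothetical 3x3 attack matrix: (sales strategy, target segment) -> scenario.
ATTACK_MATRIX = {
    ("no/low-touch", "SMB"):        "Deceptively Easy",
    ("no/low-touch", "mid-market"): "Deceptively Easy (borrowing medium-touch tactics)",
    ("no/low-touch", "enterprise"): "Mission Impossible",
    ("medium-touch", "SMB"):        "No Man's Land",
    ("medium-touch", "mid-market"): "The Middle",
    ("medium-touch", "enterprise"): "Mission Impossible",
    ("high-touch",   "SMB"):        "Suicide",
    ("high-touch",   "mid-market"): "No Man's Land",
    ("high-touch",   "enterprise"): "Climbing Everest",
}

# Selling the same thing to every cell at once is the seventh scenario:
# "Carpet Bombing", i.e. holding no position in the matrix at all.
for strategy, segment in itertools.product(
        ["no/low-touch", "medium-touch", "high-touch"],
        ["SMB", "mid-market", "enterprise"]):
    print(f"{strategy:13s} x {segment:10s} -> {ATTACK_MATRIX[(strategy, segment)]}")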
I'd also like to say that I am open to conversations on this topic, and to hearing from you about possible flaws in, or exceptions to, the model; after all, the goal here is not to present an absolute truth.
Customer Segmentation: using data to define the best Go-to-Market strategy for companies…
0
segmentação-de-clientes-usando-dados-para-definir-a-melhor-estratégia-go-to-market-de-empresas-16018cb5efe5
2018-02-03
2018-02-03 17:36:47
https://medium.com/s/story/segmentação-de-clientes-usando-dados-para-definir-a-melhor-estratégia-go-to-market-de-empresas-16018cb5efe5
false
3,457
null
null
null
null
null
null
null
null
null
B2B
b2b
B2B
5,142
Raquel Deneige
An economist in love with data science, artificial intelligence and digital. DataAnalysis@MIT, Innovation@Stanford, Economy@ToulouseBS
3b222162223d
raqueldeneige
104
121
20,181,104
null
null
null
null
null
null
0
null
0
5f1816abe091
2018-04-21
2018-04-21 08:44:00
2018-04-22
2018-04-22 11:06:00
1
false
it
2018-04-22
2018-04-22 12:26:03
4
16021244d2a2
1.690566
0
0
0
A machine-learning-based 3D graphics reconstruction software can create a virtual “avatar” of you.
5
An artificial intelligence can create a 3D model of a person from a few seconds of video A machine-learning-based 3D graphics reconstruction software can create a virtual “avatar” of you. (Credit: Science) Transporting yourself into a video game, your body and all your physical characteristics, has just become simpler. Artificial intelligence has been used to create 3D models of the human body for virtual reality avatars, with applications in surveillance, fashion, and film. Typically, however, special equipment is required to sense depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from a single angle. The system works in three stages. First, it analyzes a few seconds of video in which someone is moving, preferably rotating 360° to show every side, and for each frame it creates a silhouette separating the person from the background. Based on machine learning techniques, through which computers learn a task from many examples, it roughly estimates the 3D body shape and the positions of the joints. In the second stage, it moves the virtual human created from each frame into a standing T-pose, and combines the information about the person in this pose into a single, more accurate model. Finally, in the third stage, it applies color and texture to the model based on the recorded hair color, clothing, and skin. The researchers tested the method with a variety of body shapes, clothes, and backgrounds and found they could achieve an average accuracy within 5 millimeters, as they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can even reproduce the folds and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose, and even have you perform a perfect pirouette. You will need no practice at all. The video from Science: Translated into Italian. Original article: Science VISIONARI is a non-profit association that promotes the responsible use of science and technology for the improvement of society. To become a member, take part in our events and activities, or make a donation, visit: https://visionari.org Follow us on our Facebook page to discover new innovative projects: VISIONARI
An artificial intelligence can create a 3D model of a person from a few seconds of…
0
unintelligenza-artificiale-è-in-grado-di-creare-un-modello-3d-di-una-persona-da-pochi-secondi-di-16021244d2a2
2018-04-22
2018-04-22 12:26:04
https://medium.com/s/story/unintelligenza-artificiale-è-in-grado-di-creare-un-modello-3d-di-una-persona-da-pochi-secondi-di-16021244d2a2
false
395
Thinking and acting outside the box
null
VISIONARIORG
null
VISIONARI | Scienza e tecnologia al servizio delle persone
staff@visionari.org
visionari
TECNOLOGIA,FUTURO,SCIENZA,VISIONARI
federicopistono
Digitalizzazione
digitalizzazione
Digitalizzazione
41
ad astra
To become a member, take part in our events and activities, or make a donation, visit: https://visionari.org
71565016dbab
VISIONARI
229
2
20,181,104
null
null
null
null
null
null
0
null
0
b9c490bd1fa1
2018-06-11
2018-06-11 14:24:15
2018-06-11
2018-06-11 14:26:30
3
false
en
2018-06-11
2018-06-11 14:26:30
7
16030833f282
3.331132
1
0
0
Beer incumbents are increasingly losing market share and feeling the pressure to compete with craft brewers. Consumer tastes are constantly…
5
Beer Brands Chugging along with AI Psychographics Beer incumbents are increasingly losing market share and feeling the pressure to compete with craft brewers. Consumer tastes are constantly changing, and companies must evolve to stay competitive. The rise of the millennial consumer challenges the market: their health-conscious, brand-promiscuous behavior makes creating new products and finding loyal drinkers harder than ever. Implementing an AI strategy is the best bet beer companies can make to differentiate themselves from the competition and increase top-line growth. To give a couple of examples: London-based beer startup IntelligentX leverages AI to analyze customer feedback and develop the next batch of beer. Carlsberg uses AI in a taste-sensor platform to quickly distinguish between different flavors developed in their laboratory. One innovative application of AI is psychographics. With psychographics, we can learn to predict consumer behavior trends and influence purchases by profiling customer personalities. According to one theory, there are five main traits that describe a person's personality: openness, conscientiousness, extraversion, agreeableness, and neuroticism. If you know your target market's personality, combined with demographic and geographic data, you can create nuanced and personalized messages that resonate more strongly with them. You can measure openness and agreeableness, then correlate them with other factors to predict future customers. Psychographic data such as lifestyle data, consumer data, and buying patterns correlate closely with personality data and guide us in swaying the consumer. This is a step up from simply analyzing consumer purchasing history and predicting what they will buy next. The path to psychographic segmentation is to create a digital consumer footprint, which is produced by algorithms trained on millions of data points such as social media profiles and psychometric test scores. These capabilities are now available commercially; they go beyond demographics and enable identifying specific behavioral patterns and segmenting according to preferences and personalities. If this sounds familiar, it's because some companies have misused this technique. As we wrote about previously, Cambridge Analytica is known for inappropriately utilizing psychographics to influence elections. At KUNGFU.AI, we stress our commitment to doing AI for Good and working ethically. Using individual psychographics for marketing may sound like a revolutionary idea, but how would it actually apply to the beer business? Let's take the example of hyper-targeted marketing. Suppose you have a beer company that's launching a new IPA and you need help deciding where to focus your marketing efforts. You would obviously want to identify beer drinkers who have a high degree of openness and are willing to try new things. However, you can dig a little deeper. A good approach would be to identify the personality traits of the people who are on the ledge about drinking a new beer. These people don't usually try new things, but aren't completely opposed to the experience. By analyzing psychographic metrics such as lifestyle data and buying patterns, you might tailor a marketing message that will sway these consumers toward your new offering; maybe they are passionate about protecting the environment or have certain political tendencies. Marketing efforts could take these into account and place the product in the appropriate channels with relevant messages. 
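As a purely illustrative sketch of this targeting idea, not any vendor's actual model: score a handful of hypothetical drinkers on two Big Five traits (openness and agreeableness, each on a 0-1 scale) and fit a simple propensity model with scikit-learn. All of the data below is made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [openness, agreeableness]; label: 1 = tried a newly launched beer.
X = np.array([[0.9, 0.4], [0.8, 0.7], [0.2, 0.5], [0.3, 0.9],
              [0.7, 0.2], [0.1, 0.1], [0.6, 0.8], [0.4, 0.3]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# "On the ledge" consumers: mid-range openness, worth a tailored message.
prospects = np.array([[0.50, 0.60], [0.45, 0.30]])
print(model.predict_proba(prospects)[:, 1])  # estimated propensity to try the IPA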
Psychographics can help beer companies answer several other questions: What are the target customers' main lifestyle patterns? What type of beer would be attractive to a customer segment you wish to penetrate? What other customer segments would be beneficial to target? Are there other brands/companies you can partner with to increase engagement and revenue? There are many other ways to implement AI in the beer industry and other consumer product markets. From supply chain and manufacturing to pricing, AI can help organizations improve capital efficiency, uncover hidden trends, and develop a data-driven culture. Creating an AI strategy is important to realizing the full potential of AI in your organization. This involves answering several questions: What type of data should I be looking at? How should I start storing my data so that it can be used for AI? Should I hire an in-house AI team or work with an external team? Overwhelmed? KUNGFU.AI can help you answer these questions and guide you through every step, from data to transformation. We also have AI Executive Education offerings to help you start building a data mindset in your organization. KUNGFU.AI is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions. Check us out at www.kungfu.ai
Beer Brands Chugging along with AI Psychographics
8
beer-brands-chugging-along-with-ai-psychographics-16030833f282
2018-06-11
2018-06-11 14:26:32
https://medium.com/s/story/beer-brands-chugging-along-with-ai-psychographics-16030833f282
false
737
KUNGFU.AI is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions.
null
null
null
KUNGFU.AI
null
kung-fu
ARTIFICIAL INTELLIGENCE,AUSTIN TEXAS,CONSULTING,DATA SCIENCE,AI
kungfuai
Beer
beer
Beer
5,723
Steve Meier
14 year technology pro. AI enthusiast and Co-founder @ KUNGFU.AI
33498ee9d6c5
steve_53806
11
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-11
2017-10-11 01:05:53
2017-10-11
2017-10-11 01:32:12
0
false
en
2018-08-06
2018-08-06 00:25:22
6
1603fbf9d3b1
0.441509
0
0
0
Obsessed with web episodes? Machine Learning Audio Episodes, here
5
Data Science Meta-guide Obsessed with web episodes? Machine Learning Audio Episodes, here Cheat sheets for anything and everything in Machine Learning Many e-books can be found here for FREE! One good book to start with: Introduction to Statistical Learning The Deep Learning book, which covers the necessary Linear Algebra, applications, and sample code in Computer Vision Is there an IDE for Data Science? Jupyter Lab review Need guidance for your first project?! Go through the Kaggle kernels Those are my go-to links so far, off the top of my head, that have helped me at one point or another since I started pursuing Data Science, which has been a year or more now :D B)
Data Science Meta-guide
0
links-to-useful-information-1603fbf9d3b1
2018-08-06
2018-08-06 00:25:22
https://medium.com/s/story/links-to-useful-information-1603fbf9d3b1
false
117
null
null
null
null
null
null
null
null
null
Miscellanous
miscellanous
Miscellanous
87
Sahiti Korrapati
Data science rookie
8a8422deb260
sahitikorrapati
55
59
20,181,104
null
null
null
null
null
null
0
null
0
795679d08add
2018-01-27
2018-01-27 18:15:17
2018-01-27
2018-01-27 18:19:53
1
false
en
2018-01-27
2018-01-27 18:19:53
0
160440f01b1c
1.279245
0
0
0
The list continues on the technologies that are redefining today’s business. Organizations, in some shape or form, have taken a plunge into…
5
AI, RPA, NLP, ML, DS, NLU, NLG, Bots…. The list of technologies redefining today's business goes on. Organizations, in some shape or form, have taken the plunge into the journey of AI and automation. However, the question remains: is there a clear understanding of these technologies, their relationships with one another, and the marketplace for them? To complicate matters further, there is considerable development in the areas AI depends on. For organizations to drive toward true transformation, it is important to have a holistic AI and automation strategy covering the dependencies below. Storage — Cloud, Object Storage Compute — Data Lakes, Serverless compute… Data & information management — Batch, Stream, IoT… Architecture models — Domain-Driven Design, Containers, Docker, Kubernetes, API Gateways and Orchestration Delivery models — Design Thinking, Agile, MVP, Lean CI/CD — Jenkins, Artifactory, TFS… In the interest of simplicity, let us reserve the integration of these different dimensions with AI and automation for the next series. In this article, we will attempt to simplify AI and its components. Simply put, AI is the ability of a machine to imitate intelligent human behavior. Intelligent is the KEY. Intelligence in humans is built around the ability to Observe -> Interpret -> Evaluate -> Decide -> Act. Here is a look at how a machine leverages these technologies to imitate human behaviors… intelligently, with the intelligence provided by (appropriately trained) machine learning algorithms. Observe: the ability to visualize and recognize sound: machine vision and voice recognition (NLU) Interpret: understanding language and images: Natural Language Processing (NLP) & Convolutional Neural Networks (CNN) Evaluate & Decide (Brain): classification, clustering…: machine learning, artificial neural networks Act: Natural Language Generation (voice response), automation, RPA, bots
AI, RPA, NLP, ML, DS, NLU, NLG, Bots….
0
ai-rpa-nlp-ml-ds-nlu-nlg-bots-160440f01b1c
2018-01-27
2018-01-27 18:19:54
https://medium.com/s/story/ai-rpa-nlp-ml-ds-nlu-nlg-bots-160440f01b1c
false
286
a beginners mind
null
null
null
Shoshin-labs
null
shoshin-labs
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Meenakshisundaram Thandavarayan
null
b6d7a68b9169
meenakshisundaramthandavarayan
3
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-05
2017-12-05 07:44:19
2017-12-05
2017-12-05 07:45:06
12
false
en
2018-01-07
2018-01-07 05:41:57
5
16060beeaef6
6.791509
3
0
0
Before we can get to model-based reinforcement learning, we will need to formalize some reinforcement learning concepts in mathematics.
4
Model-based Reinforcement Learning Part 3: RL Formalism Before we can get to model-based reinforcement learning, we will need to formalize some reinforcement learning concepts in mathematics. Reinforcement Learning, Formally Reinforcement learning is often used to solve Markov Decision Processes, or problems in environments which follow the Markov Property, which we saw in Part One defined as: problems that can be formulated as having the next state depend only on the current state and current action. At every timestep in the environment, we are posed with a decision: given our state, what action should we take? After taking an action, we get a reward, and a next state as well. At each timestep, we gather experience, which can be thought of as a tuple. Experience Tuple Now, assuming that we have a lot of these experience tuples, we want to solve a simple goal: maximize the expected reward over time. Mathematically, this is written as follows. Goal of Reinforcement Learning, formally Here, we are summing over all timesteps, from the initial time to infinity. Inside the summation, we have a term that includes a gamma (this is called a discount factor, but if we set it equal to one, the term goes away, which is exactly what we will do!) as well as a reward function, which basically formalizes the above paragraph (what reward will I get if I take this action in this state). Our goal for the MDP is to maximize the reward over time, and now that we have introduced our experience tuples, we can learn how to do just that. Q Learning In order to maximize this reward, we are going to first look at a popular algorithm called Q-Learning, a reinforcement learning algorithm which learns the action-value function, or the value of taking a particular action in a given state. Our Q function has two inputs, a state and an action that we would like to evaluate, and it returns the real-valued expected reward of that action (and all other actions after it; we'll see how this is incorporated below). Intuitively, it tells us how good a particular action is at a particular state. Q-value function: given a particular state, tells how good a particular action is in terms of reward. From Wikipedia: https://en.wikipedia.org/wiki/Q-learning We can learn Q-functions by starting with the same Q value for every action in every state. As we explore our environment, we can update our Q estimate using the equation seen above, which is basically a weighted blend of our old estimate and the reward that we get by taking the action plus the estimate of the optimal future value (this is what we mean by all subsequent future actions). Now, using this simple formula, we can start to operate in our environment, gain experience, and then update our estimates with the formula above. But how can we pick actions? We can use our Q values! By picking our actions according to a simple formula most of the time, we can actually learn good estimates of each action in each state. Our simple formula is to pick the best action in any state according to our estimate. But there's a problem: what if our estimates are wrong? If we only pick the best action, we will never see anything else and never know whether that action truly was the best or not, or in reinforcement learning terms: our agent is not able to explore with a greedy policy. To fix our exploration problem, we introduce a small number called epsilon. Epsilon, a user-set number, is a small number which denotes the probability of taking a random action in any state. This means that in a state, with probability one minus epsilon, we take the best estimated action, and with probability epsilon, we pick randomly. What we have just defined is an implicit policy, or a policy which is derived from a state-value or action-value function.
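Putting these pieces together (the update rule, greedy action selection, and the epsilon trick just introduced), here is a minimal Python sketch of tabular Q-learning on a toy chain environment. The environment, constants, and names below are hypothetical illustrations, not anything from the original post.

import random

N_STATES, ACTIONS = 5, [0, 1]          # states 0..4; action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 1.0, 0.1  # learning rate, discount, exploration

# Start with the same Q value for every action in every state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy transitions: walk the chain; reward 1.0 on reaching the last state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Best estimated action, with random tie-breaking."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: explore with probability epsilon, exploit otherwise.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward = step(state, action)
        # Weighted blend of the old estimate and reward plus optimal future value.
        target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

print(Q)  # Q-values for moving right should dominate along the chain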
Remember, a policy is a mapping from states to actions, or basically a function that tells us how to act in each state. Before we continue on to model-based reinforcement learning, we are going to touch upon another way of defining a policy: by learning it. Policy Learning As we saw, a policy helps us figure out what actions to take in what states. Using some math called policy gradients, we can use our experience to directly optimize our policy. Policy learning methods can have better convergence properties than their implicit-policy counterparts, and are effective in high-dimensional or continuous action spaces (hence their widespread use in robotics). A policy can be any function and, like any function, it can be represented (or parameterized) by a set of parameters. In our examples, the parameters will be called theta, and the function will be a neural network. If that doesn't make too much sense, take a look at this link. And then, this one. Our policy will tell us what action to take in a state, or more precisely, the probabilities associated with each action for an input state. Policy pi, parameterized by theta Our goal in policy learning is, given a policy parameterized by theta, to find the best theta. "Best theta" is a vague phrase, but basically, we can formulate policy search as an optimization problem and then use gradient ascent to maximize it. Formally, we define a policy objective function, J, which is the expected return given by following a policy pi parameterized by theta. Policy Objective The first term is the stationary distribution of states, or the distribution of states in the environment. The second term is the policy, and the third term is the reward. The policy gradient, seen below, maximizes this function by searching for a local maximum and ascending the gradient with respect to the parameters. From David Silver's Lecture on the subject Well, What's the Problem? It seems that the problem should be solved; both Q Learning and Policy Learning seem to offer solutions to finding a good policy in any environment, so why do we need model-based reinforcement learning? The problem stems from the equations listed for both learning methods. Among other things, both methods are extremely sample-inefficient, and take a long time to converge to a good policy. On a physical robot, this could mean exploring with random actions for quite some time before any meaningful reward is achieved from which we can start learning. On real systems, this kind of uncertainty can be expensive, and is almost never tolerated. Maybe an untrained neural network, although it does seem to know what it wants to do…
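Before moving on to models, here is a minimal REINFORCE-style sketch making the policy-gradient update above concrete, on a toy two-armed bandit with a softmax policy parameterized by theta. The bandit, its reward means, and the learning rate are made-up assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                 # policy parameters, one per action
true_means = np.array([0.2, 0.8])   # hidden expected reward of each arm
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(2000):
    probs = softmax(theta)                        # pi_theta(a)
    action = rng.choice(2, p=probs)
    reward = rng.normal(true_means[action], 0.1)  # sampled return G
    # Gradient of log pi for a softmax policy: one-hot(action) - probs.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta += lr * reward * grad_log_pi            # ascend the policy gradient

print(softmax(theta))  # probability mass should concentrate on the better arm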
Model-based reinforcement learning offers the ability to model the transition function between states. This way, we can use the model internally to estimate the return of taking actions and optimize our policy using the simulated experience. Then, we can run a better (and safer) policy (or often, another experience gatherer; we'll see what this means in Part 4) with which to gather experience tuples, better model our environment, and simulate experience to optimize our policy. Model Based Reinforcement Learning Now that we see the benefits of model-based reinforcement learning, how can we do it? Luckily, we still have our experience tuples, which make a lot of things much easier. Note, our example in this post will concern the tabular case, but we will soon see how these ideas can generalize to problems where tabular solutions cannot be applied. If we want to use model-based Q-Learning, we can do so fairly easily. In our tabular setting, we can remodel our Q-Learning problem like so: here, the probability function inside the summation will act as our model, or the probability of moving to another state after taking an action in a starting state. If we set gamma to one again, we can see that the Q-value of any state is the reward we'd get by taking an action in the previous state, plus the max Q value of the next state multiplied by the probability of actually getting to that state. In this formulation, we can learn a model pretty easily using a few update rules, similar to the ones we defined in the Q Learning section: one for the state actually transitioned to, and one for any other state. Here, given enough experience, the model-based learner and the model-free learner defined in the Q Learning section will converge to the same answer. If the value of a model is not yet clear from this simple example, keep reading. In the next post, we will learn about some model-based reinforcement learning algorithms that are applied to physical robots, where mistakes can be costly. This post is Part 3 of a series in which we will try to approach what's called Model-based Reinforcement Learning from a less-mathy perspective. Part 1: Introduction Part 2: Model-based RL Part 3: RL Formalism
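As a small appendix to this part, here is a minimal sketch of the tabular model-based idea described above: estimate transition probabilities and mean rewards from experience tuples by counting, then sweep the model-based Q equation until it settles. The toy experience list and state space are hypothetical.

from collections import defaultdict

experience = [(0, 1, 0.0, 1), (1, 1, 1.0, 2), (0, 1, 0.0, 1), (1, 0, 0.0, 0)]
states, actions, gamma = {0, 1, 2}, {0, 1}, 1.0  # gamma set to one, as in the text

counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': visit count}
rewards = defaultdict(float)                    # (s, a) -> running mean reward

for s, a, r, s2 in experience:
    counts[(s, a)][s2] += 1
    n = sum(counts[(s, a)].values())
    rewards[(s, a)] += (r - rewards[(s, a)]) / n  # incremental mean

def model_prob(s, a, s2):
    """Learned model: estimated probability of landing in s2 after (s, a)."""
    total = sum(counts[(s, a)].values())
    return counts[(s, a)][s2] / total if total else 0.0

# Sweep the model-based Q equation using the learned model.
Q = {(s, a): 0.0 for s in states for a in actions}
for _ in range(50):
    for (s, a) in Q:
        Q[(s, a)] = rewards[(s, a)] + gamma * sum(
            model_prob(s, a, s2) * max(Q[(s2, b)] for b in actions) for s2 in states)

print(Q)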
Model-based Reinforcement Learning Part 3: RL Formalism
53
model-based-reinforcement-learning-part-3-mathy-rl-and-introduction-to-lqr-16060beeaef6
2018-03-31
2018-03-31 08:33:42
https://medium.com/s/story/model-based-reinforcement-learning-part-3-mathy-rl-and-introduction-to-lqr-16060beeaef6
false
1,442
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Bhairav Mehta
RL / Robotics Researcher
896a29c3bc85
bhairavmehta95
10
5
20,181,104
null
null
null
null
null
null
0
lapply(X, FUN, …)
mapply(FUN, X, …)

nlm(f, p, ...)
nlminb(start, objective, ...)

x = read.csv(file, header = TRUE, ...) # reading into DataFrame
write.csv(x, file = "")

x = pd.read_csv(filepath_or_buffer, ...)
DataFrame.to_csv(path_or_buf=None, ...)
4
null
2018-05-15
2018-05-15 06:06:13
2018-05-15
2018-05-15 06:39:08
0
false
en
2018-05-18
2018-05-18 11:54:30
2
16078f8c5f23
1.984906
0
0
0
How the journey started
4
From R to Python (This is an in-progress blog; as of now I do not know enough R or Python to actually compare the two. I just like Python more.) How the journey started I did a bit of data analysis in R over the past three years. Worse, I did a fair bit of data analysis in SAS for six years. Last year, I had to do a project where 50% of it was already coded in Python, and the rest was up to me. Converting that 50% to R would have been a huge task for me due to my lack of knowledge in GIS and the R libraries for spatial analysis. Also, the remaining 50% of the project required nothing "complex", just general loading of data, doing calculations, writing back to disk, etc., so I decided to give Python a try and complete that remaining 50% of the project in Python. Before I talk about the pleasant surprises I had with Python, I want to mention some of the frustrations I had with R. R Frustrations Different argument order for similar functions in the same package lapply vs mapply Seriously, why do I need to remember that in lapply I have to put the list as the first argument and in mapply I have to put the function as the first argument? Well, I changed the official mapply signature a bit here, but readers who know that I have changed it already know that it doesn't matter. nlm vs nlminb vs others Both of these functions belong to the stats package and both are for non-linear minimization. Not only does the official documentation treat the same thing with different symbols, the functions also have different argument orders. Here f and objective refer to the same thing, the function to be minimized, and p and start refer to the same thing, the initial values given to the parameters. It does not end here. There are multiple packages, multiple functions, multiple wrapper functions, all of them using their own not-so-intuitive syntax. And the list goes on… The Surprises with Python (Serendipities and Enchantments) Pandas Well, it turns out that Python does not have a base data structure to deal with data tables or data frames. Instead you have a rather amusingly named library called pandas to deal with them. What the heck — if Python can do it, why not pandas? So with that thought, the journey continues. It was as easy to read and write files using pandas as with R, with one slight syntactical difference. In R In pandas The syntactical difference is that in pandas the function to_csv is implemented on the DataFrame object. I don't know which is better and why. I need to read about R classes and function types, which I do want to do but will have to wait a month, as I do not have time until 17th June, and then read about the difference between 'stand-alone functions implemented in a package' and 'functions implemented in a class' in Python.
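A minimal pandas sketch of the point being made here, with hypothetical file and column names: read_csv is a stand-alone function in the pandas package, while to_csv is a method implemented on the DataFrame class.

import pandas as pd

df = pd.read_csv("sales.csv")            # package-level function -> DataFrame
df["total"] = df["price"] * df["qty"]    # ordinary column arithmetic
df.to_csv("sales_out.csv", index=False)  # method on the DataFrame object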
From R to Python (This is an in-progress blog, as of now I do not know enough R or python to…
0
from-r-to-python-16078f8c5f23
2018-05-18
2018-05-18 11:54:31
https://medium.com/s/story/from-r-to-python-16078f8c5f23
false
526
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Gaurav Singhal
null
827dbe551f67
grvsinghal
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-05
2017-11-05 11:39:24
2017-11-06
2017-11-06 07:44:03
0
false
en
2017-11-06
2017-11-06 08:09:34
0
16080c13df66
2.143396
0
0
0
Conversational interfaces are becoming much more popular these days for finding information and performing tasks. In this article I explain…
3
Natural Language Processing & Machine Learning for Conversational Interfaces and Virtual Assistants Conversational interfaces are becoming much more popular these days for finding information and performing tasks. In this article I explain some of the technologies you can use for creating intelligent conversational interfaces. Introduction Currently, all useful and practical conversation systems are based on retrieval: we want to retrieve the most relevant item in a (limited) set of items. This set of items is usually a knowledge base in the form of a decision tree: at the start nodes we have very general topics, while in items later in this tree we place the specific information or actions. In a retrieval system, we want to find the item or items likely to be the most relevant to the user, given user input and context. Text classification In supervised text classification, we have a vector (think of it as a list with a fixed length) of samples and a vector of the classes they belong to. For a conversation, this class is usually the intent of the user. As text classification is a relatively easy problem, simple models like shallow word-based neural networks, support vector machines, or tf-idf still compare well against more complicated models like LSTMs. N-grams An n-gram represents the text as a set of sequences of length n. The most simplistic version is the unigram, or bag-of-words: each word is represented as a count in a vector. When a sentence contains word i, the vector has a 1 at index i. For each word j not in the sentence, the vector contains a zero at index j. You can probably see that the vector almost always contains many zeros, which do not contribute to the prediction. As such, a lookup table is often used as a computationally more efficient alternative. The items in this table are called embeddings or word vectors. Word vectors Distributional word vectors (word embeddings, word2vec) are used to inject information about how words are used in natural language. Word vectors are trained either with the continuous bag-of-words (CBOW) method (predicting a word given surrounding words) or the skip-gram approach (predicting the surrounding words given a word). The usefulness of the vectors for learning other models is a side effect: synonyms and similar words have a similar distribution of surrounding words (they co-occur), which means they get similar representations. When your dataset is large enough, training the word vectors while training the model may work just as well as using pre-trained word vectors. Feature extraction To retrieve some information from the user, feature extraction can be used to retrieve and save some text from the user, such as a customer ID or a flight number. For this, a set of handcrafted patterns will often be more accurate than trying to learn them using machine learning. Named-entity recognition Furthermore, named-entity recognition (NER) is used to retrieve names, brands, locations, values, etc. This allows an agent to know what someone is talking about. Computer vision Often, we would like to do more with our agent than just talk: this is where computer vision comes in. The simplest task is often just to recognize objects in an image (object recognition), but for an agent, optical character recognition (OCR) can also be used to retrieve text from a user.
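As a minimal sketch of the supervised intent classification described above, assuming scikit-learn and a tiny, made-up set of utterances and intent labels: tf-idf features feeding a linear support vector machine.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

samples = [
    "what time does my flight leave",
    "change my flight to tomorrow",
    "what is my account balance",
    "transfer money to my savings account",
]
intents = ["flight_info", "flight_change", "balance", "transfer"]

# A vector of samples and a vector of the classes (intents) they belong to.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(samples, intents)

print(clf.predict(["when does the flight depart"]))  # expected: 'flight_info'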
Natural Language Processing & Machine Learning for Conversational Interfaces and Virtual Assistants
0
natural-language-processing-machine-learning-for-conversational-interfaces-16080c13df66
2017-11-06
2017-11-06 08:09:35
https://medium.com/s/story/natural-language-processing-machine-learning-for-conversational-interfaces-16080c13df66
false
568
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Daniël Heres
Machine Learning & AI research
4deb50a43329
danilheres
64
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-21
2017-11-21 17:56:21
2017-11-21
2017-11-21 18:00:37
1
false
en
2017-11-21
2017-11-21 18:00:37
1
160a34db0811
1.641509
1
0
0
Artificial intelligence (AI) is an area of computer science that emphasizes the design and development of intelligent machines…
5
Artificial Intelligence Basics Artificial intelligence (AI) is an area of computer science that emphasizes the design and development of intelligent machines, especially software that works and reacts like humans, driven by strong algorithms running behind the scenes. It is also the task of using computers to understand human intelligence. AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as the basis for developing strong, relevant algorithms. · Reasoning: the process that enables us to provide a basis for decision making and predictions. · Learning: the activity of acquiring information or a skill by studying, practicing, or experiencing something. · Problem Solving: the activity in which one observes and tries to arrive at a desired point from a present situation by taking some path to a solution. · Perception: the process of acquiring, interpreting, selecting, and organizing sensory information. Areas of AI 1. Natural Language Processing: Natural Language Processing (NLP) refers to the AI method of communicating with an intelligent system using a natural language such as English. 2. Neural networks: a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs. 3. Machine Learning: the science of getting a computer to act without being explicitly programmed. 4. Robotics: a domain of artificial intelligence concerned with creating intelligent, efficient robots, aimed at manipulating objects by perceiving, picking, moving, and modifying their physical properties. Applications of AI AI can have an impact on the entire digital world; it helps organizations become more efficient and gain decision-making capability that reduces the need for human intervention at every point. Here are a few AI applications: 1. AI in Healthcare: improving patient care and reducing costs with less effort. Hospitals can apply AI algorithms to make better and faster diagnoses than humans. 2. AI in Education: AI can assess students and adapt to their needs, helping them work at their own pace. 3. AI in Business: AI algorithms can be integrated into analytics platforms to understand data on how to better serve customers and fulfill their needs.
Artificial Intelligence Basics
12
artificial-intelligence-basics-160a34db0811
2018-03-26
2018-03-26 18:37:23
https://medium.com/s/story/artificial-intelligence-basics-160a34db0811
false
382
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Dodo Khan
Mr. Dodo Khan is working as a General Manager for Tech4Life Enterprises. He is a Software Engineer with a keen interest in the development of eHealth technologies
95706d290ec0
dkseelro
2
14
20,181,104
null
null
null
null
null
null
0
pip install numpy opencv-python dlib imutils
1
null
2018-06-29
2018-06-29 01:46:10
2018-06-29
2018-06-29 02:42:12
2
false
en
2018-06-29
2018-06-29 02:42:12
4
160abcf7d672
1.621069
4
0
0
Identifying faces in photos or videos is very cool, but this isn't enough information to create powerful applications; we need more…
4
Facial mapping (landmarks) with Dlib + python Identifying faces in photos or videos is very cool, but it isn't enough information to create powerful applications; we need more information about the person's face, like position, whether the mouth is opened or closed, whether the eyes are opened, closed, looking up, etc. In this article I will present to you (in a quick and objective way) Dlib, a library capable of giving you 68 points (landmarks) of the face. What is Dlib? It's a facial landmark detector with pre-trained models; dlib is used to estimate the location of 68 (x, y) coordinates that map the facial points on a person's face, like the image below. These points are identified from a pre-trained model trained on the iBUG 300-W dataset. Show me the code! In this "Hello World" we will use: numpy opencv imutils In this tutorial I will code a simple example of what is possible with dlib: we will identify and plot the face's points on an image; in future articles I will cover the use of this beautiful library in a little more detail. Installing the dependencies. Starting with the image capture that we are going to work on, we will use OpenCV to capture the webcam's image in an "infinite" loop and thus give the impression of watching a video. Run your script and make sure your webcam's image is being captured (it will open a window for you with the webcam's image). After getting our picture, let's make the magic happen. REMINDER: We are using an already-trained model, so we will need to download the file shape_predictor_68_face_landmarks.dat, which you can find here. After that, just run the script and you have your "hello world" in Dlib working; in future articles I'll explain in more detail how to extract more information about the faces found in the image. All the code is on my github. TKS.
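A minimal sketch of the loop this tutorial describes, assuming the pretrained shape_predictor_68_face_landmarks.dat file sits in the working directory (the download link is given above): capture the webcam, detect faces, and plot the 68 landmark points.

import cv2
import dlib
from imutils import face_utils

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)  # capture the webcam image
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):         # detect faces in the frame
        shape = predictor(gray, rect)      # estimate the 68 landmark points
        for (x, y) in face_utils.shape_to_np(shape):
            cv2.circle(frame, (x, y), 2, (0, 255, 0), -1)  # plot each point
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit the loop
        break

cap.release()
cv2.destroyAllWindows()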
Facial mapping (landmarks) with Dlib + python
6
facial-mapping-landmarks-with-dlib-python-160abcf7d672
2018-06-29
2018-06-29 02:42:12
https://medium.com/s/story/facial-mapping-landmarks-with-dlib-python-160abcf7d672
false
328
null
null
null
null
null
null
null
null
null
Python
python
Python
20,142
Italo José
Computer vision Engineer at Nextcode https://www.linkedin.com/in/italojs/
846b19bfbf1d
italojs
177
50
20,181,104
null
null
null
null
null
null
0
null
0
347e46d8a6fb
2018-08-29
2018-08-29 10:48:48
2018-09-21
2018-09-21 17:01:04
2
false
en
2018-09-21
2018-09-21 17:01:04
0
160b0b19ac47
3.847484
0
0
0
Traditional marketing tactics like mass mailings need to be replaced by personalized digital marketing tactics and marketing automation.
3
Making Marketing Intelligent — One Step Closer to The Intelligent Enterprise While the digital transformation is happening at full speed, we are already facing the next era: the era of intelligence. We at SAP are helping our customers become intelligent enterprises. At the same time, we are using our own solutions to become an intelligent enterprise ourselves. In an intelligent enterprise, every business process is part of the company's entire value chain. While every department has a specific role, all activities need to be aligned to create the visibility, focus, and agility that define intelligent enterprises. In this blog, I will look at the transformation of our marketing processes as an example of the evolution of one department in SAP's journey to become an intelligent enterprise. It is proof that real intelligence can only be achieved if all functions come together to achieve one common goal — in our case, creating customers for life. Marketing is unique in that it is a line of business highly impacted by digitalization. Today, marketing is almost completely digital. Buyers research online and often make buying decisions even before talking to any vendor — if they talk to one at all. This informed buyer needs to be addressed differently than in the past. Traditional marketing tactics like mass mailings need to be replaced by personalized digital marketing tactics and marketing automation. Buyers want to receive relevant information when they need it, through the channel they prefer — at their own speed, via various touchpoints. All interactions leave behind important digital footprints, which can be used to identify topics of interest and measures tailored to each prospect or customer. And this is where marketing needs to be aligned with all other corporate functions. To offer customers the experience they wish for, a 360-degree view of our customers and their data is crucial — in alignment with legal rules such as GDPR. At SAP, customer data is aggregated across the entire enterprise — this allows real-time analytics to understand how customers are engaging with us and to provide them the personalized content they need via the channel they prefer. Technology allows us to rethink processes in marketing automation: nurturing functionality as well as a scoring model increase the quality of leads and help route the right leads to our sales teams at the right time, so they can have more meaningful and engaging conversations with our customers and prospects. Our contact scoring is done by machine learning, so we can qualify customer contacts faster and better than before — this is where our sales teams really benefit, as our pipeline's quality and volume improve. Additionally, our agency portal in the cloud connects marketing agencies as well as data providers to further optimize and harmonize the processes for our marketers. We are transforming marketing at SAP with the help of SAP C/4HANA and our SAP Marketing Cloud. The SAP C/4HANA suite is designed to support digital, future-oriented marketing automation. With marketing being highly data-driven, it relies on a tight connection between the front office and all fulfillment functions. SAP C/4HANA together with SAP S/4HANA can provide just that. This combination guarantees the most important prerequisites for increasing marketing-led demand generation: trusted data, a single view of the customer, a customer-for-life mindset, digital-first engagement models, and the connection of front and back office. 
To achieve this, we must overcome siloed thinking and work on one harmonized process across all related functions such as sales, development, digital channels and IT. Together, we need to ensure that our customers experience one seamless end-to-end journey, from the time they search for a product through the sales cycle to solution adoption, usage, service and support. Our overall goal is nothing less than creating customers for life. Our team is thinking about every stage of the customer experience to make every step as frictionless as possible for our customers. For us at SAP, our investments in our marketing transformation are already paying off with significant business benefits towards our targets, such as a 2.5x increase in contribution to revenue, a 70% reduction in lead response time and 37% year-over-year growth in lead conversion to cloud bookings. With SAP Analytics always running in parallel to the end-to-end process, we can jump directly into the SAP Digital Boardroom and analyze marketing and customer data in real time from end to end, meaning from the very first contact with the customer to the actual purchase and revenue. This way, we can continuously check the business benefits of our transformation. By using our own technology, we are making our marketing intelligent. That brings us one step closer to achieving the visibility, focus and agility of an intelligent enterprise. Maybe you know one of the many famous quotes from Albert Einstein: “The measure of intelligence is the ability to change.” In our case, the ability and the willingness to change are key to becoming an intelligent enterprise. We must not forget that all this can only happen if all colleagues are convinced of the need to change and constantly adapt. Only then will the journey to the intelligent enterprise be successful. On this journey, there are many more puzzle pieces that need to be transformed and aligned into one harmonized picture — not only in marketing. I look forward to sharing more stories during our journey to becoming an intelligent enterprise.
Making Marketing Intelligent — One Step Closer to The Intelligent Enterprise
0
intelligent-marketing-160b0b19ac47
2018-09-21
2018-09-21 17:01:04
https://medium.com/s/story/intelligent-marketing-160b0b19ac47
false
918
SAP's best brand journalists cover hot-button tech and IT trends like Digital Transformation, Future of Work, Purpose, Customer Experience and more. VISIT OUR ARCHIVES HERE: https://medium.com/sap-innovation-spotlight/archive.
null
SAP
null
SAP Innovation Spotlight
timclarkbar@gmail.com
sap-innovation-spotlight
DIGITAL TRANSFORMATION,CUSTOMER EXPERIENCE,PURPOSE,TECHNOLOGY
SAP
Marketing
marketing
Marketing
170,910
Christian Klein
Chief Operating Officer and Member of the Executive Board of SAP
1a26f29479b9
ChrstnKlein
10
20
20,181,104
null
null
null
null
null
null
0
null
0
61d8f53e661f
2018-05-27
2018-05-27 12:16:12
2018-06-06
2018-06-06 17:20:01
3
false
en
2018-06-06
2018-06-06 17:20:29
15
160d3b6bb17d
5.229245
1
1
0
The irrational fear of autonomous driving
5
I, (robot) Driver The irrational fear of autonomous driving Elaine Herzberg: killed by an Uber in autonomous mode, 2018 Rafaela Vasquez: the “driver” who let the robot drive Joshua Brown: killed by his Tesla’s self-driving mode, 2016 There are two deaths on this short list. Two people are no longer alive because of the testing and development of auto-driving systems. One person remains. She may be haunted by the [robot system’s] inability to stop the accident. She may never trust an autonomous system again — I won’t pretend to know. I’ve left out a few other incidents, even deaths, where we can find some connection to autonomous driving. The use of the technology is still limited. The impact is small, yet even a single death-by-machine-decision triggers our worst fears. The fear of self-driven cars is a new kind of fear. New human emotions take time to understand. Most emotions make us incapable of hearing the rational — that’s why we call them emotions. If I tried to recount all the human-caused car deaths, this article would be obsolete the moment I published it. It would literally be impossible to keep the count up to date, even for a few minutes — even if I just mentioned the name, time, and place of each “accident”. Elaine and Joshua were (are) real people. So too are the 40,000 killed each year in the US alone. Uber shut down their self-driving car program immediately after Elaine Herzberg was killed in Arizona. We don’t react this way to errors in judgment or even drunk driving by human drivers (trust me, I’m from New Mexico). Compare this to the industry of “giving rides.” Would an entire cab company shut down if one of its drivers got drunk during a company break — and then killed someone when they got back on the road? I’ve never heard of this kind of reaction to a human. It may have happened. More likely is our willingness to forgive human error and accept its tragedies. “the car’s self-driving system was overly inclined to dismiss objects in its path” The autonomous vehicle wasn’t drunk — it made a bad decision. Someone programmed that decision. Someone else decided enough testing had been done to try a real-world run of the code. As machine learning, digital simulations (and supercomputing power) continue to advance, should we wait until we have the tech to run millions of tests before we let the robots drive? What is the cost of progress (towards autonomous driving) and what is the alternative cost (deaths caused by human drivers)? In this early stage, what kind of statistics, how many deaths, are acceptable? I’m not a Tesla fanboy. Every action and biography of Elon Musk tells me he’s a former, and current, egomaniacal asshole and opportunist who never built compassion as a skill. I also much prefer Lyft’s front-seat, friendly culture over Uber’s sterile experience — enough to write letters and lobby for Lyft (see the last section on being a Lyft ambassador) in the 2013–15 American Land Grab of ride-share services. That said, rational analysis tells me Uber and Tesla have soaked up a ton of blame as part of an over-reaction to single-digit tragedies from their autonomous driving systems and programs. Logic says a death is a death, no matter who was driving. The consensus on the safety of self-driving cars is all over the place (i.e. there is no consensus). Google has a near-perfect record in its program. It’s easy to attribute Uber and Tesla’s faults to their toxic cultures and free-wheeling pasts.
It’s harder to justify or accept any loss of life in the development of what seems like peaceful tech. What if these deaths could have been prevented with a little more planning, careful consideration, non-real-world testing? How many human drivers responsible for accidents or deaths ask themselves — and have been asked — what they could have done differently? We focus on these worst-case examples, the sensationalism of accidents that have no direct human at fault when the insurance rep, the police — and sometimes the ambulance and black body bags — show up. We know humans are poor drivers. We know we now have more distractions, more traffic, and less time to drive. We admit we need tech to solve other driving problems. It gets deployed readily to help wipe our windshields when it rains and improve our imperfect vision as we hurtle down imperfect roads at high speeds. We let it hold speed for us, control our cruise. Richer folks seemed OK with letting a Mercedes or Lincoln parallel-park itself, so more common Chevys and Fords got the upgrade. That final decision power, though — we have to have it. We pay for insurance to protect us when we make inevitable mistakes, or something else fails on our vehicles that’s not “autonomous.” All this responsibility and risk falls on us to keep our status and role as driver. What of the time, energy, and thought put into driving, and the fear of crashes? Will Smith takes over when things get heavy in this I, Robot driving scene. Earlier in the film, the female lead can’t believe he would dare enter “manual mode” at tunnel/highway speeds. What of the carryover effects? The cognitive tolls: road rage, the stress of traffic jams and close calls? The general feeling of unease as humans dealing with all kinds of things — emotions, allergies, existential crises — while piloting thousands of pounds of metal and plastic through congested streets and highways? We carry these feelings into our work, into our conversations, into the rest of our non-driving lives. We leave tracks of these muddy thoughts and fears as we re-enter our homes. How much human potential, and other measures of human life like time and cognition, have been lost because of required driving and dangerous commutes? I’ve struggled with preserving the value of human life when there’s so damn many of us. We’ve carved and paved our trails through nearly every expanse. We multiply and consume, ignoring the signs of our excess as the dominant, unchallenged species of a planet with fixed resources. We juke and jive across the globe, trying not to run into each other. We want freedom and liberty of movement — and fail to see why we should ever collide with another person, why we should care until the collision is violent and deadly and something we’re forced to confront. So then, what are the solutions? Never try non-human driving for risk of car deaths without a person to blame, or excuse as an “accident”? Increase driver education and testing? Restrict licensing? Limit our human development? Literally slow our progress down by forcing humans to drive instead of think and create? Sacrifice huge amounts (lifetimes) of collective time and cognition to save lives? Maybe my bias towards experimentation, towards the risk and reward of new, unproven technology is obvious. Yet I have to ask as a humanist: What would I think if Elaine Herzberg was my mom? How would I feel about real-world testing of autonomous cars if Rafaela was my sister or my aunt?
I also ask all these questions as a futurist. It is my obligation, my responsibility to ask. The future and the unknown is a scary and dangerous place, like the highways of any given city in rush hour. Scarier still is driving or entering the future blindly, with a dominant, overpowering fear — and no ethics to guide us.
I, (robot) Driver
18
i-robot-driver-160d3b6bb17d
2018-06-11
2018-06-11 23:47:58
https://medium.com/s/story/i-robot-driver-160d3b6bb17d
false
1,240
Futurism articles bent on cultivating an awareness of exponential technologies while exploring the 4th industrial revolution.
null
null
null
FutureSin
null
futuresin
TECHNOLOGY,FUTURE,CRYPTOCURRENCY,BLOCKCHAIN,SOCIETY
FuturesSin
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
Travis Kellerman
If I listen carefully, a collective future whispers — and it sounds a little crazy. @traviskellerman
1209a2e54db0
traviskellerman
623
254
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-12
2018-06-12 15:30:01
2018-06-12
2018-06-12 16:31:15
1
false
en
2018-06-12
2018-06-12 16:31:15
20
160e1452e9f4
3.596226
3
0
0
I already have an existing blog but I am also curious to see what Medium has to offer above that. So since it also aligns with my new…
1
My Data Science Learning Journey begins… I already have an existing blog, but I am also curious to see what Medium has to offer beyond that. Since it also aligns with my new endeavour to teach myself Data Science, I think this might be a good opportunity to kill two birds with one stone: learn Data Science, and try a new blogging platform to document my personal journey and share any learnings I have with others. So the first step turned out to be more of a stumble until recently. Overwhelmed by the terms of this new industry, and not having much mathematics or statistics background other than what I learnt (and forgot most of) from university some decades back, I found myself in the rare position of being a newbie once again in technology. Exciting! Thankfully, Brisbane has a great pioneer in Data Science, specifically Artificial Intelligence, in Dr. Natalie Rens, who runs Brisbane.AI, a meetup for AI enthusiasts and professionals in Brisbane, and who organized an AI Hackathon that I was able to participate in. For this hackathon we leveraged Kaggle, a platform which hosts competitions where datasets are offered up for competitors to develop predictive models on, often with financial rewards from the companies providing the datasets as they seek to solve very real-world problems they face. For the hackathon the beginners were focused on the House Prices: Advanced Regression Techniques competition. We had a great mentor for the beginners in Lex Toumbourou, who walked us through several well-timed, short but very useful introductions to ideas on how to attack the problem, minimising where possible the statistical and mathematical weeds we would need to navigate and using plain English so as not to intimidate a beginner. In essence, he taught us some of the basics of the data wrangling and modelling real data scientists use, giving us just enough of a sip of the entire ocean of Data Science to whet our appetites, achieve submissions for the competition, and keep us encouraged to learn more on our own even after the competition (as I have). Kris Bock and Georgina Siggins of Microsoft were our hosts, and much thanks to Georgina for making sure the snack plates and lunches were there on time, and for providing moral support and conversation during the break times. Thanks to Kris as well for persevering to share with us what the Azure platform offers for Data Science enthusiasts, even when the demo gods failed to be appeased by his efforts; he still made available the contents of the Deep Learning VMs he had hoped to have us run on our own. The peer community who participated in the hackathon was very vibrant and supportive of learning from each other. Though I was a newbie to this field, with several much more experienced people in the room, the enthusiasm to answer my many (many) questions that were probably very basic to many of them was humbling, and quite encouraging that this is definitely a spirited, positive community of folks. I have to thank again Natalie Rens and Lex Toumbourou for all they did to make that event happen. In conclusion, several things I learnt over this weekend: Kaggle is a great learning tool and not just for the competitions. There were several notebooks shared by Lex that walked us through attacking the problem, and these types of notebooks are commonly shared by others in the community for free. Python is really powerful and has a simplicity and elegance to it.
In the course of the hackathon I got to learn, as side effects, how to use pyenv for Python version management as I set up my own Jupyter Notebook. Kaggle itself offers a custom interface to a Jupyter Notebook for writing competition code, so you didn’t need to know how to set one up, but I wanted the flexibility to use my own. Doing the competition taught me a lot about how to look at data, and how to use basic libraries such as NumPy, Pandas and scikit-learn for data wrangling, as these are incredibly powerful libraries and are heavily leveraged in predictive modelling. While we started with a focus on a simple Linear Regression model for our learning algorithm, through peer learning I learnt the basics of variations on this (and their associated libraries) such as Lasso and Ridge regression models, and was introduced to ElasticNet, which I settled on for my hackathon entry — which I think didn’t do too badly for a first-timer. One of my first computer science lecturers at UWI always advocated mastering first principles as a means to being able to grasp any specialised realm in Computer Science. In writing this, I chuckle to myself as I now remember he was also the AI lecturer. His advice has always stuck with me. So in looking retrospectively at this weekend’s competition, it started with the first principle of understanding what data we were given, and how to measure it effectively so it can be applied in the chosen predictive modelling algorithm (in this case Linear Regression). So the knowledge I am missing around statistics and measurement theory is where I am going to start next, while also exploring the “cool” stuff like Python, Jupyter Notebook, Kaggle and other libraries, platforms and tools relevant to Data Science learning.
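For anyone curious what such an entry can look like, here is a minimal, hypothetical scikit-learn sketch of an ElasticNet pipeline for the house prices data; the preprocessing choices and hyperparameters below are illustrative assumptions, not my actual submission:

```python
# Sketch of an ElasticNet entry for the Kaggle house prices data:
# log-transform the target, impute and scale features, then cross-validate.
# Assumes train.csv from the competition; feature handling is illustrative.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("train.csv")
y = np.log1p(df.pop("SalePrice"))          # log target tames the skewed prices
num_cols = df.select_dtypes(include="number").columns
cat_cols = df.select_dtypes(exclude="number").columns

pre = ColumnTransformer([
    ("num", Pipeline([("imp", SimpleImputer(strategy="median")),
                      ("sc", StandardScaler())]), num_cols),
    ("cat", Pipeline([("imp", SimpleImputer(strategy="most_frequent")),
                      ("oh", OneHotEncoder(handle_unknown="ignore"))]), cat_cols),
])

model = Pipeline([("pre", pre),
                  ("enet", ElasticNet(alpha=0.001, l1_ratio=0.5, max_iter=5000))])

scores = cross_val_score(model, df, y, cv=5,
                         scoring="neg_root_mean_squared_error")
print("CV RMSE (log scale):", -scores.mean())
```

Cross-validating on the log of the sale price roughly mirrors the competition’s RMSE-on-log-price metric, so it gives a quick read on whether the regularisation settings help before submitting.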
My Data Science Learning Journey begins…
7
my-data-science-learning-journey-begins-160e1452e9f4
2018-06-14
2018-06-14 01:55:17
https://medium.com/s/story/my-data-science-learning-journey-begins-160e1452e9f4
false
900
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Nissan Dookeran
Technology Architect and Consultant, social networking and marketing enthusiast
22984c764d94
nissandookeran
96
260
20,181,104
null
null
null
null
null
null
0
null
0
d82dbd11f86a
2018-06-10
2018-06-10 16:08:58
2018-06-11
2018-06-11 02:31:32
5
false
en
2018-06-11
2018-06-11 02:31:32
0
160e584f8cb4
3.686164
3
0
0
In previous post we have commented about the nature of the information, sometimes we have scarce information and in other occasions we have…
5
Panel Data: Fixed Effects Model In a previous post we commented on the nature of the information: sometimes we have scarce information, and on other occasions we have very broad information, or information that is difficult to interpret. As mentioned above, the quality of the data determines the quality of the estimates. But there is also a classification of the data. Data can be classified as: a) cross section, b) time series, c) panel, d) pool of time series and cross section. In this post we will talk about panel data. Panel Data Panel data combine time series and cross section. The idea is that we have an observation unit i (individuals, countries, companies, stocks, etc.) that is followed through time t (hours, months, quarters, years, etc.). Panel data are usually longitudinal. However, sometimes non-longitudinal panels can be formed. The advantages of using panel data are: Greater heterogeneity, since units vary both among themselves and over time; because we follow units over time, we have more observations than with a cross section or a time series alone → greater heterogeneity, greater efficiency, less collinearity between variables, more degrees of freedom and consistency. They allow specific studies that cannot be carried out with other types of data. Examples: labor mobility, job turnover. They make it possible to minimize biases in the estimates through certain techniques (fixed effects, first differences, etc.). Pooled Data There are datasets similar to panel data that are formed by pooling random samples of cross-sectional data taken at different points in time. Notice that the sampled units are not the same at different points in time. Panel Data Models There are two categories in the nature of panel data information: a) when the number of observations of i is large and t is small (the classical models used in microeconometrics); b) if the number of observations in t is relatively large, then we must worry about serial correlation within the panel. Fixed Effects Model Assume the following generic model: y_it = x_it'β + α_i + ε_it (1), where i = 1, 2, 3, …, N and t = 1, 2, 3, …, T. Note that the number of observations is N×T if we have a balanced panel. For the moment we will assume that the panel is balanced; later we will see how to relax this assumption. The previous model is expressed in vector notation. The difference with respect to the classical regression model is the term α_i, which is not random and is specific to each i. Can we ignore the α_i element and apply OLS? y_it = x_it'β + u_it, with u_it = α_i + ε_it (2). We can think of α_i as an unobserved effect that is therefore part of the error. Examples: a) in labor economics it could be the ability of an individual; b) in economic development, the degree of corruption of a country, or some unobserved regional effect. Note that OLS generates biased results when Cov(x, α) is different from 0: if Cov(x, α) > 0 → upward bias; if Cov(x, α) < 0 → downward bias. The fixed effects estimator basically consists of 3 steps: 1) estimate the mean of all the variables in the model for each i; 2) obtain the deviation with respect to the mean for each variable; 3) apply OLS to the transformed model and correct its standard errors. Note that the transformation removes α_i, so it is possible to apply OLS. It is also known as the “within” model, and the same applies to its estimators: the estimators are calculated from the variation within each i. FE estimators are unbiased and consistent but entail a loss of efficiency.
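To make the three steps concrete, here is a minimal numerical sketch of the within estimator (the column names id, y and x are illustrative, and the panel is assumed to be in long format); it also simulates a panel with Cov(x, α) > 0 so the upward bias of pooled OLS is visible:

```python
# Sketch of the within (fixed effects) estimator for y_it = b*x_it + a_i + e_it.
import numpy as np
import pandas as pd

def within_estimator(df, y="y", x="x", unit="id"):
    # Step 1: per-unit means of every variable in the model
    means = df.groupby(unit)[[y, x]].transform("mean")
    # Step 2: deviations from the unit means (this removes the alpha_i term)
    yd = df[y] - means[y]
    xd = df[x] - means[x]
    # Step 3: OLS on the demeaned data (no intercept needed after demeaning)
    return (xd * yd).sum() / (xd ** 2).sum()

# Simulated panel where Cov(x, alpha) > 0, so pooled OLS is biased upward
rng = np.random.default_rng(0)
N, T = 200, 5
alpha = rng.normal(size=N)
x = alpha[:, None] + rng.normal(size=(N, T))      # x correlated with alpha
y = 1.0 * x + alpha[:, None] + rng.normal(size=(N, T))  # true slope = 1
df = pd.DataFrame({"id": np.repeat(np.arange(N), T),
                   "y": y.ravel(), "x": x.ravel()})

pooled = (df.x * df.y).sum() / (df.x ** 2).sum()  # naive OLS through the origin
print("pooled OLS:", round(pooled, 3), " within:", round(within_estimator(df), 3))
```

Running it, the pooled slope should come out well above the true coefficient of 1, while the within estimate recovers it.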
Another way of thinking about α_i is as an intercept for each i; in fact, OLS with a dummy for each i generates the same results as the FE model. Using dummy variables, a larger R² is obtained than using FE. Usually the coefficients of the dummy variables are not reported in the estimates. Note that even when Cov(x, α) = 0, we still need to worry about the variance of the estimators: Var(u_i) = σ²_α ιι' + σ²_ε I_T (3), which is different from σ² I_T (4), since the covariance matrix of the composite error is not proportional to the identity matrix. The X variables that do not change over time are eliminated by the FE transformation, which means their coefficients cannot be identified. When the generic model is correct, it does not matter whether Cov(x, α) is equal to zero or not; FE generates unbiased results. FE entails a loss of efficiency since it only takes into account the variation within each group (it does not consider the variation between groups).
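And a quick sketch of the dummy-variable (LSDV) equivalence, reusing the simulated df from the sketch above:

```python
# LSDV check: regress y on x plus one dummy per unit; the slope on x
# should match the within estimate from the previous sketch.
import numpy as np
import pandas as pd

D = pd.get_dummies(df["id"], dtype=float).to_numpy()   # N*T x N dummy matrix
X = np.column_stack([df["x"].to_numpy(), D])           # slope first, then dummies
coef, *_ = np.linalg.lstsq(X, df["y"].to_numpy(), rcond=None)
print("LSDV slope on x:", round(coef[0], 3))           # ~ the within estimate
```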
Panel Data: Fixed Effects Model
3
panel-data-fixed-effects-model-160e584f8cb4
2018-06-12
2018-06-12 12:07:01
https://medium.com/s/story/panel-data-fixed-effects-model-160e584f8cb4
false
756
All about the Data Analysis with entrepreneur
null
null
null
High Data Stories
highdataconsulting@gmail.com
high-data-stories
DATA SCIENCE,DATA VISUALIZATION,DATA ANALYSIS,TECNOLOGY,SMALL MEDIUM ENTERPRISE
null
Data Science
data-science
Data Science
33,617
Luis Alberto Palacios
I’m a passionate about life that enjoy meeting people, living new adventures and sharing my experiences
7a4f2ad9c738
LuisAlbertoPala
97
51
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 09:07:51
2018-01-29
2018-01-29 15:35:15
0
false
en
2018-01-29
2018-01-29 15:44:21
2
160fa41a4a43
2.316981
0
0
0
I write this article to conclude my deep learning development experience of my internship at UmboCV. During my internship, I built a…
2
Practical Deep Learning Strategy I write this article to conclude the deep learning development experience from my internship at UmboCV. During my internship, I built a violence detection system for production. I hope that I can introduce my strategies on this cutting-edge deep learning research project. For each project which has its own data, I follow three stages: 1. Data Processing 2. Model Development 3. Transfer Learning Data Processing As the first stage of a deep learning problem, it may be the most important one. There is a simple exploratory data analysis pipeline from the blog: Ingest Data → Clean Data → Transform Data → Present Data. For Ingest Data, we have to collect as much data as possible. Then, we should think about what kind of input data the models should receive. With the model’s requirements in mind, we can design how to label our data. This step is really important because if you choose the wrong way to label data, it will be really hard to train a good model. If you know what kind of format is suitable for your models, you can start to build a good-quality dataset. A good-quality dataset will lead to a good model, so make sure to clean your dataset really carefully. Then, transform your data into the specific format for your models. For example, resize your image data to the input size of your models. This step will speed up your data loader. Finally, try to present your data and check that it is reasonable for a model to learn its meaning. Sometimes, researchers will assume a deep learning model can learn anything and forget to check the final data. This is really dangerous, so researchers should build a baseline method. In general, it is hard for a machine to beat a human, so make sure people can do the task. Model Development If you have already built a good dataset, you can start to read some papers and select a good model to implement. For your first model, you should choose a model with a simple architecture. Implement the model as fast as you can. To make sure you can successfully train the model, there are some tips I usually follow. First, select a small amount of data and try to overfit it. The capacity of your model must be enough to remember all the features of your small dataset. If you cannot fit the dataset to 100%, there must be some bugs in your code. Second, data augmentation is really important. From my experience, data augmentation is as essential as model architecture. If your architecture is correct and there is no bug in your code, try to change your data augmentation strategy. After you finish your baseline model, you can survey more papers and compare the pros and cons of these methods. Choose the approach which is most suitable for your situation. All of the models should be trained on a public dataset first. There are two reasons: first, on a public dataset you will have a standard metric; second, most of the time the public dataset includes much more data than your own data. Pretrained weights of deep learning models are the most critical elements. Transfer Learning For strategies of transfer learning, you should follow the guide in CS231n. There is some advice I want to share: 1. Don’t expect to make your model fit your dataset well in one attempt. You should be patient and take care of the parameters in all layers. 2. You should run inference on your test data for each fine-tuned model. With several iterations of this process, you will understand more about the distribution of your dataset.
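As a concrete version of the overfit-a-small-subset tip above, here is a minimal PyTorch sketch of the sanity check; the model, data shapes and hyperparameters are placeholders, not the UmboCV system:

```python
# Sanity check: a correct model/loss/optimizer should drive training
# accuracy on a tiny fixed subset to ~100%. If it can't, suspect a bug.
import torch
import torch.nn as nn

torch.manual_seed(0)
xs = torch.randn(32, 64)                 # tiny fixed "dataset" (placeholder)
ys = torch.randint(0, 2, (32,))          # binary labels

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):                  # no augmentation, no regularization
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()

acc = (model(xs).argmax(dim=1) == ys).float().mean().item()
print(f"train accuracy on the small subset: {acc:.2%}")   # expect ~100%
```

Only after this check passes is it worth blaming the data augmentation strategy or the architecture for poor results on the full dataset.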
Practical Deep Learning Strategy
0
practical-deep-learning-strategy-160fa41a4a43
2018-05-06
2018-05-06 00:05:35
https://medium.com/s/story/practical-deep-learning-strategy-160fa41a4a43
false
614
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Yi Yao Huang
My research interest is deep learning on Natural Language Processing and AI. To apply AI to education, I force myself to think, write and code everyday.
c7e3733fb89
darrenyaoyao.huang
81
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-06
2018-03-06 07:49:38
2018-03-06
2018-03-06 07:51:36
1
false
en
2018-03-06
2018-03-06 07:51:36
2
16102afddcfc
6.50566
1
0
0
Digital captures the transition we are embarked on from neurons to transistors as the dominant substrate of information processing.
5
Digital: From Neurons to Transistors Digital captures the transition we are embarked on from neurons to transistors as the dominant substrate of information processing. Neurons are exquisite creations of natural evolution. They have achieved through self-organization and evolution many of the properties we have managed to engineer in electrical systems, such as logical operations, signal regeneration for information transmission or 1/0 information processing. When combined together in the circuits that constitute the human brain, they allow amazing complexity and functionality. The roughly 100 billion neurons in a human brain form well over 100 trillion connections, leading to consciousness, creativity, moral judgment and much more. They have also led to transistors, an information processing system that has managed to replicate many of the properties of neurons while being approximately 10 million times faster. A neuron typically fires in the millisecond range (a frequency of 1000 Hz), while transistors operate comfortably in the nano- to picoseconds (the 1–100 GHz range). A 10-million-times speed advantage makes transistors dominate neurons as an information processing medium, even if they are still capable of less complexity and require more energy than a brain. That is why we have seen the progressive substitution of neurons by transistors in many information processing tasks. This is fundamentally different from technologies like writing that extend neurons. It is only comparable to the use of the steam engine, electricity and internal combustion to substitute muscles. This absurdly great speed advantage allows the Digital Paradox: when neurons are substituted by transistors in a process you get lower cost, higher quality and higher speed without trade-offs. Thus there is no going back; once we have engineered sufficient complexity in transistors to tackle a process, there is no reason to use neurons anymore. Of course, neurons and transistors are often combined. Neurons still dominate for some tasks and they benefit greatly by being supported by transistors. What we now call generically “Digital” is one more stage in this gradual substitution of neurons by transistors. In that sense, those who claim that Digital is not new are right. At the same time, the processes that are now being substituted have a wider impact, so the use of a new term is understandable. Finally, we can expect the substitution to continue after the term digital has faded from use. In that sense, there is a lot that will happen Beyond Digital. Early stages of substitution Transistors and their earlier cousins, vacuum tubes, started by substituting neurons in areas in which their advantage was greatest: complex brute-force calculations and extensive data collection and archiving. This was epitomized by calculating missile trajectories and code-breaking during World War II and tabulating census data since the early 20th century. Over time, this extended to databases to store large amounts of information for almost any purpose and to programming the repetitive management of information. This enabled important advances but still had limited impact on most people’s lives. Only very specialized functions like detailed memory, long relegated to writing, and rote calculation, the domain of only a minuscule fraction of the workforce, were affected. The next step was to use these technologies to manage economic flows, inventory, and accounting within organizations.
So-called Enterprise Resource Planning or ERP systems made it possible to substitute complex neuron-plus-writing processing systems which were at the limit of their capacity. This substituted some human jobs, but mainly made possible a level of complexity and performance that was not attainable before. Substitution only started to penetrate the popular consciousness with Personal Computers (PCs). PCs first allowed individuals to start leveraging the power of transistors for tasks such as creating documents, doing their accounting or entertainment. Finally, the internet made it possible to move most information transmission from neurons to transistors. We went from person-to-person telephone calls and printed encyclopedias to email, web pages, and Wikipedia. In this first stage, transistors were substituting mostly written records and some specialized jobs such as persons performing calculations, record keeping or information transmission. They were also enabling new activities like complex ERPs, computer games or electronic chats that were not possible before. In that sense, the transition was mostly additive for humans. Digital: Mainstream substitution The use of “Digital” coincides with the moment when many mainstream neuron-based processes have started to be affected by transistors. This greater disruption of the supremacy of neurons is being felt beyond specialized roles and starts to become widespread. It also starts to be more substitutive, with transistor-based information processing being able to completely replace neurons in areas in which we thought neurons were reasonably well adapted to perform. First went media and advertising. We used to have an industry that created, edited, curated and distributed news and delivered advertising on top of that. Most of these functions in the value chain have been taken over by transistors either in part or in full. Then eCommerce and eServices moved to transistors the age-old process of selling and distributing products to humans. The buying has still stayed in human hands for now, but you are close to being able to buy a book from Amazon without any human touching it from the printing to the delivery. On the eServices side, no one goes to the bank teller anymore if it can be done instantly on the Internet. Then the Cloud took the management of computers to transistors themselves. Instead of depending on neurons for deployment, scaling, and management of server capacity, services like Amazon Web Services or Microsoft Azure give transistors the capacity to manage themselves for the most part. Our social lives and gossip might have seemed totally suited to neurons. However, services like Facebook, Whatsapp or LinkedIn have allowed transistors to manage a large part of them at much higher speeds. Smartphones made transistors much more mobile and accessible, making them readily available in any context and at any moment. Smartphones have started substituting tasks our brains used to be able to perform with neurons, like remembering phone numbers, navigating through a city or knowing where we have to go next. Finally, platforms and marketplaces like Airbnb and Uber turned to transistors for tasks that were totally in the hands of neurons, like getting hold of a cab or renting an apartment. This encroachment of transistors on the daily tasks of neurons has woken all of us up to change. Now it is not just obscure professions or processes but a big chunk of our daily life that is being handed over to transistors. It creates mixed feelings for us.
On the one hand, we love the Digital Paradox and its improvement of speed, quality, and cost. It would be difficult to convince us to forsake Amazon or to return to the bank branch. On the other side, transistors substituting neurons have left many humans without jobs and are causing social disruption. Transistors are also speeding up the world towards their native speed, which is inaccessible to humans. There are almost no human stock traders left because they cannot compete with the 10-million-times faster speeds of transistors. Beyond Digital: Transistors take over The final stages of the transition can be mapped to how core functions of the human brain might become substitutable by transistors. The process is already well underway with some question marks around the reach of transistors. In any case, we can expect it to continue accelerating, taking us to the limits of what can be endured by our sluggish neuron-based brains. Substituting the sensory cortex. Machine vision, Enhanced Reality, Text-to-Speech, Natural Language Processing and Chatbots are just some of the technologies that are putting in question the neurons’ dominance in sight, sound, and language. Approximately 15–25% of the brain is devoted to processing sensory signals and language at various levels. Transistors are getting very good at it, and have recently become able to recognize many items in images and to process and create language effectively. Substituting the motor cortex. The same goes for unscripted and adaptive movement. Robots are increasingly powerful, and they are able to cook pizzas, walk through a forest and help you in a retail store. Autonomous driving promises that transistors will be able to navigate a vehicle, one of the most demanding motor-sensory tasks we humans undertake. Substituting non-routine cognition and information processing. We have seen basic calculation and data processing taken over by transistors; now we are starting to see “frontal lobe” tasks go over to transistors. Chess, Go and Jeopardy are games in which AIs have already bested human champions. Other more professional fields like medicine, education or law are already seeing transistors start to encroach on neurons through Artificial Intelligence, in which transistors mimic “neural networks”. Encoding morality, justice, and cooperation. Another set of capabilities which represent some of the highest complexity of the human brain and the frontal lobe are moral judgments and our capacity for cooperation. Blockchain promises to be able to encode morality, justice, and cooperation digitally and make them work automatically through “smart contracts”. Connecting brains with transistors. Finally, we are seeing neurons get increasingly connected to transistors. There are now examples of humans interfacing with transistors directly through the brain. This could bring a time of integration in which neurons take care of non-time-sensitive tasks while in continuous interaction with transistors that move at much higher speeds. Will all this substitution leave anything to neurons? There are capabilities like empathy, creating goals, creativity, compassion or teaching humans that might be beyond the reach of transistors. On the other hand, it might be reasonable to believe that any task that can be done with a human brain can be done much quicker with an equivalently complex processor. There are no right answers, but it seems that roles like entrepreneur, teacher, caregiver or artist might still have more time dominated by neurons than many other callings.
Also, there might be some roles that we always want other humans to take with us, even if transistors could take care of them much more quickly and efficiently.
Digital: From Neurons to Transistors
50
digital-from-neurons-to-transistors-16102afddcfc
2018-05-16
2018-05-16 21:02:50
https://medium.com/s/story/digital-from-neurons-to-transistors-16102afddcfc
false
1,671
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jaime Rodriguez-Ramos
Impact of exponential technologies on society and business.
3af872479b37
jaime.rodriguezramos
43
21
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-27
2017-09-27 13:54:11
2017-10-02
2017-10-02 08:00:39
1
false
en
2017-10-02
2017-10-02 08:00:39
2
16107d0b0fd9
2.162264
5
0
0
AI is moving at a substantial rate from mobile phones to medical advancements, so why are people afraid of AI? Killer robots and the…
4
AI & Singularity? AI is moving at a substantial rate from mobile phones to medical advancements, so why are people afraid of AI? Killer robots and the Terminator spring to mind, but is AI really that scary or is it just a case of misinformation? AI itself is just a mechanism that is used for quicker analysis of information, thus making it far more efficient and effective to complete arduous tasks. Eliminating time reduces costs and allows decisions to be made far more efficiently. AI ranges from entertainment to applications in healthcare, the purpose of each application is to speed up traditionally laborious processes and make information far more accessible and easy to view for the best possible user experience. Netflix for example analyses data using AI, tracking what types of shows and movies you watch and even which bits you skip. Have you ever noticed how if you skip the titles for a TV show on Netflix, it will automatically skip the intro every time you start a new episode of the same series? Singularity is the phrase that’s being thrown around and rightly so, this is the point at which AI exceeds a man’s intellectual capacity, but how close are we to this point? Some have said that we are so far away that it’s not even worth thinking about, but others have stated this could become a reality within the next decade or two! “But on the question of whether the robots will eventually take over, he (Rodney A. Brooks) says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” ― Michio Kaku AI as it stands isn’t an issue, the issue comes from developers and organisations that want to progress AI into dangerous territories, however, you have to remember there is a lot of opposition for those who want to cross over the dangerous boundary, especially from the likes of Elon Musk. Elon Musk was an investor in DeepMind before it was bought by Google in 2014 and he stated the reason for his investment was to keep an eye on the improvements and developments within AI: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.” So what of singularity? We don’t know and there’s speculation and questions everywhere, but we believe that decentralisation and democratisation of the technology is the answer. Time will tell what happens, but for the moment we should just enjoy the benefits that AI brings.
AI & Singularity?
54
ai-singularity-16107d0b0fd9
2018-04-25
2018-04-25 22:43:26
https://medium.com/s/story/ai-singularity-16107d0b0fd9
false
520
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
EnergiToken
EnergiToken rewards energy saving behaviour. Our blockchain solution will create a platform to reward energy efficient behaviour through EnergiToken.
2cf505f296c0
EnergiMine
158
50
20,181,104
null
null
null
null
null
null
0
null
0
ac68236a442e
2018-01-08
2018-01-08 16:44:53
2017-12-31
2017-12-31 18:22:26
2
false
en
2018-01-08
2018-01-08 16:46:14
8
161109eec3b3
14.005975
0
0
0
I’m building the machine that could one day replace me.
4
Planned Obsolescence It began as all well-meaning ideas do — as an effort to make life a little bit easier. This year I began building my own stock-picking artificial intelligence (AI) program. I created it to help me invest my savings and gave it a name: AlphaBean. But as AlphaBean became smarter and smarter, I eventually began to wonder: Could it replace me? After all, I’m a financial journalist. My job involves studying companies and writing about them for the wider public. If AlphaBean could learn to invest, might it, or something like it, steal my job? Past is prologue Of course, I’m not the first person to create an investing algorithm. Algorithmic thinking about investing has existed for a long time — much longer, in fact, than people normally think. Benjamin Graham, the person who practically invented stock analysis and served as a mentor to Warren Buffett, famously suggested more than half a century ago to his readers that they “limit themselves to issues selling not far above their tangible asset value.” In ruling out pricier stocks, Graham was articulating a rule or heuristic for winnowing the number of possible answers a human being must consider — much like how programmers often do with AI. Investing has become much more competitive and financial data more widely available since Graham’s day, so you won’t find many companies that meet his criterion anymore. Now the work of scanning through the newspaper has been replaced by websites like Yahoo! Finance, MSN, and CNBC, and partially automated with computerized screening tools. Investors’ relentless pursuit of money meant that technical progress didn’t stop with the simple algorithms used by Graham and his followers. Aided by computing advancements, people began using trading systems that bought and sold stocks based on other things, too, including technical indicators. Today, short-term-oriented algorithms direct the majority of stock trades. They aren’t rooted in an understanding of businesses or investing for the long term, but in noticing infinitesimal price discrepancies; detecting and sneaking ahead of big orders; riding waves of price momentum; processing economic, financial, and related news reports before anyone else can; or just having superior fiber-optic (or, better yet, microwave) communication access to the market. But automated trading is beginning to come to individual investors in a much different form. Robo-advisors — financial advisory firms whose algorithms automatically allocate client funds — have just in the past decade grown from nothing to having more than $200 billion in assets under management. With lower variable costs, they’re able to serve smaller accounts and charge lower fees than traditional advisors. This makes robo-advisors a great option for individual investors. From a purely technical perspective, however, the algorithms robo-advisors use aren’t much fancier than those of their flesh-and-blood counterparts. We’re just now seeing machine-learning-based investment options becoming widely accessible. The writing’s on the wall. Over the past two years, several AI-managed ETFs have come online. Their typical strategy is to scrape information from analyst reports, market news, and social media to execute ephemeral trades. But I’m a long-term, buy-and-hold stock picker. Of all the available tools — stock screens that we have to tell what to screen for, automated asset allocators, and short-term-oriented AI — none do that. So I set out to make my own.
Time for another montage Over the past year, I’ve spent more than 1,000 hours creating AlphaBean. It’s been an extremely complex process that’s involved late nights, spreadsheets, thousands of tests, 3,599 precise lines of code (so far), and hundreds of computer crashes. I’ve been programming since I was an 11-year-old. This year, I studied AI part-time at Georgetown University as part of my research for this series. But making AlphaBean was different from anything I’ve ever done before. Machine learning is not what many imagine it to be — dozens of programmers and mathematicians writing programs on computers and formulas in notebooks until they crack the code and finally… voila: It’s alive! The reality is that machine learning is an iterative process. Even an AI program’s creators don’t always know where the adventure will take them. Here’s how my journey began. I started with a standard set of machine-learning tools from the academic world. I spent weeks with dozens of their algorithms, getting a feel for the different ways in which they can be used alone or in combination with one another to invest. I thought a lot about what kinds of financial metrics would be a good fit. Programmers typically hire subject-matter experts to orient the problems they face, suggest heuristics, and identify when AI platforms produce silly answers that lack common sense. But AlphaBean was a lean, one-man, up-until-2-a.m., evenings-and-weekends operation. Luckily I had a sufficient background in investing to serve as my own subject-matter expert, for I had no money with which to hire one. There’s a belief that more information is always better, but that’s not always the case. It’s often better to be thoughtful about what you’re doing, rather than to throw everything against the wall and see what sticks. Once I’d carefully selected metrics, I meticulously typed years’ worth of publicly available information into a spreadsheet. I ran machine-learning tools on the information that I’d inputted to understand how different learning techniques react to the problem of investing and to identify which algorithms were promising candidates for getting to my desired result: 3-year outperformance. To give you a lens into the process, here are two examples of such techniques: Bayesian networks and neural networks. Their names may sound similar, but they aren’t at all the same things. A Bayes network can organize information into a web of causal and probabilistic relationships between different events (Hey look — CatfudCorp was really profitable over the past few years, and its stock went up, too! Maybe there’s a connection). As a Bayes network encounters new information, it employs statistical calculations to reweight its network of probabilities. (For my coursework, I built a Bayes network from scratch, and I can tell you it’s a beautiful experience to watch each part of the network communicate with each other part to reweight all these probabilities — all on its own.) Finally, you can ask it to spit out predictions for which stocks it believes will be solid performers over the long run. Recall that a neural network also models a web of interconnected data nodes, but its nodes are organized into layers, and they don’t necessarily represent real-world features. Each node contains a series of “weights” that indicate how much influence it has on each node in the next layer.
The neural network’s learning algorithm laboriously churns through input examples over and over, checking each time to see whether the network produces the correct output for the input. If it doesn’t (Huh. When I plug in CatfudCorp’s financial information, I thought the stock would have been a loser, but actually it was fantastic), it tweaks the nodes’ weights according to cumbersome mathematical formulas and so hopefully “learns” how to produce more accurate outputs next time. The differences may sound academic, but trust me — they’re not. Algorithms are tools, and some tools — or combinations of tools — are better suited to some problems than to others. And the neural network took me 30 times as long to run. Trying a different process meant gaining a new insight, which meant more tinkering, and starting over. Again and again. After all, if I was going to put my money behind my AI software’s picks, I had better be darn sure the system was well-tested. AlphaBean is a vastly more sophisticated creature today than it was when I started the project. With each iteration, I encountered new issues that demanded comprehensive problem solving. Over time, I designed and coded various tools that have made AlphaBean into a smarter, more objective student. And that’s the critical thing. The COMPAS criminologist system shows what an enormous challenge bias and overconfidence can pose for machine learning. These pitfalls underscore why it’s important to view machine learning as an experimental science. As with any test, you’re never sure how it’s going to turn out until you conduct it. Intuition is not a reliable guide. This meant that the bulk of my 1,000-plus hours was spent not on making the actual AI, but on devising clever ways to test and improve its investing skill. Originally I thought I might collect some encouraging results or stock ideas in just a couple of months. Instead, I’ve taken the meticulous path, and have now spent upwards of six months handcrafting my own custom tools just for evaluating AlphaBean’s performance. Throughout the process, I faced many of the same challenges I’ve written about — avoiding bias, mitigating overconfidence, interpreting information, and making AI software that can think for itself. There’s no free lunch Creating AlphaBean would have been much easier if all I wanted it to do was automate my own thinking processes. But because I wanted AlphaBean to improve upon my knowledge, I needed to program it with the ability to discover its own strategies. And I had to begin its training with a clean slate so as not to bias it toward mine. What’s more, I didn’t want to assume that I knew how AlphaBean should work better than AlphaBean did. Just as human students have individual learning styles that suit different learning situations better than others, certain machine learners work better than others at mastering certain problems. That’s because — as I discussed previously — there’s no such thing as the perfect AI program. Researchers have had much more success tailoring individual AI systems to specific problems than building a logic machine capable of general intelligence. Certain algorithms work better than others at certain problems. This fact was captured in 1999 with a mathematical proof similar to the economic adage that you can’t get something for nothing: AI’s “no free lunch” theorem tells us that AI software attuned to one problem is necessarily worse at solving others.
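For readers who want to see the weight-tweaking loop above in miniature, here is a toy, pure-NumPy illustration; the five “financial metrics” are synthetic, and this is emphatically not AlphaBean’s code:

```python
# Toy version of the loop described above: show the network examples,
# compare its outputs with the true labels, and nudge the weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # 5 made-up financial metrics
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (X @ w_true > 0).astype(float)       # 1 = "outperformer" (synthetic label)

W1 = rng.normal(scale=0.5, size=(5, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(2000):
    h = np.tanh(X @ W1)                  # hidden layer activations
    p = sigmoid(h @ W2).ravel()          # predicted probability of outperforming
    err = p - y                          # how wrong was each output?
    # Backpropagate: tweak each layer's weights against the error gradient
    gW2 = h.T @ err[:, None] / len(y)
    gW1 = X.T @ ((err[:, None] * W2.T) * (1 - h**2)) / len(y)
    W2 -= 1.0 * gW2
    W1 -= 1.0 * gW1

print("training accuracy:", ((p > 0.5) == y).mean())
```

A real system adds validation data, regularization, and careful testing, which is exactly where the bias and overconfidence pitfalls discussed above come in.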
It may seem that AlphaGo, which excels at Go, chess, and energy management, violates the no-free-lunch rule. But as a product of deep learning, AlphaGo is best suited for learning subtle patterns out of large data sets. AlphaGo would be terrible, say, for identifying extremely rare diseases or one-of-a-kind cyberattacks — problems which by definition lack ample data. Instead, rule-based systems with algorithms that follow a series of hard-coded procedures (if this, then do that) would work better than AlphaGo at detecting unique threats whose properties can be defined by experts but not mountains of data. As for AlphaBean, I’m exploring its styles and quirks. Even though I’m its owner and creator, I won’t understand everything about my program until we’ve gotten to know each other better. AlphaBean has a mind and personality of its own. That’s the point. So I’m letting AlphaBean teach me how it prefers to learn. The struggle is real There’s more to evaluating a company than running numbers through a computer. Investors face squishier questions that even human intuition struggles to work through. For example, we all know that company management doesn’t always give us the full story, that they sometimes massage earnings, and that it takes critical judgment to see through the smoke and mirrors. One investor relations officer I cited earlier this year put the quarterly earnings game in these stark terms: The company can talk to the analysts [privately before a conference call] … and basically guide the analysts to help manage the public call. … If I can call some of my investors or analysts before the public call, then the analyst doesn’t ask me a really tough, embarrassing question in public. Is management trying to hide something? Do the numbers capture reality? How will big-picture social trends affect this business over the long run? What are its competitive advantages? Number-crunching techniques can’t easily ask or answer questions such as these. To compensate, I’m looking into which kinds of quantitative proxies help to answer qualitative enigmas. For example, heavy insider ownership might be a good proxy for shareholder-friendliness, because it demonstrates some alignment between shareholder interests and management’s self-interest. Machine-learning techniques for natural-language processing offer some promise of bridging the quantitative-qualitative gap. Over the past decade, news agencies and internet content creators have been using AI to help craft stories. Today, programs can collect figures for reporters, alert them to unusual data, identify viral topics, and even write templated stories. But AI can’t do the reporting itself. It takes dogged research and human judgment to comb through all kinds of qualitative issues like managerial integrity or competitive advantages. AI is less capable than humans at assessing unquantifiable human intentions, motives, and subterfuge. When it comes to understanding people, humans are still top dog. To handle these problems, I managed to combine some of my own investing knowledge with AlphaBean’s while using sophisticated techniques to avoid biasing it. Now I’m a believer Despite all of these challenges and AlphaBean’s youth, it’s already beginning to change how I look at companies, both as an investor and as a journalist. Here’s just one example. We often treat earnings per share (EPS) growth as one of the most significant measures of business performance.
It’s touted by management, harped on by analysts, and followed closely by investors and the media. Who doesn’t like profit? But EPS has some flaws. It’s easier for management to manipulate than other financial measures. And it’s not always the most relevant metric. Academic studies have suggested that the prominence of EPS stems in large part not only from its accounting explanatory abilities, but because it acts as a signal to traders and speculators. Momentum traders need a way to coordinate their behavior, and EPS provides that, as I’ve written before: “[A]n earnings beat means we all buy, and a miss means we sell.” AlphaBean seems to agree that when it comes to long-term investing, EPS shouldn’t always be the headline number. Sometimes my program prefers to look at free cash flow, a different metric that’s harder for management to manipulate and can do a better job describing businesses with improving working capital efficiency. Its choices depend on context. Every company is different. AlphaBean’s urge to examine each company in a unique way has also changed how I look at financial information. As a writer, editor, and reader of financial news, I am now much more skeptical of articles that draw significant conclusions about a company from generic financial metrics. After all, as an executive, you wouldn’t manage Facebook the same way as Tractor Supply. EPS growth, revenue growth, operating margin, and so forth matter, but they aren’t always the most significant pieces of information for every company. Knowing what to look for takes knowledge, experience, and sensitivity to a company’s individual circumstances. So why keep me around? I analyze companies. AlphaBean can do that. I write about business. AI does that, too. What’s left for me to do? A funny event points to an answer. Earlier this year, a software engineer and impatient Game of Thrones fan decided he’d had enough after waiting six years for the publication of book 6. To speed things along, he created his own author: a neural network that could write in the style of his beloved fantasy series. In July, he uploaded five chapters to the internet. The writing is interesting — and reminiscent of our knight flaunting his fish-eyeball cloak that Google’s neural network produced. Here’s a snippet: “Aye, Pate.” the tall man raised a sword and beckoned him back and pushed the big steel throne to where the girl came forward. Greenbeard was waiting toward the gates, big blind bearded pimple with his fallen body scraped his finger from a ring of white apple. It was half-buried mad on honey of a dried brain, of two rangers, a heavy frey. Wow. What happened to his half-buried-mad-on-honey-of-a-dried-brain AI? It had learned to imitate style, but meaning remained elusive. And that’s the heart of the problem. As the inventor of modern logic noted, there’s a big difference between the things in the world we talk about, and the sense, or cognitive significance, of those things. AI struggles to make sense out of things. Humans will continue to surpass machines for some time in areas like appreciating contextual nuance, weaving together disparate ideas, comprehending human motive and intent, integrating an interdisciplinary conception of the world, and general intelligence. AI is simply not at our level yet. Humans are also far superior at generalizing from a small number of experiences.
It’s a miraculous ability we have, and no one understands how we do it. Remember that the first version of AlphaGo studied positions from 30 million human games and played more than 30 million practice games with itself. Lee Sedol began serious training at age 8, practicing 12 hours a day. That means AlphaGo required at least 500 times as much practice as Lee to achieve a comparable level of skill.

We’ve also seen that the current trend in AI is not to imitate human understanding of the world, but to accomplish a task — and, for each AI, to perform a single, specific task. This approach has proven effective, but it leaves AI vulnerable to missing the bigger picture. That bigger picture may not be that significant if you only want to discover which facets of a business are worth looking at, or you only want to predict which companies will succeed. But if you want a why, you need a human being to connect all the dots. We’ve seen AlphaGo flub a crucial moment in a game whose environment consists of 361 points, two colors of stones, perfect information, and a handful of fixed rules. Human interactions have far more features than either the physical world of robots or the game-board world of AlphaGo. At The Motley Fool, we try not to just throw data at our readers, but to contextualize, educate, and help people understand concepts. That is exactly what AI fails to do. Correlation is not explanation.

So what is it good for?

That said, AlphaBean or similar tools could make my life and my job easier. Much as we’ve seen AI transform certain jobs, AlphaBean will allow me to focus on higher-value parts of my life. For one, AlphaBean could work like the radiologists’ CADs I described in part 3, only for investors and journalists. It could apply its own triaging layer of pattern recognition to look at information in a different way than I would, and to return with a second opinion. AlphaBean could also bubble up suggestions for me to look at, like a kind of AI stock screener. It already has the ability to analyze obscure, under-the-radar companies and scoop up ideas that I would have missed. I’m also refining AlphaBean to the point where it can invest part of my retirement savings. If rigorous testing shows that my program can consistently beat the market and do a better job investing than I can, I’m happy to let it take over.

Finally, I’m convinced that AlphaBean will teach me new ways of looking at businesses. It has the ability to develop its own investing strategies and to identify characteristics of companies that I had never considered. I’ll be watching how AlphaBean learns to examine the world and learning from it. Understanding the ways AI is best suited to serve us is the best way to ensure that it does. For in spite of its astonishing powers, AI is not perfect. Just like each of us, algorithms have their flaws. We all need to realize what’s inside the black box if we are to understand our new world of AI and have a say in how that world unfolds.

AI can’t go it alone

A few months after AlphaGo defeated Lee Sedol, I confided in a friend who trains AI for DeepMind that I had been initially skeptical of AlphaGo: I had thought the whole endeavor typical technological hubris, the belief that people were smart enough to program something as capable as a human being. His response? Maybe it’s hubris to think humans are so smart that we can’t make something smarter than us. He may be right.
Computers are already better than us at many things, and though AI has a long way to go, it’s quickly making up ground. AI’s growing capabilities in strategic thinking, robotics, language processing, and learning are leading to new developments in healthcare, manufacturing, retail, the home, and every white-collar job. But AI also encourages overreliance; it can be opaque, literal-minded, and susceptible to bias; and it can crash unpredictably into erratic, catastrophic dysfunction that offends common sense.

Human ingenuity and adaptability have carried mankind through its relatively brief existence. They’ve helped us dominate the planet. And they’ve permitted us to fashion a powerful new technology whose thinking is alien to ours yet is created in our own intelligent image. Powerful though it may be, AI won’t do what we want it to do without our understanding and guidance. Only if we’re careful with this awesome power can we create a world worthy of our values.

Originally published at www.fool.com on December 31, 2017.
Planned Obsolescence
0
planned-obsolescence-161109eec3b3
2018-04-18
2018-04-18 12:37:32
https://medium.com/s/story/planned-obsolescence-161109eec3b3
false
3,610
To help the world invest better.
null
themotleyfool
null
The Motley Fool
editorial@fool.com
the-motley-fool
INVESTING,STOCKS,PERSONAL FINANCE,STOCK MARKET,MONEY
themotleyfool
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The Motley Fool
Founded in 1993 by brothers Tom & David Gardner, The Motley Fool helps millions of people attain financial freedom through our website, podcasts, books & more
c18191d81ca
themotleyfool
69
3
20,181,104
null
null
null
null
null
null
0
# Forward pass for a one-hidden-layer network
# x: input matrix; w1: 1st weight matrix for multiplication; w2: 2nd weight matrix for multiplication
Hidden = matrix_multiplication(x, w1)
Hidden_rectified = Clip(Hidden, minimum_value=0)
output = matrix_multiplication(Hidden_rectified, w2)

# Plain gradient descent update
Value = Value - gradient

# Momentum update
velocity = velocity * momentum + gradient * (1 - momentum)
Value = Value - velocity

# Nesterov momentum: evaluate the gradient at the anticipated next position
Theoretical_next_state = Value - Velocity
gradient = gradient at Theoretical_next_state
Velocity = Velocity * momentum + gradient * (1 - momentum)
Value = Value - Velocity

# Leaky ReLU: ReLU(x) = max(x, 0) becomes
lReLU(x) = max(x, .1*x)
5
null
2018-08-18
2018-08-18 20:41:25
2018-08-18
2018-08-18 23:52:48
5
false
en
2018-09-16
2018-09-16 20:03:13
2
1611d382c6aa
4.512579
0
0
0
This first blog post will help you design a neural network in Python/Numpy. It will demonstrate the downfalls of vanilla Multi Layer…
1
Neural Network Introduction for Software Engineers 1 — A Vanilla MLP

This first blog post will help you design a neural network in Python/NumPy. It will demonstrate the downfalls of vanilla Multi-Layer Perceptrons (MLPs), propose a few simple augmentations, and show how important they are. We will conclude by demonstrating how this could look in a well-organized software engineering package. For a non-technical preface, read ML Preface to learn about regression.

First, we will build a simple neural network (NN, or multi-layer perceptron/MLP). Mathematically, we will define a neural network with one hidden layer as follows: Hidden = x·W1; Hidden_rectified = max(Hidden, 0); output = Hidden_rectified·W2. In this way, we calculate the output as a function of the input. We look back at calculus and linear algebra to optimize (train) our MLP to match the dataset by minimizing a loss function via gradient descent.

Oh no! Our loss explodes. A basic neural network is very sensitive to its learning rate because its gradients can explode. Once gradients get large, they can keep oscillating their corresponding weights back and forth and explode to infinity. Let’s fix this by clipping the applied gradients to [-1, 1]. Let’s also decay the learning rate towards 0 to allow the network to first approach a reasonable solution with a large learning rate, and then settle on a specific solution with small adjustments. Gradient clipping is also sometimes done by limiting the gradient to a maximum vector norm, and learning rate decay is sometimes ignored altogether in favor of other fine-tuning strategies like L2 regularization or batch-size increases.

Great! Our loss function is now decreasing towards zero. Have you heard about momentum optimizers or Adam optimization? Look at the loss wiggling in the above curve: do you notice that it oscillates up and down while moving generally downwards? Momentum is the idea of maintaining velocity in a reasonably constant direction. When we receive a gradient to adjust our parameters, we update the velocity at which we are moving, and then continue traveling in roughly the direction we think is productive in our parameter space. Precisely, the plain update Value = Value - gradient becomes: velocity = velocity * momentum + gradient * (1 - momentum); Value = Value - velocity. Now, with momentum added and the loss function factored out as a layer of abstraction (to prepare for alternative loss functions), our code improves. Great! Our loss gets small much quicker, approaching a much smaller value.

Let’s add one more machine learning tool to help the model learn more reliably, called Nesterov momentum. The inspiration behind Nesterov momentum is as follows: at a given step with momentum, you know approximately the next step you will take. So let’s look to where we expect to move, calculate the gradient there, and use that gradient to correct our momentum vector. Then, with our new momentum vector, we step as we always do. At the same time, we notice a problem with our ReLU non-linearities: there is zero derivative in the “off” part of the ReLU curve. Let’s also fix this zero derivative by replacing our ReLUs with the much more popular leaky ReLUs, where the flat region has a gentle slope. Bear with us for this one segment, because the code gets a little crazy to demonstrate the math. In the next section we will organize it so the software engineers in the audience feel comfortable.

Great! Nesterov momentum worked. But look at this code! It’s horrible. We’ve hard-coded this whole system to implement a neural network with exactly one hidden layer, a leaky ReLU non-linearity, and full-batch optimization. If we wanted to change any of these, we would have to change multiple systems to hard-code whatever other variant we wanted to try. I’ll leave out the explanation for now, but let’s look at how much cleaner and reusable this exact system could be. We’ll leave it as one file for now so we can show it easily in a blog.

Hurray! We have clean software that defines, trains, and evaluates a model. There are still some huge problems with our implementation from a machine learning and software engineering perspective, but it demonstrates the general concepts. In the real world we would (future sequential blog posts):

- Solve a real-world problem, training and then evaluating on a separate validation dataset.
- Think about the meaning of our data, and build our model to represent it in a thoughtful way. (Link to Convolutional post)
- Think about our loss function in a meaningful way, building it to give us a meaningful gradient. (Link to house price loss post)
- Think about what we want like above, but implement it using a neural network API like Tensorflow. Wow, that was easy! (Link to Tensorflow house price post)

A consolidated sketch of the techniques from this post follows below.
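Pulling the pieces above together, here is a minimal NumPy sketch of the full recipe: one hidden layer, leaky ReLUs, elementwise gradient clipping, and Nesterov-style momentum in the moving-average form used in this post (with the learning rate folded into the gradient term). The toy sine-regression data, layer width, and hyperparameters are placeholder choices for this sketch, not the post's originals.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: regress y = sin(x).
X = rng.uniform(-3, 3, size=(256, 1))
Y = np.sin(X)

W1 = rng.normal(0, 0.5, size=(1, 32))   # input -> hidden
W2 = rng.normal(0, 0.5, size=(32, 1))   # hidden -> output
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)

def lrelu(h):        # leaky ReLU: gentle slope instead of a dead flat region
    return np.where(h > 0, h, 0.1 * h)

def lrelu_grad(h):
    return np.where(h > 0, 1.0, 0.1)

momentum, lr = 0.9, 0.1
for step in range(2000):
    # Nesterov: compute the gradient at the anticipated next position.
    W1_ahead, W2_ahead = W1 - V1, W2 - V2

    H = X @ W1_ahead
    A = lrelu(H)
    pred = A @ W2_ahead
    err = (pred - Y) / len(X)            # d(MSE)/d(pred), constant folded into lr

    g2 = A.T @ err
    g1 = X.T @ (err @ W2_ahead.T * lrelu_grad(H))

    # Clip gradients elementwise to [-1, 1], then take a momentum step.
    g1, g2 = np.clip(g1, -1, 1), np.clip(g2, -1, 1)
    V1 = momentum * V1 + (1 - momentum) * lr * g1
    V2 = momentum * V2 + (1 - momentum) * lr * g2
    W1, W2 = W1 - V1, W2 - V2

    if step % 500 == 0:
        print(step, float(np.mean((pred - Y) ** 2)))

With the non-linearity and its gradient factored into functions of their own, swapping in a plain ReLU or a different loss becomes a one-line change, which is exactly the reusability argument made above.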
Neural Network Introduction for Software Engineers 1 — A Vanilla MLP
0
neural-network-introduction-for-software-engineers-1611d382c6aa
2018-09-16
2018-09-16 20:03:13
https://medium.com/s/story/neural-network-introduction-for-software-engineers-1611d382c6aa
false
975
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Lee Tanenbaum
My Machine Learning Blog leetandata.com medium.com/@leetandata github.com/leedtan
cb217931d2c
leetandata
9
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-08
2018-05-08 13:13:19
2018-05-08
2018-05-08 13:25:59
4
false
zh-Hant
2018-05-10
2018-05-10 16:45:18
6
1611f480bafc
0.907547
9
0
0
Preface
5
Building a C# Machine Learning Core from Scratch

Preface

I wrote this piece at the invitation of Marcel Wang, the teacher of the earlier Python Spark decision tree course, though I wasn't sure how best to write it, so let me first describe how that course went. He had opened a discussion thread about Spark and decision trees in the Python Taiwan group on Facebook; I happened to be curious about what machine learning actually does, so I joined the thread with a +1.

Fig1. Support Vector Machine[1]

The Python Spark course

The course ran on homework: once someone posted a given week's assignment, the next week's assignment was released. The first few weeks required collecting and digesting a fair amount of material and literature, but the upside was that if you didn't know how to do an assignment, you could study the work of those who had finished first. For working adults the course was not a light load, since assignments had to be done after work and on weekends, but it turned me from someone who knew nothing about ML into someone who could explain roughly what ML is.

I don't have much staying power, though, and stopped after assignment 9. Partly the teacher stopped releasing assignments, partly my work and daily life had no scenario that called for Hadoop, and the Hadoop/Spark/Scala versions were a complete mess, so I simply got lazy.

The impetus for building my own core

Fast-forward to March this year. I was building a machine-to-machine network at my company, and halfway through it struck me that using ML to judge whether process parameters are good or bad might be a fun and feasible project, so I dug out the literature and material I had read before.

Originally I planned to just use an existing library, but it turned out the company network completely blocked Python lib wheels, and I was too lazy to figure out how to get around that, so I decided to read the source code of those libraries instead. That turned out badly: I couldn't understand what they had written at all. Clearly this wouldn't do; I'd never build anything that way!

Fig2. Overview of perceptron[2]

The perceptron

After some thought I decided on the dumbest possible approach: build an ML core I could actually understand, step by step from the mathematics. (Industrial control and process data hardly count as "big" data anyway; 2 GB a day is already good, so speed was not my first concern.) In industrial control and machinery, C# is king (C/C++ is for the gods; I can't play at that level), so I wrote it in C#. I started with neural networks and spent half a day writing the most basic perceptron, which turned out not to be that hard. I chained a dozen identical perceptrons together to run on the iris data, and it felt OK. The next day I took on SVM, and discovered I had been far too naive: it was genuinely hard to understand.

Fig3. SVM[1]

Support vector machines

There is an enormous amount of SVM material online, but very little of it explains the mathematics in detail, and quite a bit is wrong: vector and scalar notation confused, symbols ill-defined or misused, and so on (dizzying). After several dizzy days I finally found a very accessible resource, MIT's machine learning course [3], and following its steps I managed to build a binary-classification SVM. That settled the classification logic, but it required heavy matrix computation, so within a week I crammed a lot of linear algebra. (I had only taken engineering mathematics in my sophomore year, which covered half a semester of linear algebra, and I had long since returned it all to the professor.) I built a Gaussian elimination solver to handle the matrices, but in practice it blew up on anything larger than about a 20x20 matrix. Looking for alternatives, I found an algorithm called SMO [4], which replaces the heavy matrix computation with an iterative method while guaranteeing a correct result. Whoever invented this is a genius! I ended up adopting the simplified SMO [5], whose write-up is even clearer than the original paper; highly recommended. Once SMO was written, running it felt wonderful: it works! (A trial run on the iris data gave over 90% accuracy.) And the whole algorithm came to fewer than 300 lines. I was thrilled. (A small Python sketch of the basic perceptron update follows below.)

Fig4. Decision Tree[6]

C4.5 decision trees

After the battering from SVM, writing a decision tree was easy by comparison: once you fill in the basic statistics, such as the binomial distribution, you can write a C4.5 node. I later found that decision trees are less convenient for choosing ranges over double/float data (mainly because I haven't collected enough literature yet; the examples I've seen all use string data as teaching material), so for now I only use it to classify string data, such as records from the MES.

Applications

I currently use the SVM to infer whether newly added process parameters are good or bad, but the accumulated sample count is still too small for a clear trend to emerge. Maybe there is no trend at all, or maybe once the data grows I'll discover that what I wrote is wrong.

Conclusion

I never expected that casually joining the Python Spark course, and the material and ML knowledge I gathered there, would end up applying to my own field of work. An unplanned harvest, and an odd enough experience to be worth sharing here.

References
[1] https://en.wikipedia.org/wiki/Support_vector_machine
[2] http://ataspinar.com/2016/12/22/the-perceptron/
[3] https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/tutorials/MIT6_034F10_tutor05.pdf
[4] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.4376&rep=rep1&type=pdf
[5] http://cs229.stanford.edu/materials/smo.pdf
[6] https://en.wikipedia.org/wiki/Decision_tree
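For readers who want the flavor of that first perceptron without reading C#, here is a minimal sketch of the classic perceptron update in Python/NumPy. This is an illustration of the algorithm added for this edit, not the author's C# implementation; the function and variable names are placeholders.

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    # X: (n_samples, n_features); y: labels in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi        # nudge the boundary toward the point
                b += lr * yi
    return w, b

Chaining several of these, as the author describes doing on the iris data, amounts to training one such unit per class and taking the most confident one.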
Building a C# Machine Learning Core from Scratch
72
自建c-機器學習核心-1611f480bafc
2018-05-28
2018-05-28 06:44:12
https://medium.com/s/story/自建c-機器學習核心-1611f480bafc
false
55
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
吳政龍
null
a1fc27e08a16
ZhengLungWu
16
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-13
2018-03-13 23:46:54
2018-03-14
2018-03-14 01:33:30
2
false
en
2018-03-14
2018-03-14 01:33:30
0
16141c208848
4.021069
0
0
0
Train GD and SGD and Convergence
1
CSE 446 Review

Training GD and SGD, and convergence:
- GD with an optimal learning rate converges at a constant rate to a given accuracy; SGD needs a decaying learning rate to be guaranteed to converge.
- GD and SGD hyperparameters: step size, epochs, regularization.
- Mini-batch size: larger batches give a better gradient estimate, at some cost in performance.
- Data should be i.i.d. (expected values from a distribution, all random) and needs to be permuted.
- GD and SGD optimize a loss function, not misclassification, when updating weights.
- If the learning rate is too large, you will overshoot the minimum and diverge.
- GD and SGD will not converge if the objective is not regularized.

Backpropagation:
- You can't initialize the weights to zero; depending on your function, it won't optimize. With sigmoid (0 to 1), the weights will always stay the same, which causes problems with symmetry. Tanh ranges from -1 to 1.
- Linear regression does not have bad critical points when minimizing squared error.
- A point is stationary when the gradient is zero; it is a saddle when the second derivatives with respect to two variables have different signs, otherwise a local min or max.
- Saturation (the vanishing gradient problem): when your weights are too big on log loss, updates change them only a little; with too many layers, the first layer has very little impact and barely changes.
- ReLU just dies: no saturation, but once it hits zero it turns off and doesn't update.
- Mini-batching is more efficient.

Regularization trade-offs:
- With one or infinitely many solutions, you want to punish a bad one if you get it.
- L1 versus L2 regularization: L1 favors weights that get really close to zero; L2 wants the weights to be the smallest overall.
- Early stopping: fewer iterations.

Linear and logistic regression:
- Logistic regression is used more for binary classification.
- Linear regression as a probabilistic (generative) model: everything follows a line with some noise. Let the center be w·x; each point is drawn from a normal distribution centered on the line, and the farther a point is from the line, the less confident you are in your answer.

PCA is an unsupervised learning algorithm.

Linear separability and independence:
- More dimensions than samples implies linear separability; squared loss goes to zero, and so does log loss, because you can perfectly fit that line. More dimensions than samples also gives a tighter bound.
- Separability condition: y_n (w · x_n) ≥ 1; when y and y-hat agree, their product is always at least 1. Linear independence: no combination of columns creates another column; D x 1 is a column, and its sign (positive or negative) decides its value.

Feature mapping and kernels: a kernel like (1 + x · z)^2 explodes the feature space, giving a non-linear decision boundary while keeping convexity.

Convexity: one local minimum versus many local minima.

Computational graphs:
- A function with no if/else statements can be converted into a graph where one output feeds into another; the derivative can be computed within an additional constant factor (Baur-Strassen).
- On the forward pass you touch every node once; on the backward pass you touch every node once as well, so it takes linear time to go over all the nodes. We did the matrix version, which made it a little more complicated. There is no transfer function on the output node.

MLE; one-versus-all multi-class: take the highest probability. Softmax converts your scores to probabilities, each divided by the sum of all of them; a small sketch follows after these notes.

Convolutional neural nets: group neurons into specific features so the network won't overfit; use groups of shared weights on the matrix, since things that are close together should be related.

Probabilistic models and latent variables:
- Latent variables: labels you didn't observe and are trying to infer.
- Probabilistic models are generative stories, e.g. all my points come from a line and my information is centered around it. It all comes down to the inductive bias and how you phrase it: just a story for how the data is generated.

PCA details:
- Variance = sum over n of (x − mu)^2 / n. You have to recenter your data first; use the projection to reconstruct your points, and the error is the original point minus its reconstruction, squared, summed, and averaged. Norm: Euclidean distance from the origin (everything squared, summed, square-rooted).

Perceptron: converges if the data is linearly separable; the bound involves the closest point to the boundary (times 2).

KNN: choose the k nearest neighbors and pick the best label. K-means: make hard assignments, then reassess the means.

Training, dev, and test sets: if they don't use the tree, they don't use the test set, so it's not biased.

HW 3:
- Infinitely many solutions: can't invert.
- Why linear regression is bad here: trying to predict discrete labels with a continuous model.
- Log loss goes to infinity without lambda; GD will continue to optimize, also to infinity.
- d ≥ n means linearly separable, which implies the first point.
- Regularization keeps the weights from going to infinity; the same goes for early stopping, which leaves less time for them to get too big.
- Bayes' rule; MLE; Poisson.

HW 4:
- Squared error does not converge to zero; log loss goes to zero. When d ≥ n, squared loss does go to zero, and so does log loss; there are infinitely many solutions.
- If misclassification hits zero, does GD stop? No; it only depends on the loss function.
- Forward pass with a matrix; time complexity of backprop: everything touches each node once. Output-layer gradients are not computationally different.
- Zero hidden units is linear regression: not a saddle point, it's convex. A stationary point is most likely a saddle point (unstable), depending on your data; with sigmoid, not a saddle point. Symmetric: even is flipped.
- Linear regression with a quadratic feature mapping; feature mapping to solve other problems.
- Design your network with Boolean algebra: every perceptron can represent a Boolean algebra operator.

See CIML Chapter 16 for EM.
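As a concrete anchor for the softmax note above, here is a minimal NumPy version; subtracting the max before exponentiating is the standard numerical-stability trick, and the sample scores are arbitrary.

import numpy as np

def softmax(scores):
    # Convert raw class scores to probabilities.
    shifted = scores - scores.max(axis=-1, keepdims=True)  # avoid overflow in exp
    exps = np.exp(shifted)
    return exps / exps.sum(axis=-1, keepdims=True)

print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659, 0.242, 0.099]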
CSE 446 Review
0
cse-446-review-16141c208848
2018-03-14
2018-03-14 01:33:30
https://medium.com/s/story/cse-446-review-16141c208848
false
964
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
zhou_xiaoquan@hotmail.com
null
b318f782ae1f
zhou_xiaoquan
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-24
2018-05-24 09:14:25
2018-05-24
2018-05-24 11:05:57
4
false
en
2018-05-24
2018-05-24 11:05:57
2
16159b6071da
2.15283
2
0
0
There is always a misconception about Machine learning being this scary technology where robots will one day replace humans completely and…
2
What really is Machine learning

There is a common misconception that machine learning is a scary technology through which robots will one day replace humans completely and take away all our jobs. But the truth is that machine learning is really quite different from that.

Health care: Diabetic retinopathy is one of the leading causes of permanent blindness. The disease cannot be cured, but if detected early, there is a chance to provide treatment. Thanks to AI, we can now predict cases of diabetic retinopathy far earlier and with great accuracy. Imagine the impact that could create in the lives of those patients.

Education: There are still many countries in this world where primary education for children is a major concern. Thanks to advances in machine learning, we can now assign a machine to teach such kids. Imagine how it could change the lives of those students. Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms; the embodiment of the robot, situated in a physical setting, provides specific difficulties at the same time. After such training, a robot will be able to teach the same material.

Agriculture: Seed retailers, for example, are using AI products to churn through terabytes of precision agricultural data to create the best corn crops, while pest control companies are using AI-based image-recognition technology to identify and treat various types of bugs and vermin. Such markedly different scenarios underscore how AI has evolved from science fiction to practical solutions that can potentially help companies get a leg up on their competition.

Defense services: Machine learning also contributes a great deal to a country's defense. We can now have an AI system detect foreign objects in the air across the border. Imagine how many soldiers could be saved from terrorist attacks, and how much more at peace their families could be.

And not only these: there are many more fields where machine learning and its advancements can have a positive impact. These are just the early wins of machine learning; many more are to come. Though some people have concerns about the downsides, I am quite optimistic about it. Cheers… Sunil, An ML enthusiast.
What really is Machine learning
52
what-really-is-machine-learning-16159b6071da
2018-05-24
2018-05-24 11:50:39
https://medium.com/s/story/what-really-is-machine-learning-16159b6071da
false
385
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
sunil kumar
null
2ecec6338314
sunil247621
1
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-17
2018-02-17 03:31:39
2018-02-17
2018-02-17 03:43:13
2
true
en
2018-02-17
2018-02-17 03:43:13
2
161653c47685
3.613522
51
0
0
During Google’s early days, I really liked their “I’m feeling lucky” button for two reasons. The geeks behind google search were betting…
5
The Biggest Change at Google and It Was Subtle

During Google's early days, I really liked their "I'm feeling lucky" button for two reasons. The geeks behind Google search were betting that they knew what was likely the best result for me. That takes guts. And on the sparsely populated white page, on the pride of technical wizards, the word "feel" found a prominent home. That pleased my right brain as well.

How did they get the feel right? Here is the backstory in everyday words. If we think of webpages as pages of a book, Google's algorithm was unique: their search was like the index section in the back of the book rather than the table of contents in the beginning. In a typical book, the page numbers after each word (on the index page) are arranged in ascending order of occurrence. Google reordered the page numbers after a word based on how often people referred to an internet page. The first among the results became the result of the "I'm feeling lucky" button.

Google continued their pursuit of indexing every page on the internet and showing us the search time in centiseconds, their geeky pride. The challenge today: the book has grown much bigger, and "I'm feeling lucky" still exists, but as a play button, like a slot machine coming up with other options. Google search, through the years, has made rapid strides in figuring out what we are literally searching for, based on a hierarchy of rules. Can Google search morph into an educated guess about what we are really hoping to find? That feels like a dream, and it is.

Dreams

Understanding dreams starts with the answer to this question: What does the picture look like?

Michael Tyka and MIT Computer Science and AI Laboratory. https://goo.gl/photos/fFcivHZ2CDhqCkZdA

Your answer, "buildings, architecture," did not take long, did it? From the visual neurons in your eye all the way to the processing in the folds of your brain, it was almost instantaneous. Can you explain, step by step, how you came up with it? You may pause to figure out how you did it instantly. Intuition is a good answer. It is a sophisticated form of analysis, something we all do so well: the beauty within us.

On the other hand, what happens if we let a machine do the same by replicating the neural connections, the synapses that create the magic within us? A deep-learning network of artificial neurons is like a house of cards (stronger and firmer than cards) with the inputs at the base and the output at the top. Every layer acts as a foundational input for the upper layer. What happens when the machine shares an answer consistent with our intuition? The bar is higher; we would like to know how it came up with this answer. Our curiosity for logic and earnestness to verify take over before we can outsource our trust to the machines. In other words, can we pause at every layer and get a human peek at how the black box dreams up the final result? (A toy sketch of this layer-by-layer peek appears at the end of this piece.)

Thanks to rapid advances in computation speed and technology, engineers at Google and labs around the world are able to do just that. A sample below shows how a lower-layer neuron looks at the picture.

By Zachi Evenor, Günther Noack [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

"All this is interesting, but what does it mean for you and me?" If your thoughts mirror these words, you are not alone. The simple answer: the part of our memory we have outsourced daily, Google search, is getting turbocharged with machine intuitiveness. Symbols are powerful; many times, they share more than what they represent.

The passing of the baton for Google search happened two years ago. The current head of Google search is the former head of the artificial intelligence group at Google. This was a singular moment* in our history, where deep learning moved from the labs to a mainstream commercial product on such a large scale.

Our Human Future: What it means for careers (for our kids next)

Feeling lucky with a good educated guess about what I am searching for is one thing. Feeling a churn at the bottom of our stomachs, on the other hand, is a natural reaction. The subtext is about machines mimicking our intelligence: we are challenged on our home front. A replica of feelings (even if fleeting): I could relate when I first saw Arnold Schwarzenegger land on Earth as the Terminator in 1984!

Here is the comfort: with all the rapid advances, these deep learning machines are like horses with blinkers. They learn what they are taught in front of them. Anything outside their realm, like connecting the dots between random topics, is still the prerogative of the human mind. Creativity is the human hallmark, with our emotions ruling the roost, at least when it is time for our children's generation to rule the workforce. That is what I want to believe.

— — Karthik Rajan

I enjoy writing at the intersection of analytics and human relationships.

* For those who like completeness: the actual tiptoeing-in of the tool, RankBrain, started in 2015 for more outlier searches.
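To make the "pause at every layer" idea concrete, here is a tiny sketch of a two-layer network where each layer's activations can be inspected on the way up. The random weights and sizes are stand-ins added for this edit; the real feature-visualization work referenced above (e.g. Google's DeepDream) does this with large trained image models.

import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=16)          # stand-in for pixel inputs at the base
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 4))

layer1 = np.maximum(x @ W1, 0)   # pause here: a "lower layer" view of the input
layer2 = layer1 @ W2             # the top of the house of cards

print("layer 1 activations:", np.round(layer1, 2))
print("layer 2 activations:", np.round(layer2, 2))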
The Biggest Change at Google and It was Subtle
358
the-biggest-change-at-google-and-it-was-subtle-161653c47685
2018-04-10
2018-04-10 15:34:01
https://medium.com/s/story/the-biggest-change-at-google-and-it-was-subtle-161653c47685
false
856
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Karthik Rajan
Life’s hidden treasures in plain sight. Succinct stories that elevate your spirits. Give any of my shares a read & let me know. P.S. Once a while on blockchain.
2179a719b26d
KarthRajan
3,282
10,677
20,181,104
null
null
null
null
null
null
0
null
0
a230eb59c6e3
2018-01-08
2018-01-08 12:40:38
2018-01-08
2018-01-08 13:50:01
6
false
en
2018-01-08
2018-01-08 13:50:01
3
1617f930808d
2.278302
0
0
0
On Twitter, I stumbled upon this horrendous 3D bar chart. When looking at the data, it might have been made in 2005. Data visualisation as…
5
Redesign of a truly bananas chart

On Twitter, I stumbled upon this horrendous 3D bar chart. Looking at the data, it might have been made in 2005, when data visualisation as a skill was yet to be defined. Boy, we have come a long way, but even at the time this was truly bananas. Normally I describe in detail what improvements can be made; this time I trust my audience to recognise the many pitfalls. I will, however, explain my redesign choices after the visual. Watch out though: PhillipDRiggs warns you in his tweet about "post-traumatic viz syndrome".

Redesign choices

To me, this had to be a line chart because we deal with years in time. Also, if you want to plot everything in one graph, lines can be convenient because they take up little space. Let's see if that works.

First iteration

That's better already, but your eyes still have to switch a lot between the legend and the plot. One thing you could do is plot the country labels inside the plot and remove the legend, but there's a lack of space. When trying to squeeze three variables, and in this case ten countries, into one plot, things get messy. Welcome to small multiples, sorted from the highest to the lowest mean. That's way better already. You can easily compare countries because they each have their own little plot and they're sorted, giving the reader instant gratification (or information, if you will). Per country you can scan the trend through time as well. For the final version, I think an area chart works better because it represents the magnitude of tonnes of bananas more clearly, and perhaps even shows the trends more precisely: the strong contrast with the background creates sharp edges.

Small multiples. Left: line charts. Right: area charts and final version.

This is part 9 of the series Data Visualisation Redesigned for the Better. You can find the code behind the redesign on the Colourful Facts Github repo; a rough sketch of the small-multiples approach follows below. For more redesigns and other data journalism/visualisation related articles, go to my home page. Do you stumble upon a crappy graph? Please let me know! Cheers 🙂
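For readers who want to try sorted small multiples with area charts themselves, here is a rough sketch in Python/pandas/matplotlib. The author's actual code lives in the Colourful Facts GitHub repo and is likely different; the file name and the country/year/tonnes column names here are assumptions, and the CSV is assumed to hold one row per country per year for ten countries.

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical tidy data: columns "country", "year", "tonnes".
df = pd.read_csv("banana_production.csv")

# Sort countries from the highest to the lowest mean production.
order = (df.groupby("country")["tonnes"].mean()
           .sort_values(ascending=False).index)

fig, axes = plt.subplots(2, 5, figsize=(12, 5), sharex=True, sharey=True)
for ax, country in zip(axes.flat, order):
    sub = df[df["country"] == country].sort_values("year")
    ax.fill_between(sub["year"], sub["tonnes"])   # area chart per country
    ax.set_title(country, fontsize=9)
fig.suptitle("Banana production per country (tonnes)")
plt.tight_layout()
plt.show()

Sharing the x and y axes across the panels is what makes the side-by-side comparison honest: every little plot is on the same scale, so differences in height mean differences in production.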
Redesign of a truly bananas chart
0
redesign-of-a-truly-bananas-chart-1617f930808d
2018-04-12
2018-04-12 00:33:29
https://medium.com/s/story/redesign-of-a-truly-bananas-chart-1617f930808d
false
352
Explorations in the field of journalism, data visualisation and content creation. Email at: thomas@colourfulfacts.com
null
thomas.debeus.7
null
Colourful Facts
thomasdebeus@gmail.com
tdebeus
null
tdebeus
Data Visualization
data-visualization
Data Visualization
11,755
Thomas de Beus
Using #ddj 👨‍💻 to help tighten the gap between reality and peoples’ perception of reality to spark ⚡️ conscious citizenship 🏛 for a better shared future 🌍
43c39f57a3ce
TdeBeus
324
30
20,181,104
null
null
null
null
null
null