Dataset schema (one row per column: dtype, plus the observed numeric range, string-length range, or number of distinct values):

| Column | Dtype | Values |
|---|---|---|
| audioVersionDurationSec | float64 | 0 – 3.27k |
| codeBlock | string | lengths 3 – 77.5k |
| codeBlockCount | float64 | 0 – 389 |
| collectionId | string | lengths 9 – 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | length 19 (fixed) |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | length 19 (fixed) |
| imageCount | float64 | 0 – 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | length 19 (fixed) |
| linksCount | float64 | 0 – 1.18k |
| postId | string | lengths 8 – 12 |
| readingTime | float64 | 0 – 99.6 |
| recommends | float64 | 0 – 42.3k |
| responsesCreatedCount | float64 | 0 – 3.08k |
| socialRecommendsCount | float64 | 0 – 3 |
| subTitle | string | lengths 1 – 141 |
| tagsCount | float64 | 1 – 6 |
| text | string | lengths 1 – 145k |
| title | string | lengths 1 – 200 |
| totalClapCount | float64 | 0 – 292k |
| uniqueSlug | string | lengths 12 – 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | length 19 (fixed) |
| url | string | lengths 32 – 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k |
| publicationdescription | string | lengths 1 – 280 |
| publicationdomain | string | lengths 6 – 35 |
| publicationfacebookPageName | string | lengths 2 – 46 |
| publicationfollowerCount | float64 | (no range reported) |
| publicationname | string | lengths 4 – 139 |
| publicationpublicEmail | string | lengths 8 – 47 |
| publicationslug | string | lengths 3 – 50 |
| publicationtags | string | lengths 2 – 116 |
| publicationtwitterUsername | string | lengths 1 – 15 |
| tag_name | string | lengths 1 – 25 |
| slug | string | lengths 1 – 25 |
| name | string | lengths 1 – 25 |
| postCount | float64 | 0 – 332k |
| author | string | lengths 1 – 50 |
| bio | string | lengths 1 – 185 |
| userId | string | lengths 8 – 12 |
| userName | string | lengths 2 – 30 |
| usersFollowedByCount | float64 | 0 – 334k |
| usersFollowedCount | float64 | 0 – 85.9k |
| scrappedDate | float64 | 20.2M (constant) |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 – 31 |
| link | string | 230 distinct values |
| authors | string | lengths 2 – 392 |
| timestamp | string | lengths 19 – 32 |
| tags | string | lengths 6 – 263 |
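
A quick way to sanity-check this schema is to load the rows with pandas and compare the summary statistics against the table above. A minimal sketch; the file name `medium_posts.csv`, and the assumption that the records are available as a flat CSV, are hypothetical:

```python
import pandas as pd

# Hypothetical file name; point this at your local export of the dataset.
df = pd.read_csv("medium_posts.csv")

# A few of the float64 columns from the schema table.
numeric_cols = ["readingTime", "totalClapCount", "wordCount", "linksCount"]
print(df[numeric_cols].describe())  # min/max should match the ranges above

# Low-cardinality string columns ("distinct values" rows in the table).
for col in ["language", "createdDate", "claps"]:
    print(col, df[col].nunique(), "distinct values")
```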
Sample rows, reconstructed from the flattened dump with one field: value line per column.

Row 1
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: 4f808f3725a6
createdDate: 2018-06-06
createdDatetime: 2018-06-06 17:39:21
firstPublishedDate: 2018-06-06
firstPublishedDatetime: 2018-06-06 17:49:19
imageCount: 1
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-06-06
latestPublishedDatetime: 2018-06-06 17:49:19
linksCount: 1
postId: 1d87dd195bd4
readingTime: 0.256604
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Saved chat- https://www.facebook.com/SapientIndia/videos/10156325043561992/
tagsCount: 4
text: Facebook Livechat — Embedding Machine Learning & AI with Cloud & DevOps Saved chat- https://www.facebook.com/SapientIndia/videos/10156325043561992/
title: Facebook Livechat — Embedding Machine Learning & AI with Cloud & DevOps
totalClapCount: 0
uniqueSlug: facebook-livechat-embedding-machine-learning-ai-with-cloud-devops-1d87dd195bd4
updatedDate: 2018-06-06
updatedDatetime: 2018-06-06 17:49:20
url: https://medium.com/s/story/facebook-livechat-embedding-machine-learning-ai-with-cloud-devops-1d87dd195bd4
vote: false
wordCount: 15
publicationdescription: Enabling business for digital humanity
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: revolutionfirst
publicationpublicEmail: mohammad.wasim@gmail.com
publicationslug: revolutionfirst
publicationtags: CYBERSECURITY,DIGITAL TRANSFORMATION,IT MODERNIZATION,CEO OFFICE,DIGITAL BUSINESS STRATEGY
publicationtwitterUsername: mwasim
tag_name: Machine Learning
slug: machine-learning
name: Machine Learning
postCount: 51,320
author: Mohammad Wasim
bio: Technologist, entrepreneur, speaker, coach. Opinions and views are strictly personal
userId: 39dea5422c3f
userName: mwasim
usersFollowedByCount: 40
usersFollowedCount: 43
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null

Row 2
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-08-30
createdDatetime: 2018-08-30 02:25:09
firstPublishedDate: 2018-09-05
firstPublishedDatetime: 2018-09-05 07:36:20
imageCount: 2
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-09-14
latestPublishedDatetime: 2018-09-14 08:23:12
linksCount: 2
postId: 1d8852693ed1
readingTime: 2.48522
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Written by: Raymond Chetti; Co-author: Felipe Ramirez
tagsCount: 5
text:
Our Automated Valuation Model (AVM) prototype for South Korean Commercial Real Estate (CRE) Multi-layer neural network fed by a wide variety of commercial real estate (CRE) data sources Written by: Raymond Chetti; Co-author: Felipe Ramirez Within the course of about a month we have been able to develop a functional prototype of one of our core products, called the Value Report, which features a front end and an automated valuation model (AVM / 상업용 부동산 자동화 평가모델) that can tell us the market value of commercial buildings in our specified region of focus, Gangnam-gu, Seoul. All any user has to do is enter the building's address and click enter, and we can generate an automated valuation report in an instant, like this: A screenshot of our Value Report product, where users may quickly and simply discover the market value of any commercial building; our "Adjusted Value" is one of two outputs of our proprietary machine learning AVM. We're excited for what is to come given the short amount of time and the limited resources we've been able to dedicate to our prototype's development. You might be asking yourself a few questions, some of which we're happy to answer below, but for any other questions please feel free to message me on LinkedIn! We're happy to chat with those who are interested in our work. 1. What is your model's accuracy? To date we've developed two variations of our AVM; let's call them v1 (developed August 22, 2018) and v2 (developed August 29, 2018). Our v1 took about one month to develop (data sourcing, cleaning/pre-processing, and modeling work) and features only about 3–4% of all the big data we'd like to integrate into our AVM. Within one week of improving the v1 model, we were able to enhance its accuracy by lowering our mean absolute error (MAE) by 8% for an updated v2 model. Our v2 model's MAE is about 6–13% away from the margin of error of traditional human appraisers when they appraise properties. Traditional appraisals by humans are typically 7–14% away from actual transaction prices (Kok, et al., 2017). Given our rate of progress (data processing, cleaning, and preparation for consumption by our machine learning model), we anticipate our AVM will appraise and value commercial real estate like a human by the end of September 2018. 2. How do you know your margin of error/accuracy? We've collected actual CRE transaction data for the Korean market from 2006 until July 2018 (updated monthly) and were able to backtest our model against these records. Then we calculated the MAE and the coefficient of determination in order to have a measure of its precision and accuracy. 3. What data are you using and what are your sources? Since this is part of our "secret sauce", we're not inclined to say exactly, but please know this: we are compiling and aggregating one of the largest and most diverse sets of big data in South Korea related to CRE valuation, and we've just begun. We've identified dozens of data endpoints from more than 30 data sources related to CRE, drawing on our team's collective experience in the CRE industry, and are working to collect, process, and integrate these endpoints into our machine learning model. 4. What's next for us? We're working to refine our model's accuracy by pre-processing and preparing more data for the model to consume and learn from. In addition, we'll be seeking partners who are interested in helping scale our business (our CRE AVM, products, team) to the entirety of South Korea and potentially other parts of Asia.
title: Our Automated Valuation Model (AVM) prototype for South Korean Commercial Real Estate (CRE)
totalClapCount: 0
uniqueSlug: our-automated-valuation-model-avm-prototype-for-south-korean-commercial-real-estate-cre-1d8852693ed1
updatedDate: 2018-09-14
updatedDatetime: 2018-09-14 08:23:12
url: https://medium.com/s/story/our-automated-valuation-model-avm-prototype-for-south-korean-commercial-real-estate-cre-1d8852693ed1
vote: false
wordCount: 557
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Commercial Real Estate
slug: commercial-real-estate
name: Commercial Real Estate
postCount: 2,235
author: Raymond Chetti
bio: Co-founder & CEO at CRE Korea (Commercial Real Estate Korea)
userId: 924fe7b9f6e3
userName: raymondchetti
usersFollowedByCount: 3
usersFollowedCount: 2
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
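
The post in Row 2 describes backtesting the AVM against recorded transactions and reporting the mean absolute error and the coefficient of determination. A minimal sketch of that kind of evaluation using scikit-learn; the arrays are made-up placeholder values, not the authors' data:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Placeholder backtest data: actual transaction prices vs. model valuations (KRW).
y_true = np.array([1.20e9, 2.35e9, 0.98e9, 3.10e9])
y_pred = np.array([1.31e9, 2.20e9, 1.05e9, 2.95e9])

mae = mean_absolute_error(y_true, y_pred)
# Relative error is what makes the comparison to a human appraiser's
# 7-14% margin meaningful.
mape = np.mean(np.abs(y_true - y_pred) / y_true)

print(f"MAE: {mae:,.0f} KRW")
print(f"Mean absolute percentage error: {mape:.1%}")
print(f"Coefficient of determination (R^2): {r2_score(y_true, y_pred):.3f}")
```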

Row 3
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: e27e4317c858
createdDate: 2018-07-18
createdDatetime: 2018-07-18 20:48:06
firstPublishedDate: 2018-07-18
firstPublishedDatetime: 2018-07-18 21:08:18
imageCount: 0
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-07-19
latestPublishedDatetime: 2018-07-19 06:57:08
linksCount: 0
postId: 1d8a26d66f35
readingTime: 1.05283
recommends: 3
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: My plan was to do masters in Data science ever since i came to know about it during the last days of my college. So I did search for…
tagsCount: 1
text:
Udacity & Bertelsmann Data Science scholarship program [My experience] I had planned to do a master's in Data Science ever since I came to know about the field during the last days of my college. While searching for prerequisites, I found that one of them was a Nanodegree course at Udacity. I looked it up, but the price was too high for me to afford, so I planned to go for it after I had earned some money. Since I had subscribed to Udacity, one day I saw an e-mail regarding this scholarship. I filled out the form as soon as possible and waited for my selection. The day I got selected, I learned that after completion of this course the top 1,500 people would be selected for the Nanodegree course, which was my ultimate aim. I was full of enthusiasm the day I began this course; though I was a bit slow, I was enjoying it. In due course I realized what data science actually is and why it is called so. I also got to find people who, like me, were interested in data science. I got to learn Python, SQL, and the basic statistics required for the know-how of data science. Through this course I have envisioned myself as a data scientist. This course provided me the much-needed insight into data science that would otherwise not have been easy to get. So I would like to thank the Udacity team for giving me this opportunity to chase my goal. There were times when problem solving was not easy, but that's what the life of a data scientist is: every day a new challenge and a new approach to it.
title: Udacity & Bertelsmann Data Science scholarship program [My experience]
totalClapCount: 19
uniqueSlug: udacity-bertelsmann-data-science-scholarship-program-my-experience-1d8a26d66f35
updatedDate: 2018-07-19
updatedDatetime: 2018-07-19 06:57:09
url: https://medium.com/s/story/udacity-bertelsmann-data-science-scholarship-program-my-experience-1d8a26d66f35
vote: false
wordCount: 279
publicationdescription: A collaborative blog for all the students. Slack channel: #blog_collab
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: Udacity Bertelsmann Data Science Scholarship 2018/19 Blog
publicationpublicEmail: eco1410@uom.edu.gr
publicationslug: udacity-bertelsmann-scholarship-blog
publicationtags: UDACITY,BERTELSMANN,DATA SCIENCE,SCHOLARSHIP
publicationtwitterUsername: null
tag_name: Data Science
slug: data-science
name: Data Science
postCount: 33,617
author: Arnav Ashank
bio: null
userId: 13307a599e33
userName: aagoingfine
usersFollowedByCount: 0
usersFollowedCount: 5
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null

Row 4
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-09-01
createdDatetime: 2018-09-01 07:06:32
firstPublishedDate: 2018-09-01
firstPublishedDatetime: 2018-09-01 07:06:43
imageCount: 0
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-09-01
latestPublishedDatetime: 2018-09-01 07:06:43
linksCount: 1
postId: 1d8b643b3fb
readingTime: 2.456604
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: [PDF] Download Practical Statistics for Data Scientists: 50 Essential Concepts By Peter Bruce Free Link
tagsCount: 1
text:
Read Practical Statistics for Data Scientists: 50 Essential Concepts By Peter Bruce eBook PDF #pdf [PDF] Download Practical Statistics for Data Scientists: 50 Essential Concepts By Peter Bruce Free Link Download_pdf : https://bestreadkindle.icu/?q=Practical+Statistics+for+Data+Scientists%3A+50+Essential+Concepts Statistical methods are a key part of of data science, yet very few data scientists have any formal statistics training. Courses and books on basic statistics rarely cover the topic from a data science perspective. This practical guide explains how to apply various statistical methods to data science, tells you how to avoid their misuse, and gives you advice on what’s important and what’s not.Many data science resources incorporate statistical methods but lack a deeper statistical perspective. If you’re familiar with the R programming language, and have some exposure to statistics, this quick reference bridges the gap in an accessible, readable format.With this book, you’ll learn:Why exploratory data analysis is a key preliminary step in data scienceHow random sampling can reduce bias and yield a higher quality dataset, even with big dataHow the principles of experimental design yield definitive answers to questionsHow to use regression to estimate outcomes and detect anomaliesKey . . . . . . . . . . . . Online PDF Practical Statistics for Data Scientists: 50 Essential Concepts, Read PDF Practical Statistics for Data Scientists: 50 Essential Concepts, Full PDF Practical Statistics for Data Scientists: 50 Essential Concepts, All Ebook Practical Statistics for Data Scientists: 50 Essential Concepts, PDF and EPUB Practical Statistics for Data Scientists: 50 Essential Concepts, PDF ePub Mobi Practical Statistics for Data Scientists: 50 Essential Concepts, Reading PDF Practical Statistics for Data Scientists: 50 Essential Concepts, Book PDF Practical Statistics for Data Scientists: 50 Essential Concepts, read online Practical Statistics for Data Scientists: 50 Essential Concepts, Practical Statistics for Data Scientists: 50 Essential Concepts Peter Bruce pdf, by Peter Bruce Practical Statistics for Data Scientists: 50 Essential Concepts, book pdf Practical Statistics for Data Scientists: 50 Essential Concepts, by Peter Bruce pdf Practical Statistics for Data Scientists: 50 Essential Concepts, Peter Bruce epub Practical Statistics for Data Scientists: 50 Essential Concepts, pdf Peter Bruce Practical Statistics for Data Scientists: 50 Essential Concepts, the book Practical Statistics for Data Scientists: 50 Essential Concepts, Peter Bruce ebook Practical Statistics for Data Scientists: 50 Essential Concepts, Practical Statistics for Data Scientists: 50 Essential Concepts E-Books, Online Practical Statistics for Data Scientists: 50 Essential Concepts Book, pdf Practical Statistics for Data Scientists: 50 Essential Concepts, Practical Statistics for Data Scientists: 50 Essential Concepts E-Books, Practical Statistics for Data Scientists: 50 Essential Concepts Online , Read Best Book Online Practical Statistics for Data Scientists: 50 Essential Concepts, Read Online Practical Statistics for Data Scientists: 50 Essential Concepts Book, Read Online Practical Statistics for Data Scientists: 50 Essential Concepts E-Books, Read Practical Statistics for Data Scientists: 50 Essential Concepts Online , Read Best Book Practical Statistics for Data Scientists: 50 Essential Concepts Online, Pdf Books Practical Statistics for Data Scientists: 50 Essential Concepts, Read Practical Statistics for Data 
Scientists: 50 Essential Concepts Books Online , Read Practical Statistics for Data Scientists: 50 Essential Concepts Full Collection, Read Practical Statistics for Data Scientists: 50 Essential Concepts Book, Read Practical Statistics for Data Scientists: 50 Essential Concepts Ebook , Practical Statistics for Data Scientists: 50 Essential Concepts PDF read online, Practical Statistics for Data Scientists: 50 Essential Concepts Ebooks, Practical Statistics for Data Scientists: 50 Essential Concepts pdf read online, Practical Statistics for Data Scientists: 50 Essential Concepts Best Book, Practical Statistics for Data Scientists: 50 Essential Concepts Ebooks , Practical Statistics for Data Scientists: 50 Essential Concepts PDF , Practical Statistics for Data Scientists: 50 Essential Concepts Popular , Practical Statistics for Data Scientists: 50 Essential Concepts Read , Practical Statistics for Data Scientists: 50 Essential Concepts Full PDF, Practical Statistics for Data Scientists: 50 Essential Concepts PDF, Practical Statistics for Data Scientists: 50 Essential Concepts PDF , #ebook #epubs #epubdownload #PdfReader #Ebook
title: Read Practical Statistics for Data Scientists: 50 Essential Concepts By Peter Bruce eBook PDF #pdf
totalClapCount: 0
uniqueSlug: read-practical-statistics-for-data-scientists-50-essential-concepts-by-peter-bruce-ebook-pdf-pdf-1d8b643b3fb
updatedDate: 2018-09-01
updatedDatetime: 2018-09-01 07:06:44
url: https://medium.com/s/story/read-practical-statistics-for-data-scientists-50-essential-concepts-by-peter-bruce-ebook-pdf-pdf-1d8b643b3fb
vote: false
wordCount: 651
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Data Science
slug: data-science
name: Data Science
postCount: 33,617
author: Randall Mcgee
bio: null
userId: 3e5feb17e611
userName: randallmcgee_67177
usersFollowedByCount: 0
usersFollowedCount: 1
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null

Row 5
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: f702855ffe47
createdDate: 2017-11-11
createdDatetime: 2017-11-11 20:00:21
firstPublishedDate: 2017-11-11
firstPublishedDatetime: 2017-11-11 20:00:22
imageCount: 6
isSubscriptionLocked: false
language: en
latestPublishedDate: 2017-11-11
latestPublishedDatetime: 2017-11-11 20:00:22
linksCount: 10
postId: 1d8c32f94e22
readingTime: 1.904717
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: null
tagsCount: 3
text:
Blockchain Neural System (BNS) will fundamentally change the Artificial Intelligence market # medium.com Blockchain Neural System (BNS) will fundamentally change the Artificial Intelligence market The future is mu… Elon Musk is Worried. Should You Be? # medium.com If you’ve been scanning technology news in the last few months you’ve probably noticed that Elon Musk is sou… Bring tech wizardry home # medium.com I recently had the pleasure of chatting with a computer developer who has helped build interesting tools tha… Rise of the machines # medium.com Everyone is talking about digitization, but it’s been a longer process than many of us realize. Beena Ammana… Artificial Intelligence Planning Historical Developments # medium.com Research review Automated planning and scheduling is one of the major fields of AI. Planning focuses on stra… Artificial intelligence vs man made sentience # medium.com What is Artificial Intelligence? Machines have proven that you you don’t need to be sentient to be intellige… New advisor in Mirocana # medium.com We welcome Vadim Koleoshkin as a new advisor in Mirocana! Vadim is Hi-tech entrepreneur and software enginee… The Singularity, Virtual Worlds and AI Babies # medium.com From Terra Nova, Sep 08, 2007. Continue reading on Medium » 2017. 11. 11. Weekly Research Updates # medium.com A neural algorithm for a fundamental computing problem[http://science.sciencemag.org/content/358/6364/793] :… Google Pixel Buds review: Google Assistant makes a home in your ears # venturebeat.com Google’s very first pair of headphones go on sale November 17 for $159, and we got a pair to test out their …
title: 10 new things to read in AI
totalClapCount: 0
uniqueSlug: 10-new-things-to-read-in-ai-1d8c32f94e22
updatedDate: 2018-04-08
updatedDatetime: 2018-04-08 09:35:02
url: https://medium.com/s/story/10-new-things-to-read-in-ai-1d8c32f94e22
vote: false
wordCount: 253
publicationdescription: AI Developments around and worlds
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: AI Hawk
publicationpublicEmail: aihawk1089@gmail.com
publicationslug: ai-hawk
publicationtags: DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
publicationtwitterUsername: null
tag_name: Deep Learning
slug: deep-learning
name: Deep Learning
postCount: 12,189
author: AI Hawk
bio: null
userId: a9a7e4d2b403
userName: aihawk1089
usersFollowedByCount: 15
usersFollowedCount: 6
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null

Row 6
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: 6fb27197241c
createdDate: 2018-07-08
createdDatetime: 2018-07-08 06:24:48
firstPublishedDate: 2018-07-08
firstPublishedDatetime: 2018-07-08 07:34:26
imageCount: 2
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-07-08
latestPublishedDatetime: 2018-07-08 07:34:26
linksCount: 1
postId: 1d8c5bb752d8
readingTime: 13.043711
recommends: 63
responsesCreatedCount: 4
socialRecommendsCount: 0
subTitle: Introduction
tagsCount: 5
text:
Cracking the Machine Learning Interview Let’s prepare for Machine Learning interviews! Introduction What is this article about? In this article, I share an eclectic collection of interview questions that will help you in preparing for Machine Learning interviews. This is helpful to someone who is interested in one/more of the following positions in the Machine Learning group of a leading company (Google, Facebook, IBM, Amazon, Microsoft, etc.): Research Engineer Software Engineer Postdoctoral Researcher Research Scientist Data Scientist I will keep on adding more questions to this list with time. This project initially started off as a GitHub repository which can be found here. I continually update the repository with new questions. Why use it? This will be useful to someone who is: Interested in preparing for Machine Learning interviews Preparing for Machine Learning interviews, however, is lost in the plethora of resources and wants to prioritize what to learn. Looking to hone their skills by attempting some prospective interview questions What should I learn? Someone applying to any one of the above positions is expected to know basics of the following broad topics: Computer Science Linear Algebra Statistics and Probability Machine Learning All of these are fairly broad topics and sections dedicated to them in this article lists specific questions related to some of these topics. Note that deeper knowledge of one/more of the above topics might be expected of you depending on the particular position you are interviewing for. This raises our next question. What is expected of me in the interviews? Research or Software Engineer: If you are applying to any one of these positions in a Machine Learning group, you should know the basics of the above four topics with emphasis on Computer Science and Machine Learning. In addition, some projects on Machine Learning in GitHub will be helpful to showcase both your knowledge and coding skills. Postdoctoral Researcher and Research Scientist: Apart from the basics, you should know extremely well about at least one domain of Machine Learning. You should have published multiple papers in this domain. This will demonstrate your authority in this topic. Since you are applying to this position you already know what that would be for your case. Data Scientist: If you are interested in a Data Scientist position, then after learning the basics, please emphasize more on Statistics and Probability. List of questions Now, that you have a general idea of Machine Learning interview, let’s spend no time in sharing a list of questions organized according to topics (in no particular order). Linear Algebra What is broadcasting in connection to Linear Algebra? What are scalars, vectors, matrices, and tensors? What is Hadamard product of two matrices? What is an inverse matrix? If inverse of a matrix exists, how to calculate it? What is the determinant of a square matrix? How is it calculated? What is the connection of determinant to eigenvalues? Discuss span and linear dependence. What is Ax = b? When does Ax =b has a unique solution? In Ax = b, what happens when A is fat or tall? When does inverse of A exist? What is a norm? What is L1, L2 and L infinity norm? What are the conditions a norm has to satisfy? Why is squared of L2 norm preferred in ML than just L2 norm? When L1 norm is preferred over L2 norm? Can the number of nonzero elements in a vector be defined as L0 norm? If no, why? What is Frobenius norm? What is a diagonal matrix? 
Why is multiplication by diagonal matrix computationally cheap? How is the multiplication different for square vs. non-square diagonal matrix? At what conditions does the inverse of a diagonal matrix exist? What is a symmetrix matrix? What is a unit vector? When are two vectors x and y orthogonal? At R^n what is the maximum possible number of orthogonal vectors with non-zero norm? When are two vectors x and y orthonormal? What is an orthogonal matrix? Why is computationally preferred? What is eigendecomposition, eigenvectors and eigenvalues? How to find eigen values of a matrix? Write the eigendecomposition formula for a matrix. If the matrix is real symmetric, how will this change? Is the Eigendecomposition guaranteed to be unique? If not, then how do we represent it? What are positive definite, negative definite, positive semi definite and negative semi definite matrices? What is Singular Value Decomposition? Why do we use it? Why not just use ED? Given a matrix A, how will you calculate its Singular Value Decomposition? What are singular values, left singulars and right singulars? What is the connection of Singular Value Decomposition of A with functions of A? Why are singular values always non-negative? What is the Moore Penrose pseudo inverse and how to calculate it? If we do Moore Penrose pseudo inverse on Ax = b, what solution is provided is A is fat? Moreover, what solution is provided if A is tall? Which matrices can be decomposed by ED? Which matrices can be decomposed by SVD? What is the trace of a matrix? How to write Frobenius norm of a matrix A in terms of trace? Why is trace of a multiplication of matrices invariant to cyclic permutations? What is the trace of a scalar? Write the frobenius norm of a matrix in terms of trace? Numerical Optimization What is underflow and overflow? How to tackle the problem of underflow or overflow for softmax function or log softmax function? What is poor conditioning? What is the condition number? What are grad, div and curl? What are critical or stationary points in multi-dimensions? Why should you do gradient descent when you want to minimize a function? What is line search? What is hill climbing? What is a Jacobian matrix? What is curvature? What is a Hessian matrix? Basics of Probability and Information Theory Compare “Frequentist probability” vs. “Bayesian probability”? What is a random variable? What is a probability distribution? What is a probability mass function? What is a probability density function? What is a joint probability distribution? What are the conditions for a function to be a probability mass function? What are the conditions for a function to be a probability density function? What is a marginal probability? Given the joint probability function, how will you calculate it? What is conditional probability? Given the joint probability function, how will you calculate it? State the Chain rule of conditional probabilities. What are the conditions for independence and conditional independence of two random variables? What are expectation, variance and covariance? Compare covariance and independence. What is the covariance for a vector of random variables? What is a Bernoulli distribution? Calculate the expectation and variance of a random variable that follows Bernoulli distribution? What is a multinoulli distribution? What is a normal distribution? Why is the normal distribution a default choice for a prior over a set of real numbers? What is the central limit theorem? What are exponential and Laplace distribution? 
What are Dirac distribution and Empirical distribution? What is mixture of distributions? Name two common examples of mixture of distributions? (Empirical and Gaussian Mixture) Is Gaussian mixture model a universal approximator of densities? Write the formulae for logistic and softplus function. Write the formulae for Bayes rule. What do you mean by measure zero and almost everywhere? If two random variables are related in a deterministic way, how are the PDFs related? Define self-information. What are its units? What are Shannon entropy and differential entropy? What is Kullback-Leibler (KL) divergence? Can KL divergence be used as a distance measure? Define cross-entropy. What are structured probabilistic models or graphical models? In the context of structured probabilistic models, what are directed and undirected models? How are they represented? What are cliques in undirected structured probabilistic models? Confidence interval What is population mean and sample mean? What is population standard deviation and sample standard deviation? Why population s.d. has N degrees of freedom while sample s.d. has N-1 degrees of freedom? In other words, why 1/N inside root for pop. s.d. and 1/(N-1) inside root for sample s.d.? What is the formula for calculating the s.d. of the sample mean? What is confidence interval? What is standard error? Learning Theory Describe bias and variance with examples. What is Empirical Risk Minimization? What is Union bound and Hoeffding’s inequality? Write the formulae for training error and generalization error. Point out the differences. State the uniform convergence theorem and derive it. What is sample complexity bound of uniform convergence theorem? What is error bound of uniform convergence theorem? What is the bias-variance trade-off theorem? From the bias-variance trade-off, can you derive the bound on training set size? What is the VC dimension? What does the training set size depend on for a finite and infinite hypothesis set? Compare and contrast. What is the VC dimension for an n-dimensional linear classifier? How is the VC dimension of a SVM bounded although it is projected to an infinite dimension? Considering that Empirical Risk Minimization is a NP-hard problem, how does logistic regression and SVM loss work? Model and feature selection Why are model selection methods needed? How do you do a trade-off between bias and variance? What are the different attributes that can be selected by model selection methods? Why is cross-validation required? Describe different cross-validation techniques. What is hold-out cross validation? What are its advantages and disadvantages? What is k-fold cross validation? What are its advantages and disadvantages? What is leave-one-out cross validation? What are its advantages and disadvantages? Why is feature selection required? Describe some feature selection methods. What is forward feature selection method? What are its advantages and disadvantages? What is backward feature selection method? What are its advantages and disadvantages? What is filter feature selection method and describe two of them? What is mutual information and KL divergence? Describe KL divergence intuitively. Curse of dimensionality Describe the curse of dimensionality with examples. What is local constancy or smoothness prior or regularization? Universal approximation of neural networks State the universal approximation theorem? What is the technique used to prove that? What is a Borel measurable function? 
Given the universal approximation theorem, why can’t a MLP still reach a arbitrarily small positive error? Deep Learning motivation What is the mathematical motivation of Deep Learning as opposed to standard Machine Learning techniques? In standard Machine Learning vs. Deep Learning, how is the order of number of samples related to the order of regions that can be recognized in the function space? What are the reasons for choosing a deep model as opposed to shallow model? How Deep Learning tackles the curse of dimensionality? Support Vector Machine How can the SVM optimization function be derived from the logistic regression optimization function? What is a large margin classifier? Why SVM is an example of a large margin classifier? SVM being a large margin classifier, is it influenced by outliers? What is the role of C in SVM? In SVM, what is the angle between the decision boundary and theta? What is the mathematical intuition of a large margin classifier? What is a kernel in SVM? Why do we use kernels in SVM? What is a similarity function in SVM? Why it is named so? How are the landmarks initially chosen in an SVM? How many and where? Can we apply the kernel trick to logistic regression? Why is it not used in practice then? What is the difference between logistic regression and SVM without a kernel? How does the SVM parameter C affect the bias/variance trade off? How does the SVM kernel parameter sigma² affect the bias/variance trade off? Can any similarity function be used for SVM? Logistic regression vs. SVMs: When to use which one? Bayesian Machine Learning What are the differences between “Bayesian” and “Freqentist” approach for Machine Learning? Compare and contrast maximum likelihood and maximum a posteriori estimation. How does Bayesian methods do automatic feature selection? What do you mean by Bayesian regularization? When will you use Bayesian methods instead of Frequentist methods? Regularization What is L1 regularization? What is L2 regularization? Compare L1 and L2 regularization. Why does L1 regularization result in sparse models? What is dropout? How will you implement dropout during forward and backward pass? Evaluation of Machine Learning systems What are accuracy, sensitivity, specificity, ROC? What are precision and recall? Describe t-test in the context of Machine Learning. Clustering Describe the k-means algorithm. What is distortion function? Is it convex or non-convex? Tell me about the convergence of the distortion function. Topic: EM algorithm What is the Gaussian Mixture Model? Describe the EM algorithm intuitively. What are the two steps of the EM algorithm Compare Gaussian Mixture Model and Gaussian Discriminant Analysis. Dimensionality Reduction Why do we need dimensionality reduction techniques? What do we need PCA and what does it do? What is the difference between logistic regression and PCA? What are the two pre-processing steps that should be applied before doing PCA? Basics of Natural Language Processing What is WORD2VEC? What is t-SNE? Why do we use PCA instead of t-SNE? What is sampled softmax? Why is it difficult to train a RNN with SGD? How do you tackle the problem of exploding gradients? What is the problem of vanishing gradients? How do you tackle the problem of vanishing gradients? Explain the memory cell of a LSTM. What type of regularization do one use in LSTM? What is Beam Search? How to automatically caption an image? Some basic questions Can you state Tom Mitchell’s definition of learning and discuss T, P and E? 
What can be different types of tasks encountered in Machine Learning? What are supervised, unsupervised, semi-supervised, self-supervised, multi-instance learning, and reinforcement learning? Loosely how can supervised learning be converted into unsupervised learning and vice-versa? Consider linear regression. What are T, P and E? Derive the normal equation for linear regression. What do you mean by affine transformation? Discuss affine vs. linear transformation. Discuss training error, test error, generalization error, overfitting, and underfitting. Compare representational capacity vs. effective capacity of a model. Discuss VC dimension. What are nonparametric models? What is nonparametric learning? What is an ideal model? What is Bayes error? What is/are the source(s) of Bayes error occur? What is the no free lunch theorem in connection to Machine Learning? What is regularization? Intuitively, what does regularization do during the optimization procedure? What is weight decay? What is it added? What is a hyperparameter? How do you choose which settings are going to be hyperparameters and which are going to be learned? Why is a validation set necessary? What are the different types of cross-validation? When do you use which one? What are point estimation and function estimation in the context of Machine Learning? What is the relation between them? What is the maximal likelihood of a parameter vector $theta$? Where does the log come from? Prove that for linear regression MSE can be derived from maximal likelihood by proper assumptions. Why is maximal likelihood the preferred estimator in ML? Under what conditions do the maximal likelihood estimator guarantee consistency? What is cross-entropy of loss? What is the difference between loss function, cost function and objective function? Optimization procedures What is the difference between an optimization problem and a Machine Learning problem? How can a learning problem be converted into an optimization problem? What is empirical risk minimization? Why the term empirical? Why do we rarely use it in the context of deep learning? Name some typical loss functions used for regression. Compare and contrast. What is the 0–1 loss function? Why can’t the 0–1 loss function or classification error be used as a loss function for optimizing a deep neural network? Sequence Modeling Write the equation describing a dynamical system. Can you unfold it? Now, can you use this to describe a RNN? What determines the size of an unfolded graph? What are the advantages of an unfolded graph? What does the output of the hidden layer of a RNN at any arbitrary time t represent? Are the output of hidden layers of RNNs lossless? If not, why? RNNs are used for various tasks. From a RNNs point of view, what tasks are more demanding than others? Discuss some examples of important design patterns of classical RNNs. Write the equations for a classical RNN where hidden layer has recurrence. How would you define the loss in this case? What problems you might face while training it? What is backpropagation through time? Consider a RNN that has only output to hidden layer recurrence. What are its advantages or disadvantages compared to a RNN having only hidden to hidden recurrence? What is Teacher forcing? Compare and contrast with BPTT. What is the disadvantage of using a strict teacher forcing technique? How to solve this? Explain the vanishing/exploding gradient phenomenon for recurrent neural networks. 
Why don’t we see the vanishing/exploding gradient phenomenon in feedforward networks? What is the key difference in architecture of LSTMs/GRUs compared to traditional RNNs? What is the difference between LSTM and GRU? Explain Gradient Clipping. Adam and RMSProp adjust the size of gradients based on previously seen gradients. Do they inherently perform gradient clipping? If no, why? Discuss RNNs in the context of Bayesian Machine Learning. Can we do Batch Normalization in RNNs? If not, what is the alternative? Autoencoders What is an Autoencoder? What does it “auto-encode”? What were Autoencoders traditionally used for? Why there has been a resurgence of Autoencoders for generative modeling? What is recirculation? What loss functions are used for Autoencoders? What is a linear autoencoder? Can it be optimal (lowest training reconstruction error)? If yes, under what conditions? What is the difference between Autoencoders and PCA? What is the impact of the size of the hidden layer in Autoencoders? What is an undercomplete Autoencoder? Why is it typically used for? What is a linear Autoencoder? Discuss it’s equivalence with PCA. Which one is better in reconstruction? What problems might a nonlinear undercomplete Autoencoder face? What are overcomplete Autoencoders? What problems might they face? Does the scenario change for linear overcomplete autoencoders? Discuss the importance of regularization in the context of Autoencoders. Why does generative autoencoders not require regularization? What are sparse autoencoders? What is a denoising autoencoder? What are its advantages? How does it solve the overcomplete problem? What is score matching? Discuss it’s connections to DAEs. Are there any connections between Autoencoders and RBMs? What is manifold learning? How are denoising and contractive autoencoders equipped to do manifold learning? What is a contractive autoencoder? Discuss its advantages. How does it solve the overcomplete problem? Why is a contractive autoencoder named so? What are the practical issues with CAEs? How to tackle them? What is a stacked autoencoder? What is a deep autoencoder? Compare and contrast. Compare the reconstruction quality of a deep autoencoder vs. PCA. What is predictive sparse decomposition? Discuss some applications of Autoencoders. Representation Learning What is representation learning? Why is it useful? What is the relation between Representation Learning and Deep Learning? What is one-shot and zero-shot learning (Google’s NMT)? Give examples. What trade offs does representation learning have to consider? What is greedy layer-wise unsupervised pretraining (GLUP)? Why greedy? Why layer-wise? Why unsupervised? Why pretraining? What were/are the purposes of the above technique? (deep learning problem and initialization) Why does unsupervised pretraining work? When does unsupervised training work? Under which circumstances? Why might unsupervised pretraining act as a regularizer? What is the disadvantage of unsupervised pretraining compared to other forms of unsupervised learning? How do you control the regularizing effect of unsupervised pretraining? How to select the hyperparameters of each stage of GLUP? Monte Carlo Methods What are deterministic algorithms? What are Las vegas algorithms? What are deterministic approximate algorithms? What are Monte Carlo algorithms? I will keep on adding more questions to both this list and my GitHub repository. Moreover, my plan is to add answers to these questions as well. 
Disclaimer: Views expressed in this post are my personal, individual and unique perspectives, and not those of my employer.
title: Cracking the Machine Learning Interview
totalClapCount: 250
uniqueSlug: cracking-the-machine-learning-interview-1d8c5bb752d8
updatedDate: 2018-07-24
updatedDatetime: 2018-07-24 08:25:31
url: https://medium.com/s/story/cracking-the-machine-learning-interview-1d8c5bb752d8
vote: false
wordCount: 3,355
publicationdescription: This is where I write about machine learning.
publicationdomain: null
publicationfacebookPageName: roy.subhrajit20
publicationfollowerCount: null
publicationname: Machine Learning Algorithms and Applications
publicationpublicEmail: roy.subhrajit20@gmail.com
publicationslug: subhrajit-roy
publicationtags: DEEP LEARNING,MACHINE LEARNING,NEUROMOPHIC HARDWARE,NEURAL NETWORKS
publicationtwitterUsername: sroy_subhrajit
tag_name: Machine Learning
slug: machine-learning
name: Machine Learning
postCount: 51,320
author: Subhrajit Roy
bio: null
userId: be7618863164
userName: sroy20
usersFollowedByCount: 88
usersFollowedCount: 44
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
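
One of the Numerical Optimization questions in Row 6 asks how to tackle overflow or underflow in the softmax function. As an illustration of the kind of answer expected, here is the standard max-shift trick; this is a sketch, not material from the post itself:

```python
import numpy as np

def stable_softmax(x: np.ndarray) -> np.ndarray:
    """Softmax without overflow: shifting by max(x) leaves the result
    unchanged but keeps every exponent <= 0."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

# A naive np.exp(1000.0) overflows to inf; the shifted version is fine.
print(stable_softmax(np.array([1000.0, 1001.0, 1002.0])))
```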

Row 7
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-05-21
createdDatetime: 2018-05-21 17:22:05
firstPublishedDate: 2018-05-21
firstPublishedDatetime: 2018-05-21 17:25:01
imageCount: 0
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-05-21
latestPublishedDatetime: 2018-05-21 17:25:01
linksCount: 3
postId: 1d8c618b2495
readingTime: 1.301887
recommends: 8
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: All digital tech is based on Boolean logic, which operates with only either True or False. Since Aristotle we are used to Classical Logic…
tagsCount: 3
text:
Fuzzy vs Classical Logic in AI All digital tech is based on Boolean logic, which operates only with True or False. Since Aristotle we have been used to classical logic with only two possible outcomes. At the same time, Plato laid the foundations for fuzzy logic, but it received serious attention only in the second half of the 20th century, in Japan. "As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality." - Albert Einstein. 0 or 1 does not suit many real-world tasks well, nor does it suffice to explain cognitive functions. Process-control tasks, which emerged during the evolution of electrical appliances, led to the modern development of fuzzy set theory. In this field the simple division between truth and falsehood does not exist; as a matter of fact, any value in between is possible. Check this video to get intuition about fuzzy operations: Artificial Intelligence and Fuzzy Logic. The first digital computers were actually simple Turing machines. In parallel with developments in linguistics, many researchers focused their attention on models based on discrete terms. Those attempts were quite successful in problems related to "higher" cognition like algebra and reasoning. Probabilistic logic can be derived from fuzzy logic with probabilistic interpretations, but this direction is only about 30 years old and hasn't gathered much attention yet. Dealing with raw data like images, sound, or instrument readings was quite complicated in those days. Even answering the question "Is there a cat in the picture?" seemed extremely complex. Process control stimulated the spread of fuzzy-logic applications on digital computers and many practical techniques. Neural networks became the most successful among them. Logic in Neural Networks I've previously written a simple guide to understanding neural networks. Generally, you can picture a NN as a network of simple agents, each receiving many inputs, making a fuzzy-logic-based inference, and sending the resulting signal further. However, sometimes (especially in CNNs) artificial neurons have outputs outside the (0, 1) range, though they may be normalized to fit fuzzy logic more formally.
title: Fuzzy vs Classical Logic in AI
totalClapCount: 33
uniqueSlug: fuzzy-vs-classical-logic-in-ai-1d8c618b2495
updatedDate: 2018-06-13
updatedDatetime: 2018-06-13 05:06:29
url: https://medium.com/s/story/fuzzy-vs-classical-logic-in-ai-1d8c618b2495
vote: false
wordCount: 345
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Artificial Intelligence
slug: artificial-intelligence
name: Artificial Intelligence
postCount: 66,154
author: Egor Dezhic
bio: AI and CogSci enthusiast
userId: f0c2ea82ce84
userName: Dezhic
usersFollowedByCount: 1,336
usersFollowedCount: 15
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
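
The post in Row 7 treats a neuron's output in (0, 1) as a fuzzy truth value, with any value between True and False allowed. A minimal sketch of the classic Zadeh fuzzy operators, one common choice among several t-norms; this is illustrative, not code from the post:

```python
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)   # Zadeh t-norm

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)   # Zadeh t-conorm

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# Membership degrees rather than hard booleans:
warm, humid = 0.7, 0.3
print(fuzzy_and(warm, humid))  # 0.3
print(fuzzy_or(warm, humid))   # 0.7
print(fuzzy_not(warm))         # ~0.3
```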

Row 8
audioVersionDurationSec: 0
codeBlock:
```json
[
  {
    "id": "abc",
    "ts": 1318250880000
  },
  ...
]

{
  "groups": [
    {
      "id": "abc",
      "title": "New products",
      "createdAt": "2011-10-10T14:48:00Z",
      "comment": "Has been deleted from log tracking system"
    }
  ]
}

{
  "groups": [
    {
      "id": "123",
      "title": "My test group",
      "members": ["joe@example.com", "sue@example.com"],
      "bad_ideas": [
        {
          "content": "Make things better",
          "reason": "Too generic",
          "improvement": "Ask yourself which steps you would take",
          "created_at": "2011-10-10T14:48:00Z"
        }
      ]
    }
  ]
}
```
codeBlockCount: 3
collectionId: 8363126dba61
createdDate: 2017-11-20
createdDatetime: 2017-11-20 13:09:14
firstPublishedDate: 2017-11-20
firstPublishedDatetime: 2017-11-20 15:28:42
imageCount: 1
isSubscriptionLocked: false
language: en
latestPublishedDate: 2017-11-23
latestPublishedDatetime: 2017-11-23 10:33:49
linksCount: 4
postId: 1d8fc0e6705c
readingTime: 2.762264
recommends: 3
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: How to make human/machine integration reliable with JSON files
tagsCount: 5
text:
Human-as-a-service How to make human/machine integration reliable with JSON files Innovating means trying out lots of ideas, and many of them will fail. We therefore like to test our ideas before we spend too much time implementing them properly. This often requires blending manual labor into automated processes. Here is how we make this human/machine integration reliable. Human/machine API Right at the start of a project, we define an "API" between the humans and the machine. This is typically a simple JSON file with sample data. With that in place, the developers can start to write code against the "API" while the humans are working on filling out the file with real content. We try to avoid changing the format of the JSON file until the end of the project but, when pressed, we prefer to decide on the API quickly (within 1h max) rather than losing too much time over-engineering it. Tips & tricks We learned a couple of lessons about how to structure such JSON files when they are to be filled in by humans. Let's demonstrate these with an example. For machine-to-machine communication this could be an efficient data exchange format: The developers of the producing and consuming applications would look up the meaning of the various fields, e.g. in a JSON schema. We found that this lookup step doesn't work well if the JSON file is filled in by a human, especially for non-developers who are often unfamiliar with JSON schemas. Instead, we would store the same content in a more human-friendly format: To highlight the changes: Add an intermediate field to clarify array content: By adding the root field groups, it's clear to the reader that the file contains a list of groups. Denormalize to avoid lookups: The group title isn't required by the consuming application (which could look it up based on the group id), but it helps humans scan through the file and find entries again. Clearly name fields: createdAt is more specific than timestamp and certainly easier to understand than the abbreviation ts. Use human-readable formats: The ISO timestamp 2011-10-10T14:48:00Z is far easier for a human to understand than 1318250880000, the number of milliseconds since 1970. Add a comment field for the creator: The field comment has no value for the consuming application, but it allows the human creator to store notes in place. Case study: Manual input from Data Science For us, manual input often comes from Data Science. For example, our data folks recently found a way to classify whether a posted idea is well structured. We concluded that we could use this to send users an email pointing out the "bad ideas" and giving concrete advice on how to improve them. At this point, this is just a feature idea. We don't know if users will appreciate such an email and whether they would really act on it. So, we want to test out the idea cheaply. For the test, the Data Science team would manually analyze the data for a couple of test groups. Afterwards, we wanted to inject this manually sourced data into our automated email sending process. The Data Science team agreed with the email developers on the following sample JSON file: Based on the sample data, the developers started to adjust the email sending process. In parallel, disconnected, the data scientists did their magic to detect bad ideas for the test. Once both teams were ready, we could easily connect the two sub-solutions and send out our test emails. Happy coding! Want to learn more about coding? Have a look at our other articles. Photo: Matthew Hurst
title: Human-as-a-service
totalClapCount: 8
uniqueSlug: human-as-a-service-1d8fc0e6705c
updatedDate: 2018-04-12
updatedDatetime: 2018-04-12 08:25:02
url: https://medium.com/s/story/human-as-a-service-1d8fc0e6705c
vote: false
wordCount: 679
publicationdescription: Tech lessons learned while making innovation smart, simple and sticky.
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: Next Engineering
publicationpublicEmail: coding@collaborne.com
publicationslug: collaborne-engineering
publicationtags: CODING,SAAS,JAVASCRIPT,POLYMER,WEB DEVELOPMENT
publicationtwitterUsername: Collaborne
tag_name: Json
slug: json
name: Json
postCount: 1,438
author: Ronny Roeller
bio: CTO at @Next. Building agile SaaS platform to make innovation smart, simple and sticky. @stanforddschool @INSEAD
userId: 173afae90372
userName: ronnyroeller
usersFollowedByCount: 313
usersFollowedCount: 107
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
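
The post in Row 8 recommends agreeing on a human-friendly JSON "API" early so developers can code against it while humans fill in the content. A sketch of what the consuming side might look like, using the groups format shown in the post's code block; the file path and the specific validation checks are assumptions:

```python
import json
from datetime import datetime

def load_groups(path: str) -> list:
    """Read the human-edited file and fail loudly if the agreed format drifts."""
    with open(path) as f:
        doc = json.load(f)
    groups = doc["groups"]  # the root field makes the array's meaning explicit
    for group in groups:
        assert "id" in group and "title" in group, f"malformed group: {group}"
        # ISO 8601 createdAt: readable for humans, parseable for machines.
        datetime.fromisoformat(group["createdAt"].replace("Z", "+00:00"))
    return groups

groups = load_groups("groups.json")  # hypothetical path
print(f"loaded {len(groups)} groups")
```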

Row 9
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: 3df7fb09863c
createdDate: 2018-08-17
createdDatetime: 2018-08-17 08:21:53
firstPublishedDate: 2018-08-20
firstPublishedDatetime: 2018-08-20 06:49:08
imageCount: 6
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-08-31
latestPublishedDatetime: 2018-08-31 09:22:53
linksCount: 8
postId: 1d93161eb06f
readingTime: 2.757547
recommends: 4
responsesCreatedCount: 1
socialRecommendsCount: 0
subTitle: Dear DML Community,
tagsCount: 5
text:
DML Project Update — 20 August 2018 Dear DML Community, First we need to say thank you for the support we have received continuously despite the difficult times we have all come across in the past months. The DML Team is maintaining its composure and focusing on our product development, to deliver our mission of democratizing the machine learning space. We are committed to making it happen. In this update, we will talk about our detailed technical milestones for the next few months. Engineering Updates The prototype of the DML Algo Marketplace has been completed. Now we are moving forward to the development of our backend infrastructure, state channels, marketplace enhancements, as well as the mobile app interfaces. Special thanks to Matthew Slipper from KYOKAN, our technical advisor, for his valuable input and expertise throughout the technical planning, especially in the state channel design process. Below are our monthly technical milestones in detail: Month 1: Verify Algo Execution and Data Processing; Start Accepting Algos; Users, Bounty, Algo Off-Chain Endpoints; Complete Base UI Components; Complete New Algo UI; Complete Base Smart Contract; Complete Mobile App Tech and UI Design. Month 2: State Channel Smart Contract Complete; Algo Contract Complete; Bounty UI Complete; Mobile App Design Complete. Month 3: State Channel Hub Complete; Bounty and Job Contract Complete; Staging Server Deploy; Account Profile UI + Integration Complete; Bounty Integration Complete. Month 4: Job UI and Integration Complete; Job Endpoints Complete; Job Contract Complete; All Code Complete (Ready for final QA + Productionization); Closed Beta of Mobile App to be Launched. Month 5: Testing (Integration, Unit, Manual); QA + Productionization; Setup Basic Monitoring and Logging. Design Updates After receiving some feedback on the complexity of the current algorithm upload UX, we have been working on a more intuitive design for algorithm upload, which is expected to be implemented in our next marketplace update. Now let us take a glimpse at our upcoming algorithm upload enhancement. Fill in the info and description of the algorithm to be uploaded, including the price to be charged to algorithm users. A sample test of your pre-processing code is allowed before uploading. The DML Algo Marketplace processes the algorithm upload. You can check against the post-processing result of your algorithm. Once you are set with the info and post-processing result of your algorithm, you can click submit to publish your algorithm in the DML Algo Marketplace. Going Forward Product delivery is the core component of a project. The development process is not an easy path, as it involves a lot of feasibility studies, code testing, debugging, and integration of various technologies (smart contract interactions, state channels, mobile app interfaces). Although it is not an overnight miracle and we have to get through quite a lot of challenges ahead, we are determined to work hard and realize the objectives for our Community. We have laid out the details of our imminent milestones to inform our Community about how we are going to achieve them. Let us pave the way for our success together, and we hope you will continue to support us. The DML Team will keep providing periodic updates about our development progress; please stay tuned for the next one!
Cheers, DML Team DML Official Channels Website: https://decentralizedml.com Telegram Community: https://t.me/DecentralizedML Telegram Channel: https://t.me/DecentralizedML_ANN Medium Publication: https://medium.com/decentralized-machine-learning Youtube Channel: https://www.youtube.com/channel/UCT_qj3gQri8uARHWjHw1JNw Reddit: https://www.reddit.com/r/decentralizedML/ Twitter: https://twitter.com/DecentralizedML Facebook: https://www.facebook.com/decentralizedml/
title: DML Project Update — 20 August 2018
totalClapCount: 53
uniqueSlug: dml-project-update-20-august-2018-1d93161eb06f
updatedDate: 2018-08-31
updatedDatetime: 2018-08-31 09:22:53
url: https://medium.com/s/story/dml-project-update-20-august-2018-1d93161eb06f
vote: false
wordCount: 479
publicationdescription: Unleash untapped private data, idle processing power and crowdsourced algorithms
publicationdomain: null
publicationfacebookPageName: decentralizedml
publicationfollowerCount: null
publicationname: Decentralized Machine Learning
publicationpublicEmail: contact@decentralizedml.com
publicationslug: decentralized-machine-learning
publicationtags: BLOCKCHAIN,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,TOKEN SALE
publicationtwitterUsername: DecentralizedML
tag_name: Dml
slug: dml
name: Dml
postCount: 23
author: Decentralized Machine Learning
bio: Unleash untapped private data, idle processing power and crowdsourced algorithms
userId: 45a5246d765f
userName: decentralizedml
usersFollowedByCount: 298
usersFollowedCount: 2
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null

Row 10
audioVersionDurationSec: 0
codeBlock:
```bash
# Dependencies: Debian & Ubuntu
sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev
# Dependencies: OSX
brew install pkg-config libffi openssl python
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography==1.9
# Python & pip
pip install --upgrade setuptools pip
pip install superset
# Create a username and a password
fabmanager create-admin --app superset
superset db upgrade
# Superset ships with example data; load it with:
superset load_examples
superset init
superset runserver -d
```
codeBlockCount: 5
collectionId: null
createdDate: 2018-06-01
createdDatetime: 2018-06-01 01:10:03
firstPublishedDate: 2018-06-01
firstPublishedDatetime: 2018-06-01 02:10:47
imageCount: 2
isSubscriptionLocked: false
language: pt
latestPublishedDate: 2018-06-01
latestPublishedDatetime: 2018-06-01 05:29:58
linksCount: 2
postId: 1d93271cef3f
readingTime: 1.839937
recommends: 17
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle (translated from Portuguese): Today my Medium post focuses on this tool I had never heard of until a few days ago, but which impressed me greatly by being incredibly…
tagsCount: 5
text (translated from Portuguese):
Apache Superset: Installation & configuration tutorial Today my Medium post focuses on this tool I had never heard of until a few days ago, but which impressed me greatly by being incredibly customizable and easy to handle. Since I'm curious, I had already spent some time looking at how Redash and Dash work, but they still weren't what I was looking for because, in this case, the end client needed to view the dashboard. In short: I wanted charts that were pretty and clear so everyone would leave happy; it was a choice based much more on the interface. What is Apache Superset, what is it for, who uses it? Superset is nothing more than an (open source!) application for visualizing data and generating interactive dashboards. Airbnb, Twitter and Yahoo! are companies that make use of this treasure. I haven't seen much material in Portuguese about it yet, so I'll try to give a brief introduction to installing it and creating a dashboard in a matter of minutes. (: It's worth saying that Superset integrates with SQL through SQLAlchemy (the URI is enough) and with Druid.io. Installation via terminal First let's install some dependencies: Debian & Ubuntu OSX Windows (not yet supported) Python & pip Installing Superset Now open a tab in your browser, go to http://localhost:8088 and log in with the username and password you created. Database integration If you already have a database ready to be used, go to the Sources tab, then click Databases and give a name to the "database" field; below it, provide the SQLAlchemy URI. And that's it! After that, just try out different visualization types and parameters, and even customize the dashboards with CSS. I hope this is useful to someone! As I explore the application further, this post will be updated. ❤
title (translated from Portuguese): Apache Superset: Installation & configuration tutorial
totalClapCount: 35
uniqueSlug: apache-superset-tutorial-de-instalação-1d93271cef3f
updatedDate: 2018-06-15
updatedDatetime: 2018-06-15 03:18:13
url: https://medium.com/s/story/apache-superset-tutorial-de-instalação-1d93271cef3f
vote: false
wordCount: 386
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Python
slug: python
name: Python
postCount: 20,142
author: Paula Diniz
bio (translated from Portuguese): I build websites. And pretend to write.
userId: c8a5d7524144
userName: paula.diniz
usersFollowedByCount: 728
usersFollowedCount: 210
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
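
The post in Row 10 connects Superset to a database by pasting a SQLAlchemy URI into Sources > Databases. For reference, a sketch of verifying such a URI outside Superset before pasting it in; the credentials and database name are placeholders:

```python
from sqlalchemy import create_engine, text

# Placeholder credentials; this is the same URI you would paste into
# Superset's "Databases" form. General shape:
#   dialect+driver://user:password@host:port/database
uri = "postgresql://superset_user:secret@localhost:5432/analytics"

engine = create_engine(uri)
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # connectivity check
```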

Row 11
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: fe46556e54bc
createdDate: 2018-08-24
createdDatetime: 2018-08-24 15:16:01
firstPublishedDate: 2018-08-24
firstPublishedDatetime: 2018-08-24 15:18:38
imageCount: 0
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-08-24
latestPublishedDatetime: 2018-08-24 15:18:38
linksCount: 2
postId: 1d934bf5a4bc
readingTime: 2.279245
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Artificial intelligence innovations often consist of several elements that would be best protected by trade secret. A trade secret is any…
tagsCount: 5
text:
Using Trade Secret Protection for AI IP Artificial intelligence innovations often consist of several elements that would be best protected by trade secret. A trade secret is any business information that provides an economic benefit or competitive advantage because the information is not generally known to the public. Sometimes keeping information secret is at odds with the requirements for obtaining other IP protections, meaning that a company must choose between one form of IP protection and another. For instance, both patent protection and copyright protection generally require that the applicant seeking protection publicly disclose information related to the innovation. As such, securing patent protection and maintaining trade secret protection are mutually exclusive IP strategies. On the other hand, copyright protection can be sought in such a way that any trade secret information contained in the copyrighted work is redacted to maintain the secret status of the information. What Aspects of AI Innovations Are Well Suited for Trade Secret Protection? There are several types of information related to AI innovations that are well suited for trade secret protection. Some aspects of AI that are protectable with trade secret include: Technological know-how. How your AI innovation works is valuable business information. If your way of getting AI to work or of implementing machine learning is better than how your competitors are doing it, your way gives you a competitive advantage. To prevent others from taking advantage of your know-how, you can protect the know-how behind your AI innovation by keeping your knowledge secret. Algorithms. Algorithms are often a significant aspect of an AI innovation. However, algorithms are generally not eligible for other forms of intellectual property protection, such as patents and copyrights. The only viable alternative for protecting AI algorithms is trade secret protection. By keeping your algorithm secret, others will not be able to use your AI algorithm unless they derive it through independent discovery or reverse engineering. What Reasonable Efforts Should Be Taken to Maintain Trade Secret Protection? Trade secret protection is great for small companies and startups on a tight budget because it costs nothing to create. All that is required is that the information to be protected is kept secret and that reasonable efforts are made to maintain its secret status. A few examples of reasonable efforts to maintain trade secret protection include: Keeping a clear record of which AI assets are trade secret information (and, of course, keeping the record secret). Requiring employees to sign confidentiality agreements as part of their employment agreement. Requiring third parties, vendors, suppliers, etc. to sign confidentiality agreements. Imposing company policies that safeguard company trade secrets and confidential information. Training employees on how to keep trade secret information confidential. Encrypting any AI software code. Password-protecting the AI software code. Keeping the AI software or algorithm asset out of open source. Keeping records of who has accessed the protected code (e.g., using a log-in, log-out system). Limiting access to the protected AI trade secrets to only employees who need to have access or who are working on the AI code. Limiting any printouts of the trade secret material. 
Marking any printouts or other physical copies of the AI code with labels like “Confidential” or “Secret.” Implementing physical and digital security measures as appropriate. At The Rapacke Law Group, we make your IP needs a priority. We aim to help clients with AI innovations secure the appropriate IP protections they need. Contact us today for a free initial consultation with one of our skilled IP lawyers.
Using Trade Secret Protection for AI IP
0
using-trade-secret-protection-for-ai-ip-1d934bf5a4bc
2018-08-24
2018-08-24 15:18:39
https://medium.com/s/story/using-trade-secret-protection-for-ai-ip-1d934bf5a4bc
false
604
The IP resource for startups, inventors, and entrepreneurs.
null
rapackelawgroup
null
Intellectual Property Law Roadmap
legal@arapackelaw.com
intellectual-property-law
INTELLECTUAL PROPERTY,BUSINESS DEVELOPMENT,STARTUP LESSONS,BUSINESS STRATEGY,STARTUP
rapackelaw
Startup
startup
Startup
331,914
Andrew Rapacke
Andrew Rapacke is a registered patent attorney and serves as Managing Partner at The Rapacke Law Group, a full-service intellectual property law firm.
28295d69fda4
rapackelaw
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-04
2018-04-04 16:38:43
2018-04-06
2018-04-06 18:57:44
9
false
en
2018-06-04
2018-06-04 16:17:50
6
1d9585c4d649
5.849057
9
0
0
5 April 2018
5
Image courtesy of https://indstrlmnky.deviantart.com/art/Robot-DJ-22107727 AI Jukebox: Creating Music with Neural Networks 5 April 2018 The AI Jukebox is a neural network that generates music. Let’s start out by sampling some of AI Jukebox’s work: The AI Jukebox trains on a collection of midi music files, where it gains a “machine understanding” by mapping the latent, internal structural relationships of the dataset, and from this “understanding” is then able to create new, unique generated content. The work thus far has focused on collections of midi files by genre. A few of the genres sampled include: You can train AI Jukebox with your own collections of midi files; simply use the code and follow the operational instructions on its GitHub repo. Now that we have seen what the AI Jukebox can do, let’s dive a bit deeper into how it works, and why it is important. Generative Models I’d like to start out by pondering a quote from the legendary theoretical physicist Richard Feynman: “What I cannot create, I do not understand.” Taking a bit of liberty in interpreting, I would draw out a symbiotic relationship between creation and understanding. As in, understanding is required in order to create, and the act of creation feeds understanding: Generative models map the hidden structure within a dataset, and then new, unique content can be generated as “samples” from this mapping. How a generative model works. There is an abundance of potential use cases for generative models, with applications including the creation of new and unique images, audio and text, and perhaps in the not-too-distant future the generation of other key blueprints of our society, such as code, designs or even physical structures (such as with 3D printing). Long Short-Term Memory (“LSTM”) is a type of recurrent neural network which is often used in generative models to generate sequences of text, or in our case, notes and chords. This model is useful in that it carries “memory” which allows information to persist within the network, including long-term dependencies. The state of a recurrent neural network, with LSTM being one of the most popular, is constantly updated both via new inputs as well as via the previous state of the model. An LSTM network. Diagram courtesy of Christopher Olah’s blog. The LSTM network works via a “gate layer” structure, where each LSTM node is actually made up of several “gates” managing the “cell state”, or memory of the network. Architecture The AI Jukebox is designed as a bidirectional Long Short-Term Memory (“LSTM”) network, including two LSTM layers, two dense layers and dropout at each layer. Bidirectional LSTM neural network architecture. Bidirectional is a special type of LSTM network where there are actually two layers of LSTM. One layer trains on the sequence of notes/chords in the forward direction and the other trains on the same sequence in reverse. This is one way to more comprehensively map the latent relationships within the data. Dropout has been evenly distributed throughout so as to avoid loss of memory; sporadic use of dropout in recurrent networks has been known to cause issues of this sort, but the problem is thought to be largely mitigated if care is taken by evenly distributing dropout amongst the layers. For this model, 50% dropout was used at each layer. The input layer is 512 nodes and the softmax is configured to output one of each distinct note or chord as found in the training dataset.
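As a rough sketch (this is not the author’s published code; values the post doesn’t state, like the dense layer width and vocabulary size, are placeholders), an architecture like the one described might be assembled in Keras as follows:

```python
# Minimal Keras sketch of a bidirectional LSTM music model like the one
# described above. Assumptions: sequence_length matches the 200-note
# generation window mentioned later; n_vocab and the 256-unit dense layer
# are placeholders, not values from the post.
from keras.models import Sequential
from keras.layers import LSTM, Bidirectional, Dense, Dropout, Activation

sequence_length = 200  # notes/chords per training window (assumption)
n_vocab = 300          # distinct notes/chords in the dataset (placeholder)

model = Sequential()
model.add(Bidirectional(LSTM(512, return_sequences=True),
                        input_shape=(sequence_length, 1)))  # first LSTM layer, 512 nodes
model.add(Dropout(0.5))                                     # 50% dropout at each layer
model.add(Bidirectional(LSTM(512)))                         # second LSTM layer
model.add(Dropout(0.5))
model.add(Dense(256))                                       # first dense layer (placeholder width)
model.add(Dropout(0.5))
model.add(Dense(n_vocab))                                   # one output per distinct note/chord
model.add(Activation('softmax'))                            # probability distribution over notes
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```

Training then amounts to fitting this model on (window, next-note) pairs extracted from the midi dataset.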
The dataset input into the model can be any collection of midi music — a few different genres were explored in this exercise, including Celtic, Dance, Jazz and Classical. The AI Jukebox was built in Python with Keras and TensorFlow used for the neural network, and music21 and musescore2 for music analysis. Models were trained on an Amazon Web Services p2.xlarge instance. Music Generation Sequences of notes/chords are generated by, from a random starting point, looking at a window of sequential notes and chords (200 in our case) from the underlying dataset and predicting the 201st note based on a probability distribution (reflected in the softmax activation function). The sequence window of 200 notes will then shift over one by one until the model has generated a full sequence of the requested notes; in our case 500 notes/chords. Sequence generation by an LSTM network. Diagrams courtesy of Sigurður Skúli, Towards Data Science. Temperature is an added hyperparameter which increases or decreases the probability of any given note being chosen. A decrease in temperature will lead to more accurate, yet less interesting, rhythms. An increase in temperature will lead to more randomized note selection, which could potentially lead to more interesting pieces. But if you turn the temperature up too high you may just get random noise! Testing When listening to the music generated by AI Jukebox (see Soundcloud), we should keep the following points in mind: as the model is generative (as opposed to discriminative) there are no labels, and as such, the best judges are us in general, we are looking for repeating patterns within a reasonable long-term structure the model has been trained to minimize both training and validation loss to prevent overfitting on the underlying dataset most importantly, the music should be aesthetically pleasing For those musically-inclined, the first half of the generated Celtic Piano 1 piece is “noted” below: The training and evaluation loss from training the Celtic dataset for 200 epochs are as follows:
But it isn’t ready for the web app just yet — training times and output consistency would make this relatively infeasible at this time. AI Jukebox is just a simple first prototype in a “bleeding-edge” area of generative neural networks. It is truly amazing to think about how quickly this technology has advanced, and a bit unfathomable as to where this may lead in the not-so-distant future. In the following video, I presented the AI Jukebox as my “passion project” at Metis Data Science Bootcamp Career Day on 5 April 2018. Once again, the code and presentation are available on GitHub here. If you liked this post, a clap (or two) would be much appreciated, which will make the content more easily available for the benefit of other readers. Thanks for stopping by!
AI Jukebox: Creating Music with Neural Networks
104
ai-jukebox-creating-music-with-neural-networks-1d9585c4d649
2018-06-04
2018-06-04 16:17:51
https://medium.com/s/story/ai-jukebox-creating-music-with-neural-networks-1d9585c4d649
false
1,232
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Brian McMahon
Machine Learning and AI enthusiast. Never stop learning.
7e27e4c02938
cipher813
156
175
20,181,104
null
null
null
null
null
null
0
null
0
3d110445ee67
2018-04-15
2018-04-15 07:44:11
2018-04-15
2018-04-15 07:51:04
1
false
en
2018-04-23
2018-04-23 18:16:41
10
1d95fcfb67ba
1.14717
6
0
0
Saturday April 28, 2018 • iHub, Nairobi
5
🔥 TensorFlow Dev Summit 2018 Extended Nairobi Saturday April 28, 2018 • iHub, Nairobi We are thrilled to announce that we will be hosting The TensorFlow Dev Summit Extended Nairobi on 28th April 2018. Please join us at the iHub for an immersive event that will bring together a diverse mix of machine learning users from around Nairobi for a full day of highly technical sessions, talks, demos, and conversation with the TensorFlow community. About TensorFlow TensorFlow™ is an open source software library for high performance numerical computation. It was originally developed by researchers and engineers from the Google Brain team within Google’s AI organization. Learn More → https://goo.gl/SKZ7iL Event Agenda We’ll have deep dive sessions, as well as introductory overviews, so all levels of TensorFlow familiarity are welcome! Below is the agenda: 09:00–09:30 Registration 09:30–11:00 MLCC part one 11:00–11:20 Break 11:20–13:00 MLCC part two 13:00–14:00 Lunch 14:00–16:00 TensorFlow Dev Summit Extended How/where can I apply to attend? Registration is now open, but keep in mind that space will be filled on a first-come, first-served basis, so make sure to RSVP here → https://goo.gl/eWRii2. Resources TensorFlow’s Get Started Guide Google’s Machine Learning Crash Course Camron’s TensorFlow in a Nutshell series Annenberg Learner’s course on Mathematical Models Wikipedia’s article on Machine Learning Robby’s explanation of AI, ML, Deep Learning and Data Science For the most up-to-date information, please keep in touch with GDG Nairobi. You can also join the conversation on Twitter: #TFDevSummit #Nairobi #Extended See you then!
🔥 TensorFlow Dev Summit 2018 Extended Nairobi
68
tensorflow-dev-summit-2018-extended-nairobi-1d95fcfb67ba
2018-06-13
2018-06-13 17:36:25
https://medium.com/s/story/tensorflow-dev-summit-2018-extended-nairobi-1d95fcfb67ba
false
251
Google Developers Group Nairobi is a community-focused publication based on GDG Nairobi, an open and volunteer geek community who creates exciting projects and share experiences about Google technology with the passion.
null
gdgnairobi
null
GDG NAIROBI
null
gdg-nairobi
GOOGLE DEVELOPER GROUP,COMMUNITY
gdgnairobi
Machine Learning
machine-learning
Machine Learning
51,320
Ngesa Marvin
IoT at GDGs, SSA | Intel AI Ambassador | Co-Lead, GDG Nairobi | EEE — Telecom Engineer, Hacker & Maker - Android+Electronics+AI #IoT #5G Freak | Opinions = Mine
3d4aa1e43527
ngesa254
1,718
852
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-13
2018-09-13 20:31:22
2018-09-13
2018-09-13 20:34:25
1
false
en
2018-09-14
2018-09-14 22:38:31
11
1d97ae1aa7fd
3.135849
2
0
0
It’s been another weekend with another miss by traditional movie tracking folk. The Nun, the fourth installment of The Conjuring series…
3
The Success of “The Nun” Was No Miracle. And Vault Knew it in March. So, who is Next? It’s been another weekend with another miss by traditional movie tracking folk. The Nun, the fourth installment of The Conjuring series, surprised many to become the most profitable and popular installment of the franchise with a weekend take of $53.5 million. So, why did tracking, which Variety reported sat at $32 million, miss the boat again? Our clients caught the trend in March via our predictive analytics platform nearly six months ago. Every metric, from story to consumer demand, had shown us that the movie would be a franchise topper. So why did conventional movie tracking have such a shaky read on the audience? 2018: The Year of the Franchise Topper? The Nun isn’t the only franchise installment of 2018 to become a runaway hit and franchise topper. Who could forget Black Panther, the first Marvel origin movie to top $200 million? The Conjuring series has been a nice money maker. Each installment has grossed more than $35 million opening weekend. But none of them ever cracked the mid-$40s and certainly none broke the $50 million mark. So what happened? Tracking Real Consumer Demand Like consumers of mobile phones, athletic shoes, and music, movie goers react to story messaging in different ways. They delve deeper into the story, watch trailers, discuss with friends and more — before and after they see them in theaters. Our analytics platform is designed to follow and analyze online movie consumer behavior. Using our deep learning artificial intelligence, we distill the data into signals and we follow the level of demand for 24 weeks into the future. This means the platform at any given time has data on over 70 movies. This data is built to alert our clients to breakout movies and struggling titles alike. When we first started tracking The Nun on March 21, 2018, we saw that it was performing way above Annabelle: Creation, which had grossed $35 million over its first weekend. Out of the 70 movies that we were tracking on our platform, The Nun held twice the demand levels of Skyscraper that was slated for a July release. Taking a Better Look at Your Competitors For too long, the entertainment industry has measured its movies against other movies in a rather random fashion — as if they’ve plucked them out of thin air. These competitors or “comps” are both a blessing and a curse. If the comp is indeed similar to another movie, it can help project results. But if the comp is not a good comparison, the resulting business decisions can be faulty as well. For example, if a talented executive suggests that this faith-based movie is similar to Passion of the Christ but needs only one-tenth the budget and will make similar returns, then investing in the movie is a no brainer. Comps can provide great context. They’re helpful, yet fundamentally flawed. Why? Because in reality, comps are just 3–4 movies. By analyzing against a few chosen comps executives are taking a 2D approach. Analyzing a title against all movies in play allows you to take a true 3D approach. What Does the Future Hold? Vault AI picked up The Nun as an all-time-high franchise winner on March 21, 2018. So what are the future titles to watch? Which movies should be moved to new dates to make way for the future blockbusters? · New release White Boy Rick is tracking as one of the most in-demand titles from a story perspective.
Audiences are not looking at White Boy Rick as a Matthew McConaughey movie, rather they are highly attracted to the story being presented by Studio 8. · 18 weeks before its launch, Glass is pushing ahead. Right now it is tracking more than 10% higher than Split at the minus 18 week mark. What’s more, Glass holds a 60% demand market share for the weekend before it opens, the week of its opening and the week after its opening. This means Universal executives will have almost all the attention of movie goers across a three-week period, while other titles will have to fight for their lives to get noticed. This type of demand is also seen in the upcoming October remake of A Star is Born. · Meanwhile, Aquaman, still fifteen weeks out from release, is tracking a little lower than Justice League, as audiences focus more on Jason Momoa, the main actor, than on the actual story. Executives will need to focus their promotional efforts around the story for it to be a breakout movie.
The Success of “The Nun” Was No Miracle. And Vault Knew it in March. So, who is Next?
2
the-success-of-the-nun-was-no-miracle-and-vault-knew-it-in-march-so-who-is-next-1d97ae1aa7fd
2018-09-14
2018-09-14 22:38:31
https://medium.com/s/story/the-success-of-the-nun-was-no-miracle-and-vault-knew-it-in-march-so-who-is-next-1d97ae1aa7fd
false
778
null
null
null
null
null
null
null
null
null
Movies
movies
Movies
84,914
David Stiff
CEO @ Vault Analytics. Content lover. Data enthusiast. Tech entrepreneur. << www.vault-analytics.com >>
f41f030e3467
dstiff
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-04
2018-05-04 16:23:47
2018-05-04
2018-05-04 17:41:28
2
false
en
2018-05-04
2018-05-04 17:41:28
0
1d97d43f9974
3.953145
2
0
0
On paper, Alexander Nix and I might appear to be best friends. He was one of the first four people in the entire world to know — even…
5
Thanks for the great engagement and life lessons, Cambridge Analytica On paper, Alexander Nix and I might appear to be best friends. He was one of the first four people in the entire world to know — even before any of my family — that I was engaged (and no, he didn’t acquire that knowledge from a changed relationship status in Cambridge Analytica’s trove of Facebook data). He was the first person to buy my fiancee and me champagne upon news of our engagement. I even took a personal photograph with him and a few of his close associates in Istanbul several years ago. In spite of the appearance of our close relationship, I haven’t spoken to Alexander Nix in four years. Our meeting was actually the result of a chance encounter in 2014. We met on the banks of the Bosphorus in Istanbul when my just-proposed-to fiancee and I hopped off of a private yacht in front of a popular waterfront restaurant. Not a half-bad proposal — even if I do say so myself I had chartered the boat to carry out my surprise proposal, and when the captain dropped us off at the restaurant, Mr. Nix and his three compatriots were out for a stroll in front of us. After a brief chat, we parted ways. A true class act: when my fiancee and I arrived at our table, a bottle of champagne was there to greet us — sent over from Alexander himself. Congratulatory bottle of champagne sent from Mr. Nix (Before you comment, yes, I know real champagne comes from France) Obviously, my brief encounter doesn’t permit me to adjudicate whether Cambridge Analytica is guilty of the allegations against them. However, the recent public outcry against the company, and social media in general, suggests that a bigger problem is afoot than third-party agents using social media data for questionable purposes. The problem seems to be with the way we use social media — and what we expect from it. Here is a microcosmic example I discovered scrolling through my Facebook feed recently, written by a young white male: “Hey white folks. The N-word is not our word. It cannot be used by us with context or without context. Just don’t do it.” Admirable thought? Sure. But what is it that statements like this suggest we really care about — is it about the defense of maligned and oppressed minority groups (never mind the fact that it is difficult to conceive of anything more racist than assuming to speak on behalf of an entire group)? Or is this perhaps more about in-group virtue signaling? I am inclined to believe it is frequently the latter. There appears to be an expectation of a miracle where readers of our latest Facebook post or Tweet marvel at our reason and rationality, and walk away as new converts based on the enlightenment we’ve conferred on them. We seem to convince ourselves that we are the benevolent arbiter of a piece of information that will be the final piece of the puzzle; that will make a reader (or perhaps more subtly, our muse) finally go, “Aha! That’s what I was missing”. Speculation of our intentions aside, the things we say on social media belie our apparent concern for privacy. We want people to know what we believe. Feel free to peruse your own social media feed for confirmation of that fact ad nauseam. Ironically, there is an expectation that only our “friends” (or imagined intellectual foes) will heed our message — and any interested third-parties will turn a blind eye. This expectation reveals a certain naivety.
But if we are as smart and well-informed as we make ourselves sound in social media feeds, shouldn’t we be able to easily spot nefarious actors and those with dubious intentions that might seek to influence our opinions? The public outcry against Facebook, Cambridge Analytica, and other key-holders to “our” data suggests that in the darker corners of our mind, doubts linger. We might be less sure of what we believe than what we outwardly project. By seeking validation from what often becomes a homogeneous echo-chamber of our own creation, maybe the real person we’re trying to convince of what we believe is ourselves. At the time of my meeting Mr. Nix, I had never heard of him or Cambridge Analytica. Over the years, I’ve come to realize the gravitas carried by Mr. Nix and his secretive analytics firm. Unfortunately, I suspect Alexander Nix, Cambridge Analytica and Facebook may become short-sighted scapegoats whose blood will only temporarily cover the truly culpable entity. Who is ultimately to blame? The ironic thing about the court of public opinion is that it frequently lacks the introspective insight to condemn the most likely offender: each and every one of us. Trans-humanists and futurists frequently speak of “The Singularity”, that hypothetical point in time where artificial intelligence exceeds the capability of biological humans. I wonder, though — will we become enslaved to machines because they are truly more intelligent than us, or will machines become our masters simply because we outsourced the most basic of human characteristics — virtue and valuation — to them? The thought of human enslavement to machines seems far-fetched, but perhaps the perceived distance is merely an illusion tainted by a culturally imposed “Terminator”-esque expectation of super-intelligent robotic overlords wielding guns. If we allow — or as increasingly seems to be the case, demand — technology platforms like Facebook and Twitter to algorithmically filter “fake news”, we may be awakening a Frankensteinian monster of a different kind; the lack of an anthropomorphic form should not belie its intrinsically dangerous nature. We can likely engineer out “stupid” or “fake”, but perhaps a better question is, “Should we not demand more of ourselves?”
Thanks for the great engagement and life lessons, Cambridge Analytica
2
thanks-for-the-great-engagement-and-life-lessons-cambridge-analytica-1d97d43f9974
2018-05-04
2018-05-04 20:22:09
https://medium.com/s/story/thanks-for-the-great-engagement-and-life-lessons-cambridge-analytica-1d97d43f9974
false
946
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Daniel Barker
Engineer with a cybernetic bent. Unabashed pseudo-intellectual; can usually be found struggling to capture deeper thoughts in prose ’n code.
3ca6d0746567
daniel_c_barker
62
31
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-03-28
2017-03-28 20:23:01
2017-03-28
2017-03-28 21:14:53
1
false
en
2018-02-27
2018-02-27 21:35:15
16
1d97d4c3e9b
3.049057
791
91
0
New VM image — updated March 2018!
2
Try Deep Learning in Python now with a fully pre-configured VM New VM image — updated March 2018! I love to write about face recognition, image recognition and all the other cool things you can build with machine learning. Whenever possible, I try to include code examples or even write libraries/APIs to make it as easy as possible for a developer to play around with these fun technologies. But the number one question I get asked is “How in the world do I get all these open source libraries installed and working on my computer?” If you aren’t a long-time Linux user, it can be really hard to figure out how to get a system fully configured with all the required machine learning libraries and tools like TensorFlow, Keras, OpenCV, and dlib. The majority of the issues that get filed on my own open source projects are about how to install these tools. A lot of people get stuck while installing everything and give up before ever getting to play around with any code. That’s a shame! Try out machine learning now with this pre-configured Ubuntu VM. Everything is ready to go. There’s no reason it should be so hard to try things out in 2017. To make it simple for anyone to play around with machine learning, I’ve put together a simple virtual machine image that you can download and run without any complicated installation steps. The virtual machine image has Ubuntu Linux Desktop 16.04 LTS 64-bit pre-installed with the following machine learning tools: Python 3.5 OpenCV 3.2 with Python 3 bindings dlib 19.9 with Python 3 bindings TensorFlow 1.5 for Python 3 Keras 2 for Python 3 face_recognition for Python 3 (for playing around with face recognition) PyCharm Community Edition already set up and ready to go for all these libraries Convenient code examples ready to run, right on the desktop! Even the webcam is preconfigured to work inside the Linux VM for OpenCV / face_recognition examples (as long as you set up your webcam to be accessible in the VMware settings). Note: This is a desktop VM meant for educational purposes, not a VM meant for use on a server. Due to licensing and installation complications, there’s no GPU acceleration / CUDA support provided. So you don’t need an Nvidia GPU to try this out, but it also won’t take advantage of a GPU if you have one. How to download and run the Deep Learning VM in 3 simple steps: Download the 7.7GB VM .tar.gz file (hosted on a fast connection thanks to the fine people at CYDNE!). Uncompress the file when the download is complete. A VM for VirtualBox is also available, but the performance in VirtualBox can be pretty bad. So don’t use the VirtualBox version unless you don’t have any other choice. You need VMware to run this virtual machine image. If you don’t already have VMware installed, download the appropriate version for your operating system. Windows or Linux users should download the free VMware Workstation Player. Mac users can grab the free VMware Fusion 30-day demo. Launch VMware, open the VM image and run it! Linux should boot right up. See below for the user account password. Tips The username is ‘deeplearning’ and the password is ‘deeplearning’. You might want to change the password after you log in. This is a 64-bit virtual machine. You’ll need a 64-bit CPU, circa 2011 or newer to run it. Sorry, but it won’t work if you have an older CPU in your computer. If you launch PyCharm Community Edition from the left sidebar, there are several pre-created projects you can open. Try the face_recognition, OpenCV or Keras projects and run some of the demos.
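To give a quick taste of what those demos do, here’s a minimal face_recognition example (a sketch, assuming an image file named my_picture.jpg in the working directory — not one of the bundled demo scripts verbatim):

```python
# Minimal face detection with the face_recognition library.
# Assumes "my_picture.jpg" exists in the current directory.
import face_recognition

image = face_recognition.load_image_file("my_picture.jpg")

# Find the bounding box of every face in the image
face_locations = face_recognition.face_locations(image)

print("Found {} face(s) in this photograph.".format(len(face_locations)))
for top, right, bottom, left in face_locations:
    print("A face is located at top: {}, left: {}, bottom: {}, right: {}".format(
        top, left, bottom, right))
```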
Right-click on the code window and choose “Run” to run the current file in PyCharm. If you configure your webcam in VMware settings, you can access your webcam from inside the Linux virtual machine! Try running one of the face_recognition webcam demos after setting it up. Have fun! If you are new to machine learning, you might enjoy my Machine Learning is Fun series. Try starting with Part 1. If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write new articles. You can also follow me on Twitter at @ageitgey, email me directly or find me on LinkedIn. I’d love to hear from you if I can help you or your team with machine learning.
Try Deep Learning in Python now with a fully pre-configured VM
3,224
try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b
2018-06-21
2018-06-21 12:36:22
https://medium.com/s/story/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b
false
755
null
null
null
null
null
null
null
null
null
Linux
linux
Linux
10,410
Adam Geitgey
Interested in computers and machine learning. Likes to write about it.
ba4c55e4aa3d
ageitgey
40,362
36
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-26
2018-07-26 16:35:51
2018-07-26
2018-07-26 20:53:53
12
false
en
2018-07-26
2018-07-26 23:49:03
1
1d98286cf1e4
3.829245
12
0
0
This post is designed to be an overview on concepts and terminology used in deep learning. It’s goal is to provide an introduction on…
4
Deep Learning: Overview of Neurons and Activation Functions This post is designed to be an overview of concepts and terminology used in deep learning. Its goal is to provide an introduction to neural networks, before describing some of the mathematics behind neurons and activation functions. What is an Artificial Neural Network? Neural networks can learn complex patterns using layers of neurons which mathematically transform the data The layers between the input and output are referred to as “hidden layers” A neural network can learn relationships between the features that other algorithms cannot easily discover Multilayer Perceptron (MLP) The above diagram is a Multilayer Perceptron (MLP). An MLP must have at least three layers: the input layer, a hidden layer and the output layer. They are fully connected; each node in one layer connects with a weight to every node in the next layer. The term “deep learning” was coined for machine learning models built with many hidden layers: deep neural networks. What is a neuron? An artificial neuron (also referred to as a perceptron) is a mathematical function. It takes one or more inputs that are multiplied by values called “weights” and added together. This value is then passed to a non-linear function, known as an activation function, to become the neuron’s output. The x values refer to inputs, either the original features or inputs from a previous hidden layer At each layer, there is also a bias b which can help better fit the data The neuron passes the value a to all neurons it is connected to in the next layer, or returns it as the final value The calculation starts with a linear equation: z = w·x + b Before adding a non-linear activation function: a = f(z) Which brings us to our next question… What is an Activation Function? An activation function is a non-linear function applied by a neuron to introduce non-linear properties in the network. A relationship is linear if a change in the first variable corresponds to a constant change in the second variable. A non-linear relationship means that a change in the first variable doesn’t necessarily correspond with a constant change in the second. The variables may still impact each other, but in a way that appears unpredictable. A quick visual example: by introducing non-linearity we can better capture the patterns in this data Best fit linear and non-linear models Linear Activation Function A straight line function: a is a constant value Values can get very large The linear function alone doesn’t capture complex patterns Sigmoid Activation Function A non-linear function so can capture more complex patterns Output values are bounded so don’t get too large Can suffer from “vanishing gradient” Hyperbolic Tangent Activation Function A non-linear function so can capture more complex patterns Output values are bounded so don’t get too large Can suffer from “vanishing gradient” Rectified Linear Unit (ReLU) Activation Function A non-linear function so can capture more complex patterns Values can get very large As it does not allow for negative values, certain patterns may not be captured Gradient can go towards 0 so weights are not updated: “dying ReLU problem” Leaky ReLU Activation Function A non-linear function so can capture more complex patterns Attempts to solve the “dying ReLU problem” Values can get very large Alternatively, instead of using 0.01, the slope can also be a parameter, α, which is then learned during training alongside the weights.
This is referred to as Parametric ReLU (PReLU): Softmax Activation Function Each value ranges between 0 and 1 and the sum of all values is 1 so can be used to model probability distributions Only used in the output layer rather than throughout the network Summary Hopefully this post was valuable in providing an overview of neural networks and activation functions. The next post continues by discussing which final-layer activation functions should be used with which loss function depending on the purpose of building the model. Deep Learning: Which Loss and Activation Functions should I use?
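As a companion to the definitions above, here is a small NumPy sketch of the activation functions discussed, plus a single-neuron forward pass (a minimal illustration, not code from the original post):

```python
import numpy as np

def linear(z, a=1.0):
    return a * z                          # straight line; values can grow unbounded

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # bounded to (0, 1); can suffer "vanishing gradient"

def tanh(z):
    return np.tanh(z)                     # bounded to (-1, 1); can also saturate

def relu(z):
    return np.maximum(0.0, z)             # zero for negatives; risk of "dying ReLU"

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope for negatives; alpha is learned in PReLU

def softmax(z):
    e = np.exp(z - np.max(z))             # subtract max for numerical stability
    return e / e.sum()                    # outputs sum to 1: a probability distribution

# A single neuron: weighted sum of inputs plus bias, then an activation
x = np.array([0.5, -1.2, 3.0])            # inputs
w = np.array([0.1, 0.4, -0.2])            # weights
b = 0.05                                  # bias
z = np.dot(w, x) + b                      # linear part
a_out = sigmoid(z)                        # neuron output
```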
Deep Learning: Overview of Neurons and Activation Functions
122
deep-learning-overview-of-neurons-and-activation-functions-1d98286cf1e4
2018-07-26
2018-07-26 23:49:03
https://medium.com/s/story/deep-learning-overview-of-neurons-and-activation-functions-1d98286cf1e4
false
657
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Stacey Ronaghan
Data Scientist keen to share experiences & learnings from work & studies
60a50d133053
srnghn
137
1
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-04-24
2018-04-24 23:33:57
2018-04-24
2018-04-24 23:40:50
3
false
en
2018-04-24
2018-04-24 23:43:55
4
1d9afc5abb87
1.980189
6
0
0
On April 17th, researchers from Carnegie Mellon University and Petuum, a Pittsburgh-based CMU spinoff focused on artificial intelligence…
4
New Petuum & CMU Paper Identifies Statistical Correlation Among Deep Generative Models On April 17th, researchers from Carnegie Mellon University and Petuum, a Pittsburgh-based CMU spinoff focused on artificial intelligence platforms, jointly published On Unifying Deep Generative Models. The paper introduces a high-level theoretical connection between various deep generative models, particularly Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). It has been accepted as a 2018 ICLR Conference Paper. The paper’s researchers suggest that GANs and VAEs lack a unified statistical connection due to their distinct generative parameter learning paradigms. The researchers derived a new GAN formulation that has many similarities with VAEs, which could spark innovations in R&D of GANs and VAEs, and help researchers discover new common rules of machine intelligence that were previously undetected. Both VAEs and GANs involve minimizing KL divergences between their respective posterior and inference distributions, optimizing the generative parameter θ in opposite directions. It is straightforward to inspire new extensions to GANs and VAEs by borrowing ideas from each other. For example, the importance weighting technique originally developed for enhancing VAEs can naturally be ported to GANs and result in enhanced importance weighted GANs. According to the original post, many advantages can be achieved by this unified statistical view: Provide new insights into the different model behaviors. For example, it is widely observed that GANs tend to generate sharp, yet low-diversity images, while images by VAEs tend to be slightly more blurry. Formulating GANs and VAEs under a general framework would facilitate formal comparison between them and offer explanations of such empirical results. Enable a more principled perspective of the broad landscape of generative modeling by subsuming the many variants and extensions into the unified framework and depicting a consistent roadmap of the advances in the field. Enable the transfer of techniques across research lines in a principled way. For example, techniques originally developed for improving VAEs could be applied to GANs, and vice versa. * * * Author: Alex Chen | Editor: Tony Peng, Michael Sarazen * * * Subscribe here to get insightful tech news, reviews and analysis! IJCAI 2018 — Alimama International Advertising Algorithm Competition There are $28,000 worth of prizes to be won in Alibaba Cloud’s Tianchi International Advertising Algorithm competition! Learn more here and begin competing today!
New Petuum & CMU Paper Identifies Statistical Correlation Among Deep Generative Models
109
new-petuum-cmu-paper-identifies-statistical-correlation-among-deep-generative-models-1d9afc5abb87
2018-05-22
2018-05-22 21:07:51
https://medium.com/s/story/new-petuum-cmu-paper-identifies-statistical-correlation-among-deep-generative-models-1d9afc5abb87
false
379
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Carnegie Mellon
carnegie-mellon
Carnegie Mellon
182
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
cannot convert value of type ‘CVaListPointer’ to expected argument type ‘va_list’ (aka ‘__va_list’)
1
null
2018-01-02
2018-01-02 13:45:30
2018-01-24
2018-01-24 09:37:31
2
false
en
2018-10-30
2018-10-30 11:38:30
20
1d9bcd6559dc
4.851258
33
1
0
A simplified AI/ML embedded processing stack
3
Getting Swift to run on NVIDIA Jetson TX2 A simplified AI/ML embedded processing stack One of the core challenges when developing AI/ML embedded processing systems is the ability to balance processing power, portability and development speed. While traditionally AI/ML problems are being solved in the Cloud, some of our special projects involving live video processing and computer vision call for an on-site, low-latency & power-efficient solution. The recently released NVIDIA Jetson TX2 supercomputer board has been a game-changer in the selection of platforms available to us for this purpose. The TX2 remains the fastest, most power-efficient embedded AI computing device at the moment. With the TX2, NVIDIA managed to create a very powerful package which is ideal for our requirements in live video processing, object/face detection and Deep Neural Network analysis. With 4 standard CPU cores (+2 auxiliary Denver cores), it delivers enough power for running data-intensive processing + the additional 256 CUDA cores on the GPU give our data scientists something to run their complex deep learning neural networks on. NVIDIA Jetson TX2 Developer Kit Why Swift? One practical challenge we encountered when working with NVIDIA hardware (and embedded systems in general) is choosing a programming language/platform to build our end-user solutions. Different dev teams have experience with different technology stacks. Programming languages commonly used by data scientists differ greatly from ones used by application programmers to implement business logic or hardware integrations. The problems we’re trying to solve, however, require a close cooperation between the two departments. That’s why we decided to integrate Swift, the latest programming language from Apple, which has taken the iPhone/iOS development world by storm since its introduction in 2014. From our perspective, Swift has multiple advantages: Performance: being a statically-typed, machine-code-compiled language it has performance equal to that of native C/C++. This is critical for applications like video processing and computer vision. High level programming: Swift is a modern language with multiple features which promote fast & correct programming. Paired with a relatively complete standard library and useful dependencies, it gives us development efficiency compared to traditional scripting/interpreted languages. Integration: being a natively-compiled language with its roots in Objective-C means we can easily integrate with C/C++ code, which is traditionally used to develop components of the ML/Vision processing pipeline. A wide pool of dev talent: it’s much easier to find talented Swift developers to help us with embedded app development without major on-boarding/tech-switch hurdles. Future Python integration: Chris Lattner, author of Swift, recently hinted at the upcoming Python integration which will make it even easier for us to integrate upper software layers with low-level science code which is often available in Python. Similar advantages are offered by Kotlin Native, but we haven’t explored that path yet. Getting Swift to run on TX2 wasn’t obvious and we were clearly pioneers in this area. Below, we’d like to share specific steps & fixes we had to apply to get our full stack (including Swift Package Manager) running on the NVIDIA Jetson system-on-chip. Build issues 1. “Nor” memory constraint issue in LLVM The Swift version open-sourced by Apple and openly developed on GitHub is mostly tweaked to run on the x86_64 platform (Intel 64-bit).
TX2 is based on aarch64 architecture (ARM 64-bit, the less popular variant). The LLVM compiler infrastructure has a rare issue generating aarch64 code in which the nor memory constraint cannot be resolved properly (during libdispatch compilation). This is mostly documented in the LLVM Bugzilla but unresolved at the moment of publishing this post. With a little help from Saleem at Facebook/LLVM.org we were able to replace nor with a more generic m constraint and get it working properly. With the patch applied, LLVM is able to build the libdispatch required for the Core Foundation library to work properly. LLVM Patch 2. ld gold linker issue in NVIDIA Jetpack NVIDIA ships a slightly older version of binutils which has relocation issues when applying TLSDESC relocations with no TLS segment. Fortunately the issue has been fixed in newer versions and a patch is available. We repacked binutils with the patch applied for Jetpack: ⭐️ld.gold patch ⭐️Ready-made updated binutils Debian packages for Jetpack TX2 3. Swift Package Manager hardcoded x86_64 paths Even though the Swift compiler by itself is fully cross-platform ready (after all, it’s used to compile all the ARM-based iPhone apps!), the Swift Package Manager is not fully there yet; a lot of things are hardcoded. Even though the Package Manager has a basic notion of 3 platforms (Darwin, Linux, Android), it makes a hard assumption that Linux means x86_64-unknown-linux. This can be observed, e.g., in the Triple.swift source: Triple.swift snippet We worked around it by adding proper aarch64 enums to the class and replacing x86_64 with aarch64 where necessary. This is a somewhat temporary solution and a proper fix would be to add true cross-platform support to Swift Package Manager (work in progress, as we were able to learn). AArch64 support in Swift Package Manager patch 4. Variadic arguments issue in NSString Foundation implementation Finally, the last problem came up during CoreFoundation compilation. Variadic args support on aarch64 seems to be “almost finished” in LLVM/Swift, yet lacking the last VaListBuilder bit required for proper conversion when calling into C. This results in build errors in the style of: …in Swift itself, when working with native C/Swift vaargs conversion. We weren’t able to find a fully reliable solution to this problem yet, so we simply asserted all the vaargs-based initialisers for `NSString`. It seems those constructors are very rarely used and we weren’t able to hit a code path with our usage where the assert would be triggered. But this issue definitely requires fixing. Hopefully this will be provided soon, as there seems to be a lot of active development in this area. Summary After resolving all the issues, we were able to get a fully working Swift compiler + Foundation library and a Swift Package Manager able to build integrated Swift/C code in one go. This setup allows us to use Swift, a modern, high-level system programming language, to quickly develop and iterate on the business/app/model layers of our projects. We also benefit from a wide pool of senior Swift/iOS developers on board to review & improve our code. Plus, we don’t lose the ability to integrate C/C++ industry-standard ML/AI/Data-science tools & libraries that make full use of NVIDIA GPU superpowers. Hopefully, the remaining aarch64 bootstrapping hurdles can be resolved in the LLVM/Swift repositories soon.
In the next weeks/months, we will be working with the upstream developers to make sure this works out of the box without any compromises on all arm64 boards. We think this represents a top-of-the-line dev ecosystem for developing modern, reliable embedded AI/ML vision processing applications. Has this post made you curious for more? Let us know! The article was written by Michał Dominik Kostrzewa, our Head of Special Projects at YND. Feel free to reach out to us via hello@ynd.co with questions about your tech projects.
Getting Swift to run on NVIDIA Jetson TX2
481
getting-swift-to-run-on-nvidia-jetson-tx2-ai-computing-platform-1d9bcd6559dc
2018-10-30
2018-10-30 11:38:30
https://medium.com/s/story/getting-swift-to-run-on-nvidia-jetson-tx2-ai-computing-platform-1d9bcd6559dc
false
1,184
null
null
null
null
null
null
null
null
null
Swift
swift
Swift
13,689
YND
Product Agency and Startup Studio based in Berlin. We work with FinTech, Wearables, Virtual Reality and Machine Intelligence.
17da4aff9cc6
ynd
165
64
20,181,104
null
null
null
null
null
null
0
null
0
948d4f9c991
2018-04-16
2018-04-16 14:06:48
2018-04-16
2018-04-16 14:25:13
8
false
en
2018-04-16
2018-04-16 17:48:09
4
1d9c25786bc4
6.469182
2
0
0
What will Bitcoin do
5
Bitcoin Trade Signals What will Bitcoin do When asked what the stock market will do, J.P. Morgan replied, “It will fluctuate.” If we could hypothetically ask Mr. Morgan another question, very popular these days, I bet his answer would be “It will fluctuate a lot”. Of course, the question is about the most hyped thing these days after Deep Learning: What will Bitcoin do? The SmartCat team answers that question with mathematical precision, so by the second paragraph you will start trading and by the end of the post, you will be rich. :) And by the beginning of this sentence, you’ve probably realized I was joking. After reading a lot about cryptos, listening to many failure scenarios and blooming future prospects, I was extremely impressed by how little we know about the main influence factors and how poorly we can quantify the risks of investing in Bitcoin. Although the idea of decentralized currency is something I truly believe can bring benefits to the economy and reallocate global resources, before going “all in”, we definitely need to understand better what makes Bitcoin hit the ceiling. This blog post is about our journey of transforming public emotions, big news, and blockchain data into signals which can provide us with a better understanding as well as instructions for investing. In the first part, we will describe how we collect and measure public emotions and fundamental data related to blockchain. Then we will continue with prediction models and results of our exploratory analysis. In the end, we will describe how we want to use predictions of our models to empower trader strategies and conclude with further ideas for development. FUD and FBD (Fear, Uncertainty and Doubt and Fundamental Bitcoin Data) Let’s take a good look at Twitter and imagine a possible reaction to these tweets. Oh, poor Woz, he was first enticed by Jobs, and now deceived by this bad Bitcoin gang. Too fraudulent for me, I’m not gonna have anything to do with these wannabe currencies! Good job Germany! I knew someone would start noticing the potential and prosperity Cryptos can bring! I should invest anytime soon! Although exaggerated, these two examples represent some part of public sentiment which can be an incentive to buy or sell. These actions make the market go up or down, so tracking the sentiment could be a useful price predictor. Our first approach relies exactly on this fact. We tracked many people who proved to be big shots in the crypto world, a.k.a. influencers on Twitter, collected their tweets and estimated the sentiment. We decided to use Twitter as a social platform for this usage because its content is in microblog format, which is convenient for processing, but any other network or source of information could be added. One of the possible ways to estimate the sentiment of a sentence is to train a model on labeled data. In theory, we could read every tweet and label it with a number from -1 to 1 based on our personal impression. This is followed by training, tuning, testing and voila, the estimator is ready. This process is extremely time- and energy-consuming so we wanted to do better. The next idea was to use a pretrained model, but the main problem with pretrained models is that their training set is a domain-specific corpus, which means that we would be using a model trained on data with a different distribution than our data. In the end, the solution was to use a rule-based sentiment analysis tool.
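As an illustration — the post doesn’t name the exact tool the team used — a minimal sketch with VADER, one popular rule-based sentiment analyzer, looks like this:

```python
# Rule-based sentiment scoring with VADER (one popular choice; shown here
# only as an illustration, not necessarily the tool used in this project).
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweet = "Good job Germany! I should invest anytime soon!"
scores = analyzer.polarity_scores(tweet)
# scores is a dict with 'neg', 'neu', 'pos' and 'compound' keys,
# where 'compound' is a normalized overall score in [-1, 1]
print(scores["compound"])
```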
Most of the features were inspired by existing work in trading, but some of the features were the results of our own intuition. For example, one of the most interesting insights comes from a feature constructed as the ratio of the mean and standard deviation of the positive score of tweets over the previous 30 days, which is shown below. This feature had a high correlation with the Bitcoin price (0.86 for a year) and managed to precede some big price leaps. Although we are aware that correlation is not causation, we can provide an interpretation of this result. If the standard deviation is high, which means that the sentiment is highly volatile, even if the overall score is positive, we are not convinced that the Bitcoin market is doing well. However, if everything seems stable and if most people are talking positively, this will be captured by a high value of this feature, and this kind of situation has potential for a bullish scenario. Finally, this may be our way of quantifying FUD :) On the other hand, we also believe that, besides news and speculations, some very solid things could bring valuable information to our models. Blockchain data is a set of time series which represents diverse things related to Blockchain and Bitcoin. We used these time series also as features of our models. Prediction models With all prepared data, the next step is … machine learning. Firstly, we performed training on a daily level. This was a straightforward decision because one of the data sources was on that granularity. In the previous step we created as many features as we could, so it is very probable that some of them are not useful or are highly correlated with each other. In order to have a robust model with good generalization power, we put some effort into reducing the number of features. Feature selection methods were used to choose a set of features which bring the best metrics. Of the fifteen most important features according to these methods, ten were created from Twitter sentiment. Then we fed these data into several models, ranging from SVMs to ensemble models. In the end, we presented our results with interactive Superset dashboards. You can read more about the Superset setup here. Comparing our classification accuracy with several baselines, including one which predicts the same as the previous day and one which predicts a class at random, we beat them with an overall accuracy of 72%. The dashboard with results is live (user: guest/guest) so you can check it out any time you want; a preview of the Bitcoin Trade Signals dashboard is shown below. Trader Now comes the catchy thing, or the big question. How can we use outputs from our models to decide whether to buy or sell? One of the simplest answers would be to use signals from our dashboard in order to make a better-informed decision. But we look for more than just reading numbers and following the gut feeling. We want an automated trader powered by our predictions. The idea was to improve some of the existing trading strategies, for example SMA Crossover. SMA Crossover stands for Simple Moving Average Crossover. The main idea behind this strategy is to follow two time series: a Short Moving Average series and a Long Moving Average series of the price. Let’s assume the short MA is MA26, which is just the mean of the last twenty-six price samples, and that the long MA is MA100.
As shown in the picture below, if the short MA crosses below the long MA, this is a signal to sell; vice versa, if the short MA crosses above the long MA, this is a signal to buy. We chose this strategy because it’s simple and interpretable, but our approach could be implemented with far more complex strategies. We compared one possible improvement of SMA Crossover, driven by outputs from our classification models, to the baseline strategy; on average, at a daily level and across several historical test periods, it beats the baseline with a 16.9% bigger return. Final remarks In a certain manner, we confirmed and quantified the influence of public sentiment on price. Although convenient, Twitter is neither the only nor the best source for the information we want to analyze, so adding more sources promises improvements in metrics. Furthermore, developing more sophisticated sentiment estimation, trying new algorithms and adding more complex trading strategies are our future steps. But the greatest challenge also lies in understanding what would be the most useful feature for people who want to rely on our signals. Is it just pure sentiment, predictions from our models, or the net return of a strategy? We will put some effort into figuring this out as well. If you have any ideas or want to express your personal opinion, feel free to share them with us. We are looking forward to that. Originally published at www.smartcat.io.
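For readers who want to experiment, here is a minimal pandas sketch of the plain SMA Crossover signal described above — the baseline only, without our classification-model enhancement (function and variable names are illustrative):

```python
# Baseline SMA Crossover signals in pandas (illustrative sketch).
import pandas as pd

def sma_crossover_signals(price, short_window=26, long_window=100):
    """Return BUY/SELL signals from a simple moving average crossover."""
    short_ma = price.rolling(short_window).mean()   # e.g. MA26
    long_ma = price.rolling(long_window).mean()     # e.g. MA100
    above = (short_ma > long_ma).astype(int)
    cross = above.diff()            # +1 where short MA crosses above, -1 where below
    signals = pd.Series(index=price.index, dtype=object)
    signals[cross == 1] = "BUY"     # short MA crossed above long MA
    signals[cross == -1] = "SELL"   # short MA crossed below long MA
    return signals.dropna()

# Usage: `price` would be a pd.Series of daily closes indexed by date,
# e.g. sma_crossover_signals(btc_daily_close)
```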
Bitcoin Trade Signals
7
bitcoin-trade-signals-1d9c25786bc4
2018-04-22
2018-04-22 18:36:13
https://medium.com/s/story/bitcoin-trade-signals-1d9c25786bc4
false
1,414
Stories about solutions we develop using combination of Data Science, Data Engineering and DevOps Expertise and problem we face in our day to day work with clients in SmartCat.
null
SmartCat.io
null
SmartCat.io
info@smartcat.io
smartcat-io
DATA SCIENCE,DATA ENGINEERING,DEVOPS,MACHINE LEARNING,CASSANDRA
SmartCat_io
Machine Learning
machine-learning
Machine Learning
51,320
Ljubica Vujovic
Math and coffee lover, passionate for data science and encouraging girls in STEM
8d1d73948d98
ljubica.vujovic
11
14
20,181,104
null
null
null
null
null
null
0
null
0
7f36af053870
2018-01-14
2018-01-14 03:24:02
2018-01-14
2018-01-14 16:49:07
4
false
en
2018-01-15
2018-01-15 23:54:09
0
1d9c3216faf7
5.684906
4
0
0
Using data and design thinking to improve the financial outlook of marginal farmers
5
Predicting Crop Yield and Profit with Machine Learning Using data and design thinking to improve the financial outlook of marginal farmers Over the last few months, a team of students at Carnegie Mellon University and I partnered with KONAM Foundation to research, design, and develop a tool that marginal farmers in India can use to predict crop yield and profit in order to better plan what crops to grow. Together, we were able to develop a data model and user interface that would help improve outcomes for farmers in Rayagada and Nayagarh in Odisha, and we’re looking to develop it further to prepare for a pilot in the near future. A farmer in Odisha tests an early version of the profit prediction app. Background In recent years, the stability of rural communities in the state of Odisha, India, has been shaken by economic and social forces related to higher suicide rates amongst small and marginal farmers. KONAM Foundation aims to offer assistance and tools to help these farmers and communities address these issues. Generally, this group faces challenges accessing and trusting educational outreach and training to better understand how to increase crop yields and improve financial standing. Because of the serious nature of the issues at stake and general hesitance to trust help from outside the community, any service or product meant to help must be carefully designed and tested in order to ensure positive outcomes and successful adoption. Focus While there are many ways to contribute to improvements in the lives of our target audience, our task was to leverage data to predict a valuable result so that farmers and aid workers would be able to make informed planning decisions. Ultimately, the focus of the work during this project was to both conduct audience research that would direct the design of the product and design a data model that would produce the desired results. Product Goal The tool that we developed during the course of this project is meant to deliver an actionable prediction, based on individuals’ crop and financial information, that allows them to achieve sustainable financial independence. For our purposes, financial independence means, in the short term, paying off existing loans; in the long term, financial self-sustainability and removing dependency on loans. The version of the product that we developed focuses on the profit prediction, but eventually, as more data is available and included as features in the data model, we envision the output of the model to be a plan or detailed recommendation set for farmers to optimize their crop selection based on individual factors such as location, farm size, and finances. Initial Scope In support of this larger goal, our tasks over this initial project phase were to: Conduct user research to assess needs, constraints, and product-market fit Develop a longer-term product vision roadmap and adoption strategy Define product design guidelines Define data features and clean data sets Construct a data model Build an end-to-end web app Research and Design Before any progress could be made in building a tool, we needed to understand our users and the context in which they’d be accessing our product. This would allow us both the benefit of creating a compelling and relevant tool for our audience as well as ensuring market fit with a region that most of the team was largely unfamiliar with. Key questions we sought to answer include: What are our users currently using to plan their crops and manage their finances?
What problems do they experience with their current process? What prevents them from adopting our service or product, if anything? A group of farmers in Odisha discuss an early version of the prototype. User survey We assembled a survey that would give us a better understanding of our users' lives and experiences. We asked questions that addressed crop planning, farm work distribution, finances, land ownership, and device ownership and usage. Over the course of roughly three weeks, we saw responses from 42 small farmers living and operating farms in Odisha. Findings The survey results revealed that the majority of our farmer users would be using feature phones, as opposed to smartphones. Additionally, Internet access is very limited, so even those with smartphones do not have reliable access to the web and instead stick to lower-bandwidth apps like WhatsApp. Our participant group tended to plan their crops based on past success or tradition, but if they did change their crops, it was because of influence from their neighbors. Generally, our participant group was well off, with little revolving debt and access to farming equipment, irrigation, and more educated relatives to help introduce new practices and bring products to the nearest market. This is helpful to know, but not representative of our more rural target audience. Implications While the initial concept of a web app for farmers to use directly is very attractive, in order to reach our target user base, our product will need to serve multiple use cases, including an offline mode and use by NGO staff as they visit or remotely meet with farmers to offer them guidance on planning their crops. A web app would work for literate NGO staff with access to smartphones, but not for farmer users until smartphones are more prevalent and internet access is more stable. Prototype and testing In parallel to the user survey, we developed an initial prototype of a web app that collects individual information such as location, farm size, intended crops, and loan and budget amounts. From there, that data is used to create a crop yield and profit prediction that is meant to be a part of a farmer's crop planning process. The profit prediction display. We first created an English prototype to align on the first design and then translated the prototype into Odiya for the field test. For our first round, we tested the prototype with six farmers and five NGO staff members. Overall the concept was very attractive, but financial literacy and the cost-benefit analysis associated with it are areas that the farmer audience doesn't yet generally grasp, which indicates that providing this tool as just one part of an NGO staff member's advice and guidance would be a successful route to product adoption. Additionally, NGO staff tend to have access to smartphones, which would allow them to use the web application. Iterations Subsequent versions of the prototype were made to include additional regions within Odisha and relevant crop options as our field test traveled to different sites with one of our key stakeholders. As more feedback came in around farmers' understanding of the written questions, we also updated the question screen layout to include room for an audio icon, and eventually, to include audio versions of each question to encourage independent usage of the web app amongst farmers, even if they're using an NGO staff member's smartphone. 
One of the information intake screens from the web application prototype Beyond the design of the application, we also were able to design and build a functional data model that generated crop yield and profit prediction based on individual farmer information and government collected data sets for climate, cost of production, and market pricing. We were also able to construct a view for NGO staff to use with multiple farmer clients. The use case for this would allow this product to serve staff as they consult and help multiple farmers in a community or pilot program. What’s Next We were able to apply a human centric design approach to defining a solution to the problems faced by our target audience, basing our models and our recommendations on the experiences we uncovered and the data we acquired during the course of our work. All of this initial scope supports a foundation for further development of this product and pilot program with farmers. The next areas that need to be developed more include broader data acquisition, model refinements, and pilot design and planning. Team Sarah Papp, Shreya Prakash, Svayam Mishra, Xinwen Liu, Zhuona Ma Mentors Afsaneh Doryab, Systems Scientist, HCII, Carnegie Mellon University; Anind Dey, Director, HCII, Carnegie Mellon University; Sandeep Konam, Executive Director, KONAM Foundation; Kanna Siripurapu, Project Coordinator, KONAM Foundation
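For technically minded readers, here is a minimal sketch of the profit calculation a tool like this might perform once a yield estimate is available. This is my own illustration, not the team's actual model: the function name, units, and figures are all hypothetical.

def predict_profit(predicted_yield_qtl, price_per_qtl, cost_of_production, loan_payment=0.0):
    # Profit = expected revenue minus production costs and any loan payment due,
    # mirroring the short-term goal of paying off existing loans described above.
    revenue = predicted_yield_qtl * price_per_qtl
    return revenue - cost_of_production - loan_payment

# Hypothetical example: 18 quintals at Rs 1,750 each, Rs 20,000 in costs, Rs 5,000 loan due
print(predict_profit(18, 1750, 20000, 5000))  # -> 6500

In the actual product, the yield estimate itself would come from the data model trained on the climate, cost-of-production, and market-pricing data sets mentioned above.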
Predicting Crop Yield and Profit with Machine Learning
18
predicting-crop-yield-and-profit-with-machine-learning-1d9c3216faf7
2018-03-23
2018-03-23 06:34:52
https://medium.com/s/story/predicting-crop-yield-and-profit-with-machine-learning-1d9c3216faf7
false
1,321
Hub of ideas, conversations and stories on building 'Key Solutions for Onerous and Massive challenges' | https://konamfoundation.org
null
konamfoundation
null
HUB | KONAM Foundation
director@konamfoundation.org
hub-konam-foundation
NONPROFIT,TECHNOLOGY,SOCIAL GOOD,STARTUP,INDIA
konamfoundation
Agriculture
agriculture
Agriculture
12,051
Sarah Papp
User experience designer and grad student @CMU
440c4b34a5a7
sarahpapp
5
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-25
2018-02-25 10:02:19
2018-02-22
2018-02-22 04:58:24
1
false
en
2018-02-25
2018-02-25 10:06:12
0
1d9c322f3fd
0.622642
0
0
0
I just bought a Google Home mini. I was fond of it. For several days, I enjoyed turning lights on and off, playing Spotify while showering…
5
10 Things I Wish Home Assistants Could Do I just bought a Google Home mini. I was fond of it. For several days, I enjoyed turning lights on and off, playing Spotify while showering, and watching Netflix by voice. After a while, I compiled a list of creative things I wish it could do: 1. Make coffee. 2. Crawl and read websites. 3. Read emails and messages. 4. Make and accept calls. 5. Suggest things, e.g. gifts to buy, things to do. 6. Do the dishes. 7. Browse and order online (products, food, services) 8. Give writing prompts 9. Give ideas. 10. Assist in decision making.
10 Things I Wish Home Assistants Could Do
0
10-things-i-wish-home-assistants-could-do-1d9c322f3fd
2018-02-25
2018-02-25 10:06:13
https://medium.com/s/story/10-things-i-wish-home-assistants-could-do-1d9c322f3fd
false
112
null
null
null
null
null
null
null
null
null
Life
life
Life
283,638
Creative Awesomeness
Release your inner awesomeness thru creativity and innovation.
9329bead68fa
dmhwall
94
39
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-08
2017-09-08 03:38:43
2017-09-08
2017-09-08 13:06:17
11
false
en
2018-01-05
2018-01-05 23:24:48
3
1d9cb827d36d
4.020755
2
0
0
I’m a sucker for data. I don’t particularly like the term datasexual but I do suffer from a particular disposition that Dominic Basulto…
4
My Newborn Just Went Through 784 Diapers. Here's what happened. I'm a sucker for data. I don't particularly like the term datasexual but I do suffer from a particular disposition that Dominic Basulto coined as such: The datasexual looks a lot like you and me, but what's different is their preoccupation with personal data. They are relentlessly digital, they obsessively record everything about their personal lives, and they think that data is sexy. Let's go to the data. We got some fantastic dimensions and measures for you today because what's sexier than dirty diapers? Well, breasts for one thing. Today I take you into the data-rich life of my 5-month-old. Not the cute videos and pics (she is mesmerizingly adorable). Rather, I'm talking about the day-in/day-out of nursing and diapers… the staple hour-to-hour tasks of every new parent. Mom producing more data! My 4th child was born in late April of this year. Her mom has indulged me in logging all of this data with a great app called BabyTime. We log the type of diaper change (wet, dirty, both!); we log bottles and pumping sessions (very rare for this baby) and mom dutifully logs every straight-up nursing session. I say "we," but in all honesty she does 90 percent of the legwork (or boob work!). The crew at BabyTime was kind enough to give me a .csv file of all of our data so I put together a rich set of Tableau charts to help you visualize the task of parenting a newborn. (you can browse the interactive charts here) Those are the raw numbers. 784 diapers and 1200+ nursing sessions. It almost seems impossible. At some point it becomes rote but there are fantastic stories of indescribable awfulness that we just laugh at now. Next, take a look at the breakdown of dirty, wet and dirty/wet diapers over a 3 month period. (click for a larger view) Here's the deal. All children are fickle at the outset. They don't really have a notion of night or day that they adhere to. Here you can see the feedings and diaper changes by hour of day. 9:00 AM is a key time, but the 7:00 PM hour is where things really start picking up as bedtime approaches. Baby and momma spent a month in July in Hawaii so I had to use a refining technique to adjust the hours for that month. For you Tableau users, try DATEPART("hour", date) - 3 to get to Hawaii time from PDT. Here's another chart showing the number of diapers and feedings by day of the week over the course of those 3 months. Friday is when everyone usually heads to grandma's house. Perhaps some extra feedings and changes are in order when other people are giving some great and needed attention. Here's a more detailed layout of the right boob/left boob debate. The spikes indicate the duration in minutes of each feeding. Mom is right-handed, which is why she might favor the left side for more feedings even though the right side nurses longer. (Upon proof-reading this, the other parental unit conveyed to me that the starboard breast produces more milk.) Here's a classical distribution chart showing the number of feedings at certain durations by breast. The 15 minute range is pretty typical. Here is another way to look at it by month with the left/right colors to boot. May, the first month of her existence, was basically a feeding fest ALL the time. God bless her mother… (she asked me to add that.) Now we start looking at the key time factors that impact mom and dad. I aggregated feedings and diaper changes which occurred from 11:00 PM to 6:00 AM. Zombie doesn't begin to describe what parents sometimes feel. 
Here’s another more pronounced view of the time loss factor. In May, her mother lost 133 hours of sleep which amounts to over 5.5 days. What a trooper! Bonus chart 11! Here you can see the cumulative cost of diapers mapped against the number of diapers per month. People make fun of me for collecting data on every iota of my life but it’s kind of cool to see it all laid out there. Also, it REALLY helps me understand why her parents are tired all the time.
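For non-Tableau readers, here is a rough pandas equivalent of the timezone adjustment described above. This is my own sketch, not the author's code; the sample timestamps are made up. Shifting the timestamp before extracting the hour avoids the negative values that subtracting 3 from the hour directly would produce after midnight.

import pandas as pd

# A hypothetical slice of the feeding log, recorded in PDT
log = pd.DataFrame({"date": pd.to_datetime(["2017-07-04 09:15", "2017-07-05 01:40"])})

# Shift to Hawaii time (3 hours behind PDT), then extract the hour of day
log["hour"] = (log["date"] - pd.Timedelta(hours=3)).dt.hour
print(log)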
My Newborn Just Went Through 784 Diapers. Here’s what happened.
10
my-newborn-just-went-through-784-diapers-heres-what-happened-1d9cb827d36d
2018-05-09
2018-05-09 07:28:31
https://medium.com/s/story/my-newborn-just-went-through-784-diapers-heres-what-happened-1d9cb827d36d
false
721
null
null
null
null
null
null
null
null
null
Parenting
parenting
Parenting
50,490
Justin Hart
CMO at large. I live at the intersection of AI, machine learning and marketing. It’s a busy corner! (I need to model that).
b71ed0a1bb03
justin_hart
2,724
642
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-27
2018-05-27 08:09:35
2018-05-30
2018-05-30 03:34:39
6
false
en
2018-05-30
2018-05-30 05:02:26
8
1da052f4dc03
5.610377
3
0
0
Loss is the penalty for a bad prediction.
4
Creating Model | Compute & Reduce Loss: Machine Learning Part 5 Loss is the penalty for a bad prediction. OK, this time we will look into a real-world problem statement & its solution. In this session we will focus on three things: 1. Creating a model. #Relationship between variables. 2. Computing the loss. #Errors 3. Reducing the loss. #Optimising our model. So let's begin!! Problem statement: Imagine we are living in the 18th century & want to build something called a Model to estimate temperature. We found that there is an insect, the cricket, which chirps a different number of times depending on atmospheric conditions. Cool!! That means we can build something around the total number of chirps & the possible temperature & figure out how hot it is outside. The following shows the relationship between the total number of chirps (say X) we collected & the possible temperature (say Y). Solution: This is a Linear Regression problem. We want to build a linear model (the blue line in the graph below) around this data so our model can predict an unknown Y value for a given X value, & that's the purpose of creating a model :) finding unknown values based on your model's training. Linear relation between Cricket Chirping & Temperature. Here we train our model on the given data set, covering the range from (0, 0) to (175, 35), & based on this, can we expect a result for (200, ?) Oh yes!! On the mathematical side this linear model can be written as y = mx + b, where the terms are: y: the final result (Label). x: the input parameter (Feature). b: the bias, a parameter the model adjusts over a number of iterations to get the best possible prediction for us. m: the slope of your model. In machine learning we usually use 'w' (weight) to denote this. It can be understood as slope = rise/run = change in (y) / change in (x) = (y2 - y1) / (x2 - x1). We discussed these basic terms (feature, label, model…) here in Part-1; please refresh them. OK, so far we have covered (1.) Creating a Model: we have a linear model that gives us the relationship between two variables. For a single feature (x1) we can write the equation like this: y = w1 x1 + b. A little more complex multi-variable linear equation might look like the following, where w represents the coefficients or weights our model will try to learn. Here we have multiple features (x1, x2, x3) & each has a separate weight (w1, w2, etc.): f(x1, x2, x3) = w1 x1 + w2 x2 + w3 x3. Let's jump to (2.) Computing the loss: the life cycle of creating a model. We create a model with a set of training examples, then we observe the prediction capability of the model on test examples & then we optimise the model: changes to the equation, changes to hyperparameters and so on. This whole process is called empirical risk minimization. Loss is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater (reference). So there can be scope for improving the model by observing test results, but how do we do that? We use loss-computing techniques. For a linear model we use squared loss: simply the square of (observation - prediction(x)). Isn't this what we usually do in daily life? We discover our mistake when our prediction diverges from the actual outcome. Mean squared error (MSE) is the average squared loss per example over the whole dataset. 
It can be written as follows, where N is the number of examples: MSE = Sum(square of (observation - prediction(x))) / N. So for a given linear equation you can compute the loss as follows. Linear equation: f(x1, x2) = w1 x1 + w2 x2. MSE = (square of (ya1 - yp1) + square of (ya2 - yp2)) / N. Check your understanding with the example given here. This is the loss function for a linear model; for other types of models (e.g. logistic) the loss-computing technique can be different. Cool, so we have covered Creating a Model (1.) & Computing the Loss (2.). Let's begin with How to reduce the loss (3.). This can be understood with the following graph. You evaluate the accuracy of your model by computing the loss. If the loss cannot be tolerated, modify your parameters & reiterate to optimise your model. How good would it be if we knew which parameters to update & in which direction (+/-)? Well, we have something called the Gradient Descent (#GD) algorithm: a very simple yet powerful & popular technique for reducing the loss. The gradient is nothing but the slope (the m/w we discussed above). One thing to understand is that the gradient is a vector, so it has both of the following characteristics: a direction and a magnitude. Let's understand how GD can help us reduce the loss with the following problem (ball and bowl), where we need to get a ball to the bottom of a bowl. A couple of things to consider in a real-life scenario: 1. The start point could be anything. Any random guess. 2. Direction. It matters a lot. In GD we rely on the negative gradient: we check the difference against our target result & take the next step accordingly. 3. The velocity of the ball. It should not be too high, or the ball will fly out of the bowl. 4. The number of iterations it takes to reach the bottom. This is where we can reduce computation. Check yourself with this example. Now we should be clear about what we want to achieve. What we really need to take care of is the step size: in GD it is the product (Learning Rate * Gradient). Here the Learning Rate (#LR) is a scalar used to train a model via gradient descent. During each iteration, the gradient descent algorithm multiplies the learning rate by the gradient; the resulting product is called the gradient step. The learning rate is a hyperparameter, & during the life cycle of model creation we may need to update it a number of times. (In the linear equation y = mx + b, by contrast, b is the bias, a parameter the model learns; the learning rate is something we set ourselves.) So far we understand how GD works. One more thing to learn: plain GD uses the total number of examples to calculate the gradient in a single iteration. That means if you have billions of records to check for loss, it will use all of them in one shot, which may cause extremely heavy computation, possibly with a lot of duplicate data. To fix this we have two other GD approaches, called: 1. Stochastic gradient descent (SGD): we use 1 example per iteration, like doing the ball-bowl example above manually. It is good in terms of removing duplication but it may take a longer time. 2. Mini-batch stochastic gradient descent (mini-batch SGD): to overcome SGD's drawback, instead of taking 1 example per iteration we divide the total number of samples into chunks, usually between 10 and 1,000 examples per iteration. …and that's it. That's all from this session. I hope you have got the basics of how we approach creating an optimised model by computing & reducing loss. You can play with the exercise available here. In the next session we will talk about Generalisation and Training, Validation & Test data. See you in the next one. Have a good time!
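As a concrete companion to the loss and gradient-descent walk-through above, here is a minimal Python/NumPy sketch (my own illustration, not code from this series; the toy data and learning rate are made up) that fits y = wx + b to chirp/temperature-style points by stepping both parameters against the gradient of the MSE.

import numpy as np

# Toy data in the spirit of the cricket example: x = chirps per minute, y = temperature (C)
x = np.array([44.0, 55.0, 70.0, 100.0, 130.0, 175.0])
y = np.array([10.0, 12.0, 16.0, 22.0, 28.0, 35.0])

w, b = 0.0, 0.0           # start point can be any random guess
learning_rate = 1e-5      # the hyperparameter scaling each gradient step

for step in range(20000):
    error = (w * x + b) - y            # prediction(x) - observation
    mse = np.mean(error ** 2)          # average squared loss over N examples
    grad_w = 2 * np.mean(error * x)    # d(MSE)/dw
    grad_b = 2 * np.mean(error)        # d(MSE)/db
    w -= learning_rate * grad_w        # step along the negative gradient
    b -= learning_rate * grad_b

print(round(w, 4), round(b, 4), round(mse, 4))

Swapping the full arrays for one randomly chosen example per step would turn this into SGD; using a random slice of, say, 32 examples per step gives mini-batch SGD.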
Creating Model | Compute & Reduce Loss: Machine Learning Part 5
46
creating-model-compute-reduce-loss-machine-learning-part-5-1da052f4dc03
2018-06-04
2018-06-04 13:43:26
https://medium.com/s/story/creating-model-compute-reduce-loss-machine-learning-part-5-1da052f4dc03
false
1,235
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shubham Patni
#HumanBeing #Programmer
2dee7eba92ab
shubhapatnim86
42
100
20,181,104
null
null
null
null
null
null
0
null
0
5cfda4ec81
2017-11-24
2017-11-24 17:56:19
2017-11-24
2017-11-24 18:43:10
1
false
en
2017-12-13
2017-12-13 01:13:39
4
1da06795b2e9
2.113208
3
0
0
My hope for the future of the job market
5
The Future of Jobs Photo by American Public Power Association on Unsplash I worked at a chemical manufacturing plant in Texas as an engineer for a couple of years right after school. Like any job that pays the bills but that you don't really like, you can easily convince yourself to stay. You say things like: 'It's not really that bad, I mean, we get free food a lot of the time.' 'The work isn't that challenging, but the pay is good.' Or things to your favorite co-worker like: 'Without you, I definitely would not be here.' But I know what I wanted to be saying was: 'I can't stop thinking about coming up with a good solution to this problem, damn.' 'I feel like I am making an impact at the job and understand the bigger picture.' 'I LOVE ALL MY CO-WORKERS.' When thinking about that problem, it is hard to place blame on anyone or anything in particular. Yeah, the employer should have done a better job navigating through applicants. Yeah, I should have done a better job of choosing my career. But it shouldn't be that hard. (And don't get me wrong, I am extremely grateful for the privileged position that I am in. That I can find a job when others may not.) I want a future where the next generation will likely choose a career they will be passionate about instead of having to fumble through jobs they think suck. Alain de Botton hit home when he predicted how companies will attract talent in the future. He talked about LinkedIn's idealistic, yet courageous, goals for their workplace. To provide the optimum match for employers and potential employees such that the career will be gratifying mentally and financially. As companies are growing and trying to retain top talent in these competitive times, the push to find a 'perfect' match is an ever-growing problem to solve. With this, I believe that companies need to be very honest with themselves on how potential employees see the company (i.e. branding and mission), and most importantly, how they will fit within the exact team you will be placing them in (i.e. values and personality). A lot of this falls onto the interview process and how efficiently the interviewer extracts the important information while balancing time constraints. Also, as employers become more advanced in their HR tactics, like using artificial intelligence to look through a search-space of potential applicants (though this could raise moral concerns), the employees, too, need to be equipped with more tools. Possibly a machine learning algorithm that takes your resume, aspirations, career path, and moral values as inputs and spits out career(s), people to reach out to, or ways to further your goals as outputs. Possibly programs that encourage little kids, teenagers, and adults to focus on what makes them happy and what they are good at. And to not focus on what society would like them to be. Either way, we need to spend less time in the wrong job day-dreaming about the right one.
The Future of Jobs
10
the-future-of-jobs-1da06795b2e9
2018-03-26
2018-03-26 04:25:36
https://medium.com/s/story/the-future-of-jobs-1da06795b2e9
false
507
The latest and greatest updates about the Future of Work, from the CodeControl crew.
null
codecontrol.io
null
The Future of Work
hello@codecontrol.io
future-of-work
FUTURE OF WORK,REMOTE WORKING,TECHNOLOGY,PRODUCTIVITY,FREELANCING
CodeControl_
Hiring
hiring
Hiring
16,840
Danilo Pena
Life | Random Thoughts | Data Science | Healthcare
72469f5805d8
danilopena
267
242
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-30
2018-04-30 22:29:33
2018-04-30
2018-04-30 22:40:43
0
false
en
2018-04-30
2018-04-30 22:40:43
0
1da1dbdf256
3.396226
1
0
0
no seriously, I developed a novel branch of mathematics and nobody cared until they did… and now I don’t anymore… lol…
5
The Math Is The Magic no seriously, I developed a novel branch of mathematics and nobody cared until they did… and now I don't anymore… lol… the cliffnotes version… the original video for Erebus & Matt :) I spent most of my time in my second undergrad completing double the academic requirements for a PhD in chemistry so that I could start my real PhD in chemistry with all of the credits required to graduate, so that I could sneak in postdoc-level courses and research in physics and mathematics. My first undergraduate degree is in computer science, and the truth is I'm a theoretical mathematician masquerading as a physicist, chemist, and machine learning expert… In undergrad I developed some novel algorithms for catalyst design based on point symmetry groups, which eventually turned into a class of theoretical catalysts that only ever saw the light of day on paper (minus one blue reaction that changed my view of synthetic organic chemistry forever), which then turned into my reaction prediction software package, which then turned into the Evil Weebles bot I built to enter the DARPA Cyber Grand Challenge in 2014, which then finally turned into a mathematics and machine learning consulting gig for a Fortune 500 company designing novel clustering algorithms, which eventually turned into me accepting that I'd fully developed a novel branch of mathematics… and nobody cared… or so I thought, lol… The truth is that I developed a novel branch of theoretical mathematics mostly for myself, so I was shocked to find that a.) other people hadn't thought of this before, and b.) it was actually more difficult to translate my math into classical mathematics (which works surprisingly well every time) than it was to just build the state machine, do the transformations, and utilize my math for my own shit… You can't patent math, and you can't explain it to others easily either without a professorship in theoretical mathematics and a team of mathematicians interested in peer review, so I've decided to spend the rest of what's left of my life playing around with my novel branch of mathematics and continuing to use synthetic organic chemistry (specifically novel reaction and catalyst design), machine learning (more specifically clustering algorithms and state machines), hair braiding, abstract art, etc., to play around with pseudo-chaotic systems (say collisions in a chemical reaction, or point group symmetry in a transition state of a metal catalyst), or self-retwisting locs in materials science/hair braiding, or paint splatter in abstract artwork… To learn more about my math you can read the info provided below, or check out my website from time to time: DarkArtProfessor.com… The Math: I developed a novel branch of mathematics that replaces variational calculus with respect to analyzing, calculating, and utilizing degrees of freedom (with respect to maintaining the dimensionality of dependent variables, independent variables, the relationships thereof, and the "meta-associations" thereof) in a dynamic system (such as that found in a "quantum logic gate"), which allows for the accurate prediction of the outcome of patterns introduced into a pseudo-chaotic system via identifying (in some cases creating) an initiation point (for use by a quantum logic gate) that generates (over a specified and controlled period of time) a discernible pattern that allows for maintaining the symmetry (with respect to degrees of freedom and the relationships of variables) in a dynamically changing, theoretically predictable 
(i.e. manipulable) quantum/chaotic system… More on this in a video, coming soon :) Variational calculus is limited in its ability to provide accurate predictions for the final position of a particle that travels through or otherwise interacts with a pseudo-chaotic system with varying degrees of freedom resulting from transformations (folding related to maximizing dimensionality whilst minimizing the variables required for calculations)… The truth of the matter is simply that this information cannot be lost. There must then be a way to translate the surface area, degrees of freedom, and interconnected meta and actual relationships of independent and dependent variables that are required to "collide" in a system while it steps through its various theoretically possible energy states… So, if the system is closed with respect to energy, then my novel branch of mathematics works cleanly. If the system is not closed with respect to energy, then my novel branch of mathematics requires another technique, which I developed to bring quantum algorithms into the classical world (I call that technique a "quantum logic gate", which is a slight variation of the typical use of the term). A quantum logic gate requires three switchable and stackable decision states, which interact with a neutral logic gate (i.e. logic switch; sometimes binary, but it can be more complex, via a state machine) that sits in the center of the system and controls the position, acceleration, etc., of the variables that are interacting with the system… In other words, it controls the state of the system by controlling the "laws of physics" which govern the system… This brings us back to the video I made on the spherical representation of a "quantum logic gate" being used as a compression/data storage algorithm… It's really the same: the goal of my novel branch of mathematics is to maximize the degrees of freedom while eliminating information loss. Check back later to see how I apply/prove my novel branch of theoretical mathematics using machine learning, hair chemistry, abstract artwork, and a little bit of ethical hacking.
The Math Is The Magic
1
the-math-is-the-magic-1da1dbdf256
2018-05-01
2018-05-01 03:20:43
https://medium.com/s/story/the-math-is-the-magic-1da1dbdf256
false
900
null
null
null
null
null
null
null
null
null
Science
science
Science
49,946
Dark Art
"They say that I'm the nhillist, but I'm really just the realist, I see the truth reveal it, when they try to conceal it…" - The Nhillist (by me)
9e1c3d9b4c4f
MyDarkArtStudio
3
1
20,181,104
null
null
null
null
null
null
0
null
0
f772c66cd492
2018-06-15
2018-06-15 16:16:03
2018-06-15
2018-06-15 16:32:12
2
false
en
2018-06-16
2018-06-16 18:29:04
1
1da2527c58fa
3.405975
0
0
0
In this economy, customers cannot afford to have to wait. They have multiple needs that must be met quickly and efficiently, which means it…
5
Right Here, Right Now: "Making Empowering Moments a Reality" In this economy, customers cannot afford to have to wait. They have multiple needs that must be met quickly and efficiently, which means it is critical for them to have products that they can trust. Not only do customers want to feel confident in what they use, they also want those products to work in real time. Just ask Jonathan Ellis, the Chief Technology Officer and Co-Founder of DataStax. He knows what it means for things to be instantaneous, individualized, continuous, and global. That is why he and the experts at DataStax are equipped to deliver Enterprise-Ready features in a Right Now Economy. Their technology guarantees quality and customer satisfaction in the here and now. Tamara: Can you share a story that inspired you to get involved in AI? Jonathan: I was studying computer science in college when IBM's Deep Blue defeated Garry Kasparov. It's common knowledge that Deep Blue ran on massively parallel IBM RS/6000 servers, but most people don't know that what made it unique was that these servers were paired with custom chess microprocessors. This let Deep Blue analyze ten times more moves per second than could be achieved in software alone. In a way, this was an early (and expensive) echo of what we see now, with deep learning models today taking advantage of GPU, FPGA, and custom silicon acceleration. Tamara: Describe your company and the AI/predictive analytics/data analytics products/services you offer. Jonathan: DataStax provides DataStax Enterprise (DSE), a distributed cloud database that is at the heart of many applications in IoT, fraud detection, and personalization use cases. Two things make DSE uniquely well suited for modern real-time AI applications. First, it offers a unified platform including both Spark Streaming and Graph capabilities. This simplifies building complex machine-learning applications like Deloitte MissionGraph. Tamara: How do you see the AI/data analytics/predictive analysis industry evolving in the future? Jonathan: One of the big challenges today is updating models in real time in response to new information without doing a whole training cycle offline. I expect to see a lot of effort towards creating new training methods that offer reusable ways to accomplish this. Jonathan Ellis, Chief Technology Officer and Co-Founder of DataStax Tamara: How do you see your products/services evolving going forward? Jonathan: It's still too hard to build applications in general on a distributed database. This is a problem across the industry: everyone has a pretty good handle on the relational model at this point, but relational doesn't scale once you need to partition it across multiple machines. So, we need a way to make data models scalable and partitionable automatically or semi-automatically. Tamara: What is your favorite AI movie and why? Jonathan: I'm not much of a moviegoer, but my favorite AI book might be Isaac Asimov's I, Robot. Asimov's robots were intelligent humanoids, and this is the book where Isaac Asimov introduced his Three Laws of Robotics. Unfortunately, I understand that the 2004 movie did not actually have much in common with this. Tamara: What would be the funniest or most interesting story that occurred to you during your company's evolution? Jonathan: Neither my co-founder Matt Pfeil nor I had worked for an enterprise software company before, and we were completely naive about what that model entailed. 
We thought we could just let the world know that we were building the world's best distributed database for hybrid cloud and that would be sufficient. Fortunately, we had advisors, including our early investors, who clued us in to what building a sales force looked like. Tamara: What are the 3-5 things that most excite you about AI? Why? (industry specific) Jonathan: AI is to workers in the 21st century what automation was in the 20th, and while there are both positive and negative connotations there, I see the positive dominating. Yes, people lost their jobs when assembly lines replaced blacksmiths. But, for example, washing machines took something that took basically an entire day each week and turned it into almost an afterthought. Washing machines probably cost some jobs, but most people couldn't afford to hire a laundry service; they did it themselves. So, automation made their lives immensely better. And I think that's going to be the dominant effect as intelligent assistants mature: taking some of the drudgery out of knowledge work, making lives better a little at a time. Tamara: Over the next three years, name at least one thing that we can expect in the future related to AI? Jonathan: The skills gap for machine learning itself will narrow, but companies will still struggle in applying AI to business problems. I don't think we'll see many turnkey, AI-based solutions in the next three years; it will stay a Wild West of bespoke efforts at least that long.
Right Here, Right Now: “Making Empowering Moments a Reality”
0
right-here-right-now-making-empowering-moments-a-reality-1da2527c58fa
2018-06-16
2018-06-16 18:29:05
https://medium.com/s/story/right-here-right-now-making-empowering-moments-a-reality-1da2527c58fa
false
801
Leadership Lessons from Authorities in Business, Film, Sports and Tech. Authority Mag is devoted primarily to sharing interesting feature interviews of people who are authorities in their industry. We use interviews to draw out stories that are both empowering and actionable.
null
Authority-Magazine-2170294859857034
null
Authority Magazine
editor@authoritymag.co
authority-magazine
LEADERSHIP,CULTURE,WOMEN IN BUSINESS
AuthorityMgzine
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tamara Nall
CEO; Data analytics expert; Keynote speaker; Consultant; Founder of Nall-Edge (NE)
c674cd53bdfd
tamara.nall
13
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-12
2017-11-12 05:34:45
2017-11-12
2017-11-12 06:22:31
1
false
ja
2017-11-12
2017-11-12 06:57:01
2
1da3cc8b23b7
2.818
0
0
0
Recently, the term HR Tech, which combines HR and technology, has been generating buzz. More and more companies are also applying AI in the HR field.
2
Can AI Predict a Candidate's Potential to Succeed? Recently, the term HR Tech, which combines HR and technology, has been generating buzz, and a growing number of companies are applying AI in the HR field. Why SoftBank Entrusted New-Graduate "Entry Sheet Screening" to AI (1/2) SoftBank has started using AI in new-graduate hiring and is said to leave the pass/fail judgment on entry sheets to AI. We asked a hiring manager about the effects and the aim. (1/2) www.itmedia.co.jp There is no doubt that HR Tech will change HR operations, but I am always thinking about how to engage with AI, particularly in recruiting and staff placement. Deep learning is extremely powerful for image recognition and natural language analysis. Self-driving cars and AI speakers in particular have effectively unlimited training data, so I expect them to spread rapidly. In the HR domain, however, we are only just starting to learn whether machine learning is even effective. Deep learning can't be used in HR So why can't deep learning be used in the HR domain? Because deep learning becomes a black box, with the danger that the process behind the answers it produces is completely invisible. What Is Deep Learning? A Non-Engineer Tries to Explain. | SiTest Blog This time I would like to explain deep learning, the hottest topic right now, from a non-engineer's perspective. Deep learning can be called the driving force behind the current AI boom (calling it a passing "boom" may no longer be fitting)… sitest.jp Machine learning, by contrast, makes the process behind an answer easier to see, which makes the results more convincing. Recruit has developed and deployed an AI that, after candidates are recommended to companies, learns from the pass/fail results of document screening and can identify the preferences of companies and hiring managers. This reportedly made it possible to introduce candidates more efficiently. However, even if a candidate who matches the hiring manager's preferences is recommended and actually joins the company, there is no telling whether that person will thrive. I believe that people have unlimited potential, and that it is impossible to predict a new hire's future from the historical data a company has accumulated. Changes in environment and "chance encounters," such as the bosses one happens to meet, can dramatically change a person's values and style of behavior. For that reason, I think that using AI to predict a candidate's potential to succeed before they join is, at this stage, very difficult, indeed impossible. Using AI to make operations more efficient, as SoftBank and Recruit do, is an effective measure. For small and medium-sized companies with little data, however, applying AI to HR is difficult. Technology changes at a tremendous pace, and new products are released every day. HR Tech players like us should not be distracted by buzzwords like AI; we want to start by thinking about what we can do to solve our customers' problems, and keep moving forward day by day.
Can AI Predict a Candidate's Potential to Succeed?
0
aiで候補者の活躍可能性は予測できるのか-1da3cc8b23b7
2018-05-14
2018-05-14 06:52:28
https://medium.com/s/story/aiで候補者の活躍可能性は予測できるのか-1da3cc8b23b7
false
34
null
null
null
null
null
null
null
null
null
Hrtech
hrtech
Hrtech
1,708
Kunio Yamada
株式会社Meta Anchor 代表取締役(https://meta-anchor.com/)
2d5277b1684b
kunioyamada
31
35
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-27
2018-03-27 05:40:47
2018-03-27
2018-03-27 07:09:11
3
false
en
2018-03-27
2018-03-27 07:09:11
0
1da530b8a708
2.217925
1
0
0
Deep learning is an exciting branch of machine learning that uses lots of data to teach computers how to do things only humans were…
1
Activation functions in Deep learning Deep learning is an exciting branch of machine learning that uses lots of data to teach computers how to do things only humans were capable of before, such as recognizing what's in an image, what people are saying when they are talking on their phones, translating a document into another language, and helping robots explore the world and interact with it. Each layer in a deep neural network is composed of nodes, which combine input data with a set of weights. The weighted sum of the inputs is passed through an activation function, which determines to what extent the value should progress through the network to affect the final prediction. Types of activation functions Sigmoid activation The sigmoid activation function used in neural networks has an output bounded in (0, 1), and α is an offset parameter that sets the value at which the sigmoid evaluates to 0. The sigmoid function often works fine for gradient descent as long as the input data x is kept within a limit. For large values of x, y is constant, hence the derivative dy/dx (the gradient) equates to 0, which is often termed the vanishing gradient problem. This is a problem because when the gradient is 0, multiplying it with the loss (actual value - predicted value) also gives us 0, and ultimately the network stops learning. Rectified Linear Unit (ReLU) A neural network can be built by combining some linear classifiers with some non-linear functions. The Rectified Linear Unit (ReLU) has become very popular in the last few years. It computes the function f(x) = max(0, x). In other words, the activation is simply thresholded at zero. Unfortunately, ReLU units can be fragile during training and can die: a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again, and so the gradient flowing through the unit will forever be zero from that point on. To overcome this problem, a leaky ReLU function has a small negative slope (of 0.01, or so) instead of zero when x < 0. Exponential Linear Unit (ELU) The mean of the ReLU activation is not zero, which sometimes makes learning difficult for the network. The Exponential Linear Unit (ELU) is similar to the ReLU activation function when the input x is positive, but for negative values it is bounded by the fixed value -1, for α = 1 (the hyperparameter α controls the value to which an ELU saturates for negative inputs). This behavior helps to push the mean activation of neurons closer to zero, which helps the network learn representations that are more robust to noise.
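The four activations described above are easy to write down directly. Here is a minimal NumPy sketch (my own illustration, not code from the original post); the α values mirror the defaults mentioned in the text.

import numpy as np

def sigmoid(x):
    # Squashes any real input into (0, 1); saturates (vanishing gradient) for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Thresholded at zero: f(x) = max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    # Small negative slope for x < 0 keeps gradients alive, avoiding "dead" units
    return np.where(x > 0, x, slope * x)

def elu(x, alpha=1.0):
    # Identity for x > 0; smoothly saturates toward -alpha for very negative x
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

z = np.array([-3.0, -0.5, 0.0, 2.0])
print(sigmoid(z), relu(z), leaky_relu(z), elu(z))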
Activation functions in Deep learning
1
activation-functions-in-deep-learning-1da530b8a708
2018-03-27
2018-03-27 07:09:12
https://medium.com/s/story/activation-functions-in-deep-learning-1da530b8a708
false
442
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
prashanth k
Software Engineer @KiSSFLOW , Traveller, amateur photographer
cf453ca57a66
prashanth9962
18
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-02
2018-03-02 03:39:26
2018-03-02
2018-03-02 03:48:59
1
false
en
2018-03-02
2018-03-02 03:51:42
0
1da5c32be567
2.29434
10
0
0
Bedroom
5
March 5th, 2048 — Morning Bedroom It’s 6:30am on March 5th, 2048. You wake up on this, another workday. The bed kept track of your sleep, recording every movement and carefully monitoring sleep states, breathing, heartbeat and other vitals. Knowing that you have back problems, the bed adjusted its contouring accordingly, targeting just the right muscles. You feel good and well rested. Your bed also coordinated the correct room temperature to make it exactly right for your preference. Bathroom You get up and step into the bathroom. The lights turn on automatically, light switches were phased out years ago. Since you’re a bit older, you know that every room still has a small hidden manual light switch within the wall but kids nowadays aren’t even aware of its existence. You turn on the faucet and the temperature is just right though it’s a cold and dreary day in Silicon Valley. You step into the shower, the water is just the right temperature. No need to shampoo or use soap, the shower takes care of everything, you just step in, shower and then experience the pleasant drying mode. Unbelievable that people used hair dryers in the past! Nowadays, human hairdressers are said to offer vintage bespoke haircuts by manually cutting hair. Hair styling and cutting are entirely automated. The shower dries and styles your hair in minutes to that perfect look. Wardrobe Back in the bedroom the closet doors quietly slide open after your hand wave. A coordinated outfit is presented. You don’t shop for clothes like you used to — you simply approve suggested outfits and tap to purchase. Coordinated outfits show up in your closet. The system detects old clothes that don’t get used and after your approval, auto recycles. You never see a thing that happens behind the scenes. Kitchen Breakfast time! You come downstairs and step into the kitchen. The rooms light up automatically as you pass through the house, it’s still a bit dark. The lit kitchen is suggesting a litany of things you could have for breakfast on the frove door. Frove is an appliance introduced a decade ago. It’s a combination of fridge, stove and oven. Suggested items are personalized based on your age, genetic makeup and health conditions. The kitchen is softly playing bossa nova. It knows you like to start the day on a happy note. With a few taps on a screen of options you make your breakfast choices, you’re not ready to speak yet. If you wish, you can also tell your kitchen what you want. In minutes the kitchen produces a breakfast that drops in from a designated cubby. Grocery shopping no longer exists. With a subscription service everything is shipped directly to your kitchen. You never see any boxes or dispose of trash. Kitchen replenishment is seamless and invisible. It just happens. You think back to old times when you had to waste time grocery shopping. Kids are awake now. They go through their own motions and tumble downstairs into the kitchen. Advent of another Workday You’re checking your devices now and see the flurry of early morning work activity. You kiss the kids goodbye as your front door opens. Their self driving car is here to take them to school. Yours is arriving in five minutes to usher you to the office.
March 5th, 2048 — Morning
114
march-5th-2048-morning-1da5c32be567
2018-06-18
2018-06-18 16:38:58
https://medium.com/s/story/march-5th-2048-morning-1da5c32be567
false
555
null
null
null
null
null
null
null
null
null
Short Story
short-story
Short Story
94,626
Natalia Burina
Entrepreneur in Residence @foundationCap ex Director of Product @salesforce Previously Co-Founder @getparable Group Product Manager @eBay & PM @microsoft
6450f72fc7a2
nale
881
913
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-17
2018-02-17 16:47:53
2018-02-17
2018-02-17 17:08:39
9
false
en
2018-02-17
2018-02-17 17:08:39
0
1da6c631584b
3.026415
1
0
0
In this post I will explain the principles of support vector machines (SVM). It may be a bit simplified as SVM is a complex topic with some…
4
Some easy R examples for support vector machines In this post I will explain the principles of support vector machines (SVM). It may be a bit simplified, as SVM is a complex topic with some theory behind it. There is some hype nowadays about SVM, as there is with many concepts of machine learning. Basically SVM is a binary classifier, i.e. it sorts data points into two buckets. We can extend SVM to more buckets by running the algorithm several times on one-against-one comparisons and choosing the best fit. Nothing exciting so far. What is remarkable is how SVMs handle non-linearity, i.e. by projecting into a higher-dimensional space where the classification problem is actually linear. So, how does SVM do the classification? Basically it identifies the hyperplane, i.e. a plane one dimension below the data space, that separates the two classes best. This means maximizing the distance to the points close to the separation border (that is where the name support vectors comes from). Now this works well for linearly separable classes, but this prerequisite is not very realistic. To overcome this problem the data space is transformed via some kernel function to a higher-dimensional space where linear separability is possible. Enough theory, let the games begin. We will use the R package e1071, which is an interface to libsvm, a C++ implementation of SVM. Let's create a 3-dim sample data set, i.e. two numeric dimensions, the third one consisting of the two categories we want to classify. Here is the R code:

############################################
# Support vector machines: Examples
############################################
library(e1071)
library(rpart)
n = 1000
testSize <- 0.33
# create data set
set.seed(1)
df <- data.frame(x=runif(n,-3,3), y=runif(n,-3,3))
# Example 1a: linear split in 1 dimension
df$Class <- as.factor(ifelse(df$x>1, "red", "blue"))
# Example 1b: linear split in 2 dimensions
df$Class <- as.factor(ifelse(df$x+df$y>1, "red", "blue"))
# Example 1c: polynomial split in 2 dimensions
df$Class <- as.factor(ifelse(df$x^2+df$y^2>1, "red", "blue"))
# split data into a train and test set
index <- 1:nrow(df)
testIndex <- sample(index, trunc(n*testSize))
testSet <- df[testIndex,]
trainSet <- df[-testIndex,]
plot(df$x, df$y, col=as.character(df$Class))
# svm (fixed: the original indexed a non-existent 10th column when predicting)
svm.model <- svm(Class ~ x+y, data = trainSet, cost = 100, gamma = 1)
svm.pred <- predict(svm.model, testSet)
plot(testSet$x, testSet$y, col=as.character(svm.pred))
# compute svm confusion matrix and accuracy
table(pred = svm.pred, true = testSet$Class)
sum(svm.pred==testSet$Class)/nrow(testSet)

Let's look at the different examples: Example 1a: linear split in 1 dimension df$Class <- as.factor(ifelse(df$x>1, "red", "blue")) Example 1a: The created two-dimensional dataset with classes blue and red Example 1a: Here is the classified test set We see that the SVM is very good at this simple classification; the accuracy is ~99%. Example 1b: linear split in 2 dimensions df$Class <- as.factor(ifelse(df$x+df$y>1, "red", "blue")) Example 1b: original dataset Example 1b: The predicted test set Example 1c: non-linear split in 2 dimensions (circle) df$Class <- as.factor(ifelse(df$x^2+df$y^2>1, "red", "blue")) Example 1c: The original dataset Example 1c: The classified test set Example 2: Now let's get more complex and define a surface that splits the two groups:

df$z <- 3*df$x^3 - 2*df$y^2 - 1
df$Class <- as.factor(ifelse(df$z>0, "red", "blue"))

In 3D the original dataset looks like this. Just looking at x, y and class in 2D. And the classified test set. Here is the R code for the 3D plot:

library(plot3D)
scatter3D(x = df$x, y = df$y, z = df$z, phi=20, theta=20, bty="b2")

So this gives a little impression of what SVMs are capable of. I hope to provide a more realistic setting soon.
Some easy R examples for support vector machines
1
some-easy-r-examples-for-support-vector-maschines-1da6c631584b
2018-02-18
2018-02-18 17:00:32
https://medium.com/s/story/some-easy-r-examples-for-support-vector-maschines-1da6c631584b
false
484
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Holger Aust
null
30b2dd57b071
databraineo
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-04
2018-03-04 01:53:08
2018-03-16
2018-03-16 23:15:15
4
false
en
2018-07-08
2018-07-08 15:05:06
5
1da71c314816
3.59434
2
0
0
As for any bootstrapping startup our major problem is low coverage and our major short-term goal is to gain traction. So we submitted our…
5
BetaList review: story of one B2B SaaS startup with numbers and pies As for any bootstrapping startup, our major problem is low coverage and our major short-term goal is to gain traction. So we submitted our startup to BetaList and were featured there in February — https://betalist.com/startups/demografy It was a long-awaited result, and it was worth the wait. Now we would like to share our experience with others. We hope it will be a useful piece of content, since we decided to provide detailed data with numbers and pies. Besides that, Demografy is a B2B startup, a relatively rare type of startup listed on BetaList. So we hope our experience will help other B2B startups. We are going to start with the results and then will describe the listing process. Results First of all, we deem the results very good and the overall BetaList campaign a success. Three major aspects of success are: We received a decent number of sign-ups. We got 40 sign-ups from February 18 till March 1 (and a few more after). Considering that we're a B2B startup, this is a good number. Quality audience. Among those 40 sign-ups we have a large share of marketing agencies and other B2B businesses. This segment of clients is especially valuable to our startup. Besides that, there are many decision makers among them. Last but not least, we have a high conversion rate of about 35%. On the one hand, that's because our product and pre-launch page performed well. On the other hand, BetaList delivered us a very targeted audience. For those who are interested in traffic numbers and dynamics: Listing process First of all, we built a small pre-launch page with a custom back-end for tracking leads. This is also a part of our SaaS platform. We added Google Analytics. We prepared the required media kit for submitting our startup to BetaList. The submission process is straightforward so we're not going to describe it here in detail. After submission you need to choose either a free or a paid package. Paid packages start from $129 and offer expedited review and publishing, while the free package generally takes several weeks to review and publish a startup after submission. Important note: BetaList has selection criteria for both packages. So your startup should qualify in the first place. Since we are a B2B startup we had doubts about success on BetaList. The audience of BetaList is more suited to B2C apps and services. So we decided to proceed with the free option. In our case the waiting period was 12 weeks, which is pretty long. During this period we changed and tuned our pre-launch page significantly. After publishing a startup, BetaList promotes it via three channels: BetaList website. They state that the website gets 70,000+ pageviews per week. There is also a "trending" section on the homepage that generates more hits. Though we didn't end up in this section, this channel was still the biggest source of leads in our case. BetaList Twitter account. We got 161 retweets here, as many other startups do. While most retweeters are definitely not our target audience, it helped to spread the word and reach our potential clients. This channel generated just 14% of our leads. However, we were published in some non-major media and tech aggregators because of the BetaList retweet. And some non-BetaList leads probably came from these sources. So don't underestimate this channel. Email newsletter. After a day or so BetaList also includes your startup in its email newsletter. In our case the newsletter generated 32% of our leads. 
Below is a distribution of our BetaList leads among these channels: Conclusion As we said, we had doubts about the viability of BetaList for B2B startups. So we didn't have big expectations for our listing. However, BetaList has proven that it's a viable option not only for B2C apps/services but for enterprise solutions as well. So considering the results we got, we would definitely choose the paid option, which was $129 at the time of submission. It was definitely worth it in our case. We didn't receive hundreds or thousands of visits like some startups. But we attribute this mostly to the fact that we're a B2B startup, not some free app. B2B startups generally receive a lower number of hits from sites like BetaList because they're interesting mainly to businesses. And businesses are a smaller segment of the audience of such sites. On the other hand, 1 in 3 visitors converted, with almost half of them from the bigger-clients segment, and most of them are decision makers. Follow us on social networks to get updates: Twitter Facebook Medium
BetaList review: story of one B2B SaaS startup with numbers and pies
94
betalist-review-story-of-one-b2b-saas-startup-with-numbers-and-pies-1da71c314816
2018-07-08
2018-07-08 15:05:06
https://medium.com/s/story/betalist-review-story-of-one-b2b-saas-startup-with-numbers-and-pies-1da71c314816
false
767
null
null
null
null
null
null
null
null
null
Startup
startup
Startup
331,914
Demografy
First privacy-enabled platform that predicts customer demographics using AI - www.demografy.com
3441f6119822
demografy
13
10
20,181,104
null
null
null
null
null
null
0
library(nflscrapR)

# Pull play-by-play data for the given seasons (via nflscrapR) and save it to
# CSV files for loading into a Python workspace later.
season_to_csv <- function(years, playoffs) {
  for (season in years) {
    if (playoffs) {
      print(paste('Extracting Playoff Games from', season, 'season...'))
      po <- extracting_gameids(season, TRUE)
      count <- 1
      for (game in po) {
        print(paste('Game', count, 'of', length(po)))
        # paste0 avoids the stray spaces paste() inserts into the filename;
        # including the game id keeps each playoff game from overwriting the last
        write.csv(game_play_by_play(game), paste0('po_pbp_', season, '_', game, '.csv'))
        count <- count + 1
      }
    } else {
      print(paste('Extracting Season Games from', season, 'season...'))
      rs <- season_play_by_play(season, playoffs)
      write.csv(rs, paste0('rs_pbp_', season, '.csv'))
    }
  }
}

# Example usage: all regular seasons from 2009 through 2016
# season_to_csv(2009:2016, FALSE)
2
null
2017-09-24
2017-09-24 22:18:10
2017-10-02
2017-10-02 12:42:02
2
false
en
2017-11-03
2017-11-03 04:08:47
7
1daa1901c0a3
8.194654
1
0
0
How to wrangle NFL play-by-play data into a predictive machine.
4
NFL Data and Machine Learning… One Play at a Time — Part 2 How to wrangle NFL play-by-play data into a predictive machine. In this post we get more technical about our data, make a basic model to choose the right play given a specific game scenario, and showcase a web app developed to run the model whenever, wherever you want. Part one investigated our data-set provided by the NFL APIs since the 2009 season and explored it in different ways. We looked at a few different visualizations of the game and dug into how a rule change back in 2015 could be detected in the data. Part three will take this model one step further and incorporate some more advanced methods to indicate which actions on a play lead to positive or negative outcomes. The Data Our data comes from the NFL via their publicly available API with detailed stats on every play since the 2009 season. We will use Python and a tiny bit of R to analyze, build, and train our model with such packages as pandas, numpy, and scikit-learn. To help wrangle the very dense and messy data that comes from the NFL there are two great projects that have done a ton of work to tidy everything up. One is for python, nflgame, and the other is for R, nflscrapR. The python package puts a stronger emphasis on answering player and team stat questions like “Which 5 players had the most rushing yards for week 1 of the 2017 season?” The R package can answer the same question but it’s more Data Frame focused so to get the same answer a bit more code is required. For our purposes the play-by-play Data Frames created by the R package perfectly suite our needs. We generate data frames for each NFL season’s regular and postseason data since 2009. Below is the R script I used to not only make these data frames but also save them as csv files for loading everything into a python workspace. Next we use pandas to pull these files into our python(3) workspace and generate one huge combined pandas data frame to work with. All said and done our data frame contains 329,373 plays with 81 columns of details about each one. Now we don’t need all 81 columns. Each play can turn out a number of ways and most of these columns are not the most relevant for each play such as what the game ID is, the date, the current drive number, etc. but in addition to that some of this data is actually dangerous to include in our model. More on that soon. Here is what we are using from the data to build our model. Quarter Integer of 1, 2, 3, or 4 indicating what quarter of the game it is. Time Remaining This is set to the current minute of the game ignoring the number of seconds remaining. Down 1, 2, 3, or 4. Yes, integer values. Yards to 1st Down Any integer from 1 to however terrible of a situation a team has gotten themselves into… bad things happen and this model is trying to help that =). Field Position From 1 to 100 yards away from scoring a touchdown. This means if your on your own 25, this number would show 75 yards to the goal. Score Difference This can be a positive or negative integer indicating if the team is up, down, or tied by some number of points. Play Type The final piece of our model. It indicates if the play was a pass, run, field goal, punt, or QB kneel. This is what our model will be trying to predict given all the other inputs we’ve described. Now after reading this some you may be saying: Why did you pick these pieces of information to build the model? Why ignore all that other information in the data?! Why aren’t you incorporating “_____”? Here’s why. 
When it's game time and you are either playing or watching it, what key pieces of information do you have about a play before it starts? The announcer typically says something like this, "It's 1st and 10 at the 20 yard line. Dallas is down by 3 with 2 minutes to go in the game." If you parse that information down into the key points you'll notice all the inputs described above are in there. That is the basic set of information you get each and every play. The key thing we are doing with our model is seeing how we can take over a quarter million of these game scenarios and have it predict what should happen next. Let's explore the different play types we will be trying to get our model to predict. The Choices Each play has five basic options to pick from: Pass Run Punt Field Goal Kneel A typical play has these five options that the offense can use to move the game along in their favor given the current situation. There is also much more data available for these options than for other edge cases, which makes predicting when and how to use them more reliable and less complicated. Let's see the historical breakdown of these five different play options. Not too surprising that passing and running the ball dominate the decisions made. These are a team's go-to ways to move the ball down the field, but both have many game time factors that go into which option to choose and how to execute them successfully. Next up is punting the ball (aka a failed drive). Teams pick this option because they aren't in field goal range and it's 4th down with too many yards to go to risk turning the ball over to the opposing team at an unfavorable field position. So why include this decision in our model? We want to see if it can determine 4th down situations where going for it is worth the risk. Knowing when you have the best chance at a successful 4th down attempt can make this decision easier and potentially get your failed drive back on track. Field Goals are pretty straightforward given certain game scenarios, but just like with punting, the decision to kick vs pass or run could be counter-intuitive. We should be able to determine for each yard what your chances for a successful kick are and analyze this choice given the game scenario. The QB Kneel is typically used as a delay tactic towards the end of either half to safeguard the ball when a team is winning and doesn't want to risk a turnover. The QB Kneel very rarely happens outside of this specific game situation, so our model should be able to recognize this; we use it as a sanity check to ensure it's not telling us to kneel when we are losing and 5 yards from scoring a comeback touchdown. Building the model If you notice, nowhere are we saying that the model will indicate if a play is a good or bad one, just that this is the most likely one given the inputs provided. Why? Well there's the next post for that! But really, here we assume each play's outcome was decided by a team that had an interest in winning the game. A team won't make a play to intentionally turn the ball over. Heck, the QB Kneel, where a team would rather do absolutely nothing than lose the ball, is a testament to that. This model combines all that experience and smart NFL decision making together. So while the model isn't specifically searching for the positive or negative outcomes of a play, we will assume that it's built into our data. Each Play Type can be considered a different "class" which labels the set of inputs associated with it as promoting that outcome. 
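To make all of this concrete, here is a minimal sketch (my own illustration under assumptions, not the project's exact code) of the whole pipeline: loading the per-season CSVs produced by the R script, assembling the six inputs above, and training the kind of stratified, grid-searched random forest described next. The column names (qtr, down, ydstogo, yrdline100, TimeSecs, ScoreDiff, PlayType) and the file pattern are assumptions about the nflscrapR output and may need adjusting to your local copy of the data.

```python
import glob

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

# Combine every season's regular and postseason play-by-play CSV.
# (Hypothetical file pattern; point it at wherever the R script wrote them.)
frames = [pd.read_csv(path) for path in glob.glob("data/pbp_*.csv")]
plays = pd.concat(frames, ignore_index=True)

# Keep only the five play types the model chooses between, and drop rows
# missing any of the six inputs described above.
features = ["qtr", "down", "ydstogo", "yrdline100", "TimeSecs", "ScoreDiff"]
keep = ["Pass", "Run", "Punt", "Field Goal", "QB Kneel"]
plays = plays[plays["PlayType"].isin(keep)].dropna(subset=features)

# Time remaining as the current minute of the game, ignoring seconds.
plays["minute"] = (plays["TimeSecs"] // 60).astype(int)
X = plays[["qtr", "minute", "down", "ydstogo", "yrdline100", "ScoreDiff"]]
y = plays["PlayType"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Grid search over a few random forest hyperparameters, explicitly using
# stratified folds so rare classes like QB Kneel show up in every fold
# in the same proportion as in the full training set.
grid = GridSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [8, 12, None]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    n_jobs=-1,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))

# Probabilities for one scenario: 4th quarter, 2 minutes left, 1st and 10
# on your own 20 (80 yards to the goal), down by 3.
scenario = pd.DataFrame([[4, 2, 1, 10, 80, -3]], columns=X.columns)
print(dict(zip(grid.best_estimator_.classes_, grid.predict_proba(scenario)[0])))
```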
We can use a random forest multi-class classification method to build a model that can take a game scenario like we described and spit out probabilities of what Play Type it most likely should be, given the quarter million other game scenarios that have happened in the past. There are a few key points about how our data is structured and the kind of training we will be doing that we need to discuss. First, our data is quite imbalanced. We have far more passing plays than QB kneels. We need to ensure our model won't be biased toward predicting classes that we have a plethora of data for. We can accomplish this using a cross validation method called Stratified K-Folding. All this means is that when we train our model we will not only break the training data down into K groups, we will ensure that each group preserves the same proportion of each class as the full training set, so rare classes are represented in every fold. Using any version of K-fold cross validation is designed to reduce our chances of over-fitting our training data and ensuring our model is as generalized as possible, so when we look at new data, say the 2017 season, it will hopefully perform as well as we have seen in our training. Now scikit-learn is pretty smart in this regard since for classification problems it will automatically use a stratified k-fold vs a regular one. In this model we will be explicit about telling scikit-learn to use a stratified version. We also need to tune our model for the different hyperparameters used in a random forest. Specifically we will use a technique called grid search to run our model for various combinations of hyperparameters, determine which combination performed the best, and use those parameters in our final model. As we can see from above the model does a great job of classifying punts, field goals, and QB Kneels. However, those two curved lines are pass and run predictions. Our current model tends to confuse these two, it seems. It is still quite effective at predicting them, just not as effective as the other parts of our model. This is worth a follow up with more factors, deeper investigation, or new metrics to build the model against to try and discern even more when a play will be a pass or a run. So what's the final verdict on this fancy magic forest of predictive power? About 70% accuracy. (Mainly due to the confusion between passing and running). I will say though I am very impressed the model could discern the less common play types so well, as these situations have much less data. This just means what is available, compared to what we have for passing and running, makes these plays stand out to the model, and it developed strong decision rules around those game scenarios. If you want to see this model in action you can head over to Monday Morning Quarterback (currently just an Amazon EC2 server hosting this web app; once I feel it's got all the tweaks done I'll push everything over to a more dedicated page). The site is designed to let you come up with a game scenario, put in a guess for what you think the play is going to be, and it will compute the probabilities for each so you can see how right or wrong you were when compared to this model built on hundreds of thousands of different game scenarios. Hope you enjoyed reading! The next post will take this model and try to morph it to answer the question of "How good is the decision to run this play type for this game scenario?". 
We will discuss building a metric to help classify good from bad plays and some challenges that currently face scientists who try and build various kinds of models like this. The code for this blog post series is available on my GitHub under Monday Morning Quarterback. Thanks for reading! Feel free to reach out to me via the links below. LinkedIn | Twitter | GitHub | Website
NFL Data and Machine Learning… One Play at a Time — Part 2
5
nfl-data-and-machine-learning-one-play-at-a-time-part-2-1daa1901c0a3
2018-02-08
2018-02-08 19:06:24
https://medium.com/s/story/nfl-data-and-machine-learning-one-play-at-a-time-part-2-1daa1901c0a3
false
2,070
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Michael Skrzypiec
Data Geek
3a94ec56a037
skrzym
20
62
20,181,104
null
null
null
null
null
null
0
null
0
f0db56adb08d
2018-06-19
2018-06-19 11:11:54
2018-06-19
2018-06-19 11:27:02
3
false
en
2018-06-19
2018-06-19 11:27:02
7
1daa8a481c5a
4.172642
13
0
1
This paper proposes a novel method for conducting multimodal sentiment classification from user-generated videos. Multimodal methods…
5
State of the art Multimodal Sentiment Classification in Videos This paper proposes a novel method for conducting multimodal sentiment classification from user-generated videos. Multimodal methods combine various modes of information such as audio, video, and text. The framework is mainly based on a long short-term memory (LSTM) model that enables utterances (units of speech bound by breaths or pauses) to capture contextual information. What is Sentiment Analysis? A sentiment analysis task involves many NLP sub-tasks and most commonly aims to detect polarity (positive/negative sentiment) in text. Emotion recognition is a derivative task in which the aim is to predict fine-grained emotions (e.g., fear and joy). Why Multimodal information? By combining vocal modulations and facial expressions with textual information, it is possible to enrich the feature learning process to better understand the affective states of opinion holders. In other words, there could be other behavioral cues in vocal and visual modalities that could be leveraged. Contributions The proposed framework considers the order, inter-dependencies, and relations that exist among utterances in a video, where others treat them independently. In other words, surrounding context should help to better classify the sentiment conveyed by utterances. In addition, audio, visual, and textual information are combined to tackle both sentiment and emotion recognition tasks. Example Consider the following utterance found in a review: "The Green Hornet did something similar". Without any context, we may perceive this utterance as conveying negative sentiment. What if we included the nearby utterances: "It engages the audience more" and "I just love it"? Would the sentiment change to positive? You be the judge of that! Note that it is highly subjective, but we can train a machine to detect these correlations automatically. Models Two main types of feature extraction methods are proposed: F1: Context-Independent Features (a.k.a. unimodal features for each modality) Textual feature extraction is performed using a convolutional neural network (CNN) where the input is the transcription of each utterance, which is represented by the concatenation of corresponding word2vec word vectors. (See paper for more details of the CNN) Audio feature extraction is performed using the openSMILE open-source software, where low-level features, such as voice intensity and pitch, are obtained. (See paper for more details on audio features) Visual feature extraction is performed using a 3D-CNN, where frame-level features are learned. (See paper for more details of the 3D-CNN) F2: Contextualized Features An LSTM-based network is adopted to perform context-dependent feature extraction by modeling relations among utterances. Basically, unimodal features are fed as input to an LSTM layer that produces contextualized features, as shown in the diagram below. Different variants of the LSTM model are experimented with, such as sc-LSTM (unidirectional LSTM cells), h-LSTM (dense layer ignored), bc-LSTM (bidirectional LSTMs), and uni-SVM (unimodal features are used directly with SVM for classification). Fusing Modalities There are basically two frameworks for fusing modalities: Non-hierarchical Framework — unimodal features are concatenated and fed into the various contextual LSTM networks proposed above (e.g., h-LSTM). Hierarchical Framework — The difference here is that we don't concatenate unimodal features; we feed each unimodal feature into the LSTM network proposed above. 
Think of this framework as having some hierarchy. In the first level, unimodal features are fed individually to LSTM networks. The outputs of the first level are then concatenated and fed into another LSTM network (i.e., the second level). (Check the diagram below for an overview of the hierarchy or see the paper for all the details) Datasets An important consideration in multimodal sentiment analysis is that person-independent datasets must be designed. In other words, train/test splits are disjoint with respect to speakers. The following datasets were used for the experiments: MOSI — contains video-based topic reviews annotated by sentiment polarity MOUD — contains product review videos annotated by sentiment polarity IEMOCAP — contains scripted affect-related utterances annotated by emotion categories Main Findings Hierarchy vs Non-Hierarchy: From the results in the table above we can observe that the hierarchical models significantly outperform the non-hierarchical frameworks (highlighted in green). LSTM variants: sc-LSTM and bc-LSTM models perform the best out of the LSTM variants, including against the uni-SVM model (results highlighted in red). These results help to show the importance of considering contextual information when classifying utterances. Modalities: In general, unimodal classifiers trained on textual information perform best compared to other individual modalities (results highlighted in blue). The exception was the MOUD dataset, which involved some translation. However, combining the modalities tends to boost the performance, indicating that multimodal methods are feasible and effective. Generalizability: To test for generalizability, the models were trained on one dataset (MOSI) and tested on another (MOUD). Individually, the visual modality carries the most generalizable information. Overall, fusing the modalities improved the model. (See paper for more qualitative analysis on the importance of contextualized information for multimodal sentiment classification.) Call for Research Here are a few ideas you can try to improve the current work: Currently, this work aims to evaluate methods on benchmark datasets, which are somewhat clean. You can try to collect your own datasets and label them automatically, yielding large-scale datasets. Also, keep in mind the domain; i.e., you can try to work on a different type of dataset that doesn't include reviews. It would be interesting to see more cases where contextualized information helps with sentiment classification. Also, a more advanced idea involves the fusion part of the framework. You can try to experiment with more sophisticated fusion techniques, such as those used here. Software: openSMILE — Software for extracting acoustic features from audio Dataset: MOSI Paper: Context-Dependent Sentiment Analysis in User-Generated Videos Presentation: Video Clip Have any other questions regarding this paper? Send me a DM @omarsar0.
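As an illustrative footnote (my own sketch, not the authors' implementation): the contextual feature extractor at the heart of the bc-LSTM variant can be expressed in a few lines of PyTorch. The feature dimension, hidden size, and class count below are made-up values; in the hierarchical framework, one such module per modality would produce contextualized features that are concatenated and passed through a second, identical module before classification.

```python
import torch
import torch.nn as nn

class ContextualLSTM(nn.Module):
    """bc-LSTM-style layer: contextualizes utterance features within a video."""
    def __init__(self, feat_dim=100, hidden_dim=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.dense = nn.Linear(2 * hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, utterances):
        # utterances: (batch, n_utterances, feat_dim), i.e., the
        # context-independent features from the CNN / openSMILE / 3D-CNN.
        ctx, _ = self.lstm(utterances)   # context-dependent features
        h = torch.relu(self.dense(ctx))
        return self.out(h)               # per-utterance class scores

# One "video" of 5 utterances, each a 100-d context-independent feature.
model = ContextualLSTM()
logits = model(torch.randn(1, 5, 100))
print(logits.shape)  # torch.Size([1, 5, 2]): one prediction per utterance
```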
State of the art Multimodal Sentiment Classification in Videos
73
state-of-the-art-multimodal-sentiment-classification-in-videos-1daa8a481c5a
2018-07-05
2018-07-05 21:26:34
https://medium.com/s/story/state-of-the-art-multimodal-sentiment-classification-in-videos-1daa8a481c5a
false
960
Diverse Artificial Intelligence Research & Communication
null
null
null
dair.ai
ellfae@gmail.com
dair-ai
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,RESEARCH,TECHNOLOGY,DATA SCIENCE
dair_ai
Machine Learning
machine-learning
Machine Learning
51,320
elvis
Researcher and Science Communicator in Machine Learning and NLP; I discuss more about Linguistics, Emotions, NLP, and AI here: (https://twitter.com/omarsar0)
41338000425f
ibelmopan
1,667
661
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-02
2018-02-02 12:59:42
2018-02-02
2018-02-02 16:42:03
2
false
en
2018-02-02
2018-02-02 16:54:02
8
1dac0a56ae3f
7.251258
32
2
0
Caveat: This is a personal view of work underway that I’m leading. What I describe is subject to incredible change as this policy work…
4
Towards Rules for Automation in Government Caveat: This is a personal view of work underway that I'm leading. What I describe is subject to incredible change as this policy work winds its way through government and consultations. Our approach may change for reasons that I'm simply not privy to, and that's fine. This is meant to solicit ideas, but also show the complexity of what it takes to make policy. I hope that people find it useful, particularly students of public admin. It also represents my view of the world only, and neither my organization's nor the Government of Canada's writ large. "Le Parlement, effet de brouillard" — Claude Monet, 1903. The best symbol for rule-making in a time of disruption. In my humble opinion, I have one of the greatest jobs in the country right now. A typical day is often chaotic, but at its heart, I get to be the pen on the rules that will govern how federal institutions can use AI and other forms of automation. To some (well, most) "designing rules" perhaps isn't as interesting as an actual AI project, but as someone with years of experience working in science policy and ethics, this represents one of the most challenging tasks of my career. I fervently believe that innovation happens closest to the user; as I sit in an ivory tower in a galaxy far, far away, I feel like my primary role is to facilitate and shape that innovation to be as productive and inclusive as possible. Alistair Croll talks about a "bias for action" while accepting, understanding, and managing externalities. What I'm trying to do is the latter part so that federal institutions can get to the former with less uncertainty. AI is a rapidly evolving space, and trying to create rules in a time of disruption is risky. Too severe and innovation can be hindered; this is unacceptable during a time when the Government of Canada is embracing digital culture. On the other hand, if the rules don't have meaning and teeth, Canadians will not be sufficiently protected from the negative outcomes of this technology, like this or this. Trying to strike the right balance between facilitating innovation and being protective of rights is a challenge, and one that benefits from ongoing discussions with different sectors across the country. It also means that I might work hard to build a consensus around a set of rules that we try out and have to scrap and redesign after a year in deployment because they don't work. Why rules around AI? To answer that question, I urge you to read the AI Now Institute's 2017 report, as it expands on the reasoning much more elegantly than I ever could. Delegating judgement in matters of government previously reserved by humans to machines is, in my opinion, worthy of rule-making. In short: All automated systems, whether they use decision trees or deep learning or another methodology, bring with them an issue of scale. Simply, if a human makes a series of errors, the impact of their errors is limited to those affected by the decisions that cross their desk. An employee silently harbouring racist biases limits the impact of their bias to their delegated decisions. An automated system making biased decisions, or decisions that reinforce inequalities, affects an entire population. It's critical that procedural fairness be maintained in an era where machine decisions play a growing role in the lives of Canadians. The government should be able to explain decisions about people, do so clearly, and with enough information that the decision can be challenged. 
Our systems don't only have to work well; the population has to trust them. My management gave me a pretty wide berth in this area (thanks!). So where did I start? Obviously with the toughest possible starting point! I have proposed the development of a "Treasury Board Standard on Automated Decision Support Systems" (title TBD), a binding policy instrument that will provide federal departments and agencies with a flexible, technology-agnostic set of rules designed to help them innovate responsibly with minimal paper burden. This policy would guide federal entities only, not the wider digital economy in Canada. It's an important distinction because most of the media requests I get ask about the broader picture, which is outside my mandate. Automated Decision Support Systems Automated systems that make recommendations or decisions about individuals have been in use for decades. Banking and insurance were the vanguard industries in these areas, and it is the concept of automated decision making that underpins, for example, each credit card transaction you make. The introduction of machine learning may have changed the way that the decision is made, but it does not take away from the fact that machines have been making lots of decisions for a long time about people, and society seems largely comfortable with this idea. An engraving of Lloyd's of London Subscription Room in 1809. The insurance market was one of the first examples of mathematical tables being used to assess the risk of human activity to reduce subjectivity. By Thomas Rowlandson (1756–1827) and Augustus Charles Pugin (1762–1832) (after) John Bluck (fl. 1791–1819), Joseph Constantine Stadler (fl. 1780–1812), Thomas Sutherland (1785–1838), J. Hill, and Harraden (aquatint engravers). But it is precisely because we are capable of automating more complex tasks than ever before that now is a good time to introduce guidance in this area. In short, automated decision support systems are those that either recommend or decide on a course of action about an individual or business, often as they apply for a benefit or service from a federal institution. The decision almost always flows from authority provided in legislation or regulation. Renewing a passport, licensing a natural health product, rating a pilot for a certain aircraft, or granting a patent are all examples of administrative decisions. Statistical research, for example, is not. Each has different risks to society associated with it, impacts people's lives and livelihoods differently, and requires varying levels of "paper burden"/bureaucratic effort to process. The system doesn't necessarily need to make the decision; it needs to support it in a direct and meaningful fashion. So a human can still provide the final nod, but if a machine provides an assessment of risk, for example, then it can be covered. That said, this scope can quickly spiral out of control. Does an AI system that reads incoming mail and provides it to the right desk analyst based on its content constitute an "administrative decision support system?" After all, timeliness of service can impact how a decision was made. The answer is likely no; there needs to be some closeness between the system and the rendering of the decision, otherwise I risk capturing every system the government uses, even though this is not the spirit or objective of the exercise. Why use these systems, given the risk? Because the reward seems to be worth it. 
I've written about this before, but I'll restate a couple of reasons: Human decision making is problematic too. We make decisions on uninterpretable hunches, show unconscious biases, and fluctuate in our analytical capacity over a given day, much less our lifetime. Machines can reduce this inconsistency. Machines can process applications much faster than people can. Is it ethical to make someone wait for 30 days to receive a notification of eligibility for a service, especially if we can give them an answer in a day? So there is clear motivation to pursue this technology. But how to maximize benefits and minimize drawbacks? Playing Pitfall! I was seven when I first played the notorious Atari game Pitfall! at my cousin's house in Toronto. Navigating this policy is like taking control of Pitfall Harry again and dreading the alligators and spike pits to come. Here's just some of the complexity we will need to face over the next few months. I should also mention here that this isn't the only thing I'm working on over the next few months, either. Application — Which institutions will be covered? The more covered, the more legislative nuances we'll need to consider, or exemptions to keep track of. Scope — Like I mentioned above, how do we design language around a scope that institutions will find helpful, or will interpretation be an overly subjective exercise? Requirements — Interpretability of models. Control of training data. Transparency of automated service. Security — physical and cyber. System audits. Contingency systems. Most importantly, can we scale requirements to the degree of impact that automating could have on society? Consequences — What happens if the rules aren't followed? What is a meaningful consequence, especially when legislation already provides some? What is the right balance between ensuring compliance but also allowing risk-taking? Roles and Responsibilities — What degree of central oversight is necessary, desired, and/or practical? What is the role of CIOs in an AI age, versus the emerging role of chief data officers, or more technology-minded business owners? Each of these elements will bring a difficult discussion around security vs. transparency. Honestly, I'm looking forward to them. The era of algorithmic government should be one defined by difficult discussions, particularly ones where there is a tension of values. Pitfall! was one of the most popular games for the Atari 2600 precisely because it was fun to overcome the titular obstacles. To start, I called a group of around 50 public servants representing a wide array of departments and agencies for a brainstorming session. There was palpable passion in the room around this subject, and overall the mix of data scientists, lawyers, ethicists, policy analysts and enterprise architects who participated generally supported the idea. I hope everyone does at the end of the day, but again, it is a long and winding road to the finish line. Of course, the devil is in the details, and there will be so many in this Standard that you will think Goethe wrote it. I personally believe that standard-setting, even in a limited context such as this, should be an open exercise. I endeavour to keep drafts of our work available to anyone interested in reviewing. This openness has led to a marked increase in the quality of our AI white paper and I feel that it would lead to more sophisticated and nuanced policy. Will the Standard see the light of day? Who knows. Maybe we just won't be able to come to agreement on an approach. 
Innovation doesn't always succeed, but we'll try and see what happens. So to conclude a long post, I'll leave a personal reflection. Rules don't hinder innovation, they target it. They assist action. They fit in a "just do it!" type of world. But rules can only accomplish this if they are designed correctly. We have to take calculated risks in our rule-making, and be prepared to continually iterate and improve rather than drop text and let it go stale. Like the systems they seek to guide, these bodies of rules should be updated to fix bugs and include new features based on the feedback of their users. I sincerely hope that the end result is government AI that reflects the values of Canadians and works towards the best outcomes possible. We have powerful new tools at our disposal; let's use them right. A French version of this article will be available next week. A link will be posted here.
Towards Rules for Automation in Government
131
towards-rules-for-automation-in-government-1dac0a56ae3f
2018-04-13
2018-04-13 16:16:25
https://medium.com/s/story/towards-rules-for-automation-in-government-1dac0a56ae3f
false
1,820
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Supergovernance
Hi! I’m Michael and I write about AI and government.
63a7f33b3689
supergovernance
328
279
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-11
2017-12-11 16:14:56
2017-12-11
2017-12-11 16:16:55
4
false
en
2017-12-11
2017-12-11 16:16:55
0
1dad6083a77e
0.730189
0
0
0
null
5
ChatBot: Why does your online store need it?
ChatBot: Why does your online store need it?
0
chatbot-why-does-your-online-store-need-it-1dad6083a77e
2017-12-11
2017-12-11 16:16:56
https://medium.com/s/story/chatbot-why-does-your-online-store-need-it-1dad6083a77e
false
8
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Debut Infotech
We transform your App Ideas into Reality.
3fcaefe1956f
helpthr
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-04
2018-09-04 18:53:30
2018-09-05
2018-09-05 13:27:30
16
false
en
2018-09-05
2018-09-05 16:47:14
45
1dae00246e42
10.09717
12
0
0
Valued Members of DeepBrain Chain Global Community:
5
DeepBrain Chain Monthly Report August 2018 Valued Members of the DeepBrain Chain Global Community: August was a significant and industrious month for the development of DeepBrain Chain. We are elated to have brought the DBC AI Training MainNet online, a particularly major event in the roadmap of DeepBrain Chain. Succeeding this milestone was yet another great stride in the journey towards a new era of AI as we proudly introduced DeepToken Exchange, the world's first digital asset exchange dedicated to AI. Concurrently, the team has been making major progress reaching out to AI clients, including students and teachers from renowned universities as well as AI companies in the U.S. and around the world. These esteemed clients will be the trailblazing pioneers of our DBC AI Training MainNet, providing the solid foundation upon which the structure of the DeepBrain Chain platform will develop and expand. 1. Marketing ⦁ Media Meetup (August 7th) On August 7th, DeepBrain Chain was invited to attend a media gathering with journalists from 30 finance and blockchain media agencies. With such a wide and influential audience, Feng He, our CEO, seized the opportunity to re-emphasize DeepBrain Chain's main proposition: using blockchain to solve AI companies' computing power bottleneck, thereby saving 70% on computing costs. Feng He promotes the DeepBrain Chain vision ⦁ NewBlockchain CHINA 2018 (August 11th) NewBlockchain CHINA is a series of blockchain events held in 20 Chinese cities. On August 11th, DeepBrain Chain CEO Feng He was invited to speak about DeepBrain Chain's innovative integration of the blockchain and how this technology provides the catalyst to bring about a new era of AI. CEO Feng He sits on the expert panel at NewBlockchain CHINA ⦁ 2018 China Blockchain Technology Applications Award (August 18th) On August 18th, the 2018 China Blockchain Technology Applications Conference, co-hosted by the State Information Center of China and the Shenzhen Municipal Internet Society and themed "Chain the World to Create a Better Future", gave voice to a heated discussion regarding the future of blockchain technology, including its potential applications in AI and, more generally, in real-world solutions. DeepBrain Chain was proud to be awarded the "2018 China Blockchain Technology Applications Award" and to receive media coverage from the Shenzhen Municipal TV Station. 2018 China Blockchain Technology Applications Award ⦁ "AI + Blockchain" Meetup (August 25th) On August 25th, DeepBrain Chain was invited to attend Shanghai's "AI + Blockchain" meetup, together with SingularityNET, Cortex and Bottos, to discuss trends in Artificial Intelligence and Blockchain integration. At the event, Eric, Marketing Director of DeepBrain Chain, shared the ecological strategic layout of our Artificial Intelligence program. Unlike other projects, DeepBrain Chain will employ a symbiotic relationship with DeepToken Exchange, the world's first AI industry digital asset exchange, to accelerate and proliferate the era of Artificial Intelligence integrated with Blockchain technology, to the mutual benefit of investors, providers and requesters. ⦁ Blockchain Tech Summit (September 2nd) On September 2nd, a Blockchain Tech Summit was held in Shanghai, hosted by Donghua University, one of the most prestigious universities in China. 
At the invitation of the host, Steve Miao, our Blockchain Test Architect, attended the summit and provided the audience with a technical demonstration, displaying the architecture of the DBC AI Training MainNet and the multiple advantages of distributed AI computing power. He also discussed the real-time AI training being conducted on the DeepBrain Chain platform, a topic which piqued the interest of the audience. Steve Miao, Blockchain Test Architect, conducts a technical demonstration 2. Media Attention Reports on DeepBrain Chain ⦁ Jinse Finance: "Constructing AI computing infrastructure DeepBrain Chain breaking artificial intelligence robustness" ⦁ Dark Reading: "The Enigma of AI & Cybersecurity" — CAO Dr. Dongyan Wang muses on the potentialities of a future in which AI is the basis of advanced cybersecurity ⦁ CryptoBriefing published "How Blockchain Will Work With The Cloud and AI" and mentioned DeepBrain Chain, specifically the ability of the platform to reduce computing costs by 70%. ⦁ Coin Report published "Unlocking Incentives with Blockchain", featuring contributions from DeepBrain Chain's CEO Feng He, discussing the ways in which blockchain technology achieves decentralized, self-sufficient operation via motivation of incentives akin to traditional market practices ⦁ IdeaMensch published an interview with DeepBrain Chain CEO Feng He, delving into behind-the-scenes stories about his business journey and the illustrious framework of ideas behind DeepBrain Chain. AI Training MainNet The DBC AI Training MainNet went live on August 8th, a technical milestone that allows AI users to conduct real-time AI training and pay an equivalent amount of DBC for computing resources. Teachers and students from universities and AI companies around the globe have started to use the DBC AI Training MainNet. These early users will provide the much-needed stepping stone to broader, actualized commercial availability of our platform, accompanied by the recognition and interest of seasoned industry partners. Currently, the Institute of Computing Technology of the Chinese Academy of Sciences has successfully accessed the DeepBrain Chain AI cloud computing platform. The Chinese Academy of Sciences, the Shanghai Institute of Microsystems, the Beijing Institute of Technology, "Work Together", other research institutions and typical AI company customers have begun to use the GPU power provided by DeepBrain Chain. Simultaneously, students from a number of colleges and universities in the United States (such as the University of California at Berkeley, the University of Texas, and the Illinois Institute of Technology) have begun using the DeepBrain Chain AI platform as the first AI training network. Sludgefeed report: "DeepBrain Chain Training Net Goes Live, Offering an Alternative for GPU Miners" PR Log: "AI Training Net — Affordable Computing Power Made Possible" Netease News: "DeepBrain Chain AI Training Officially Online to Create Global AI Cloud Computing Platform" Jinse Golden Finance: "DeepBrain Chain AI Training Officially Online" Russian Media Bitjournal Kriptovalyuta Crypto-mining Coinrater Coinrace Criptonomica Blockonomi: "DeepBrain Chain Goes Online: DLT Could Power AI Development" DeepToken Exchange Reports 24–7 Sludgefeed BlockTribune Reuters NASDAQ Yahoo Finance Yahoo Canadian Money Morningstar Digital Journey KOL Cryptocurrency YouTubers Keith Wareing and Crypto Fiend interviewed Dr. 
Wang, the Chief Artificial Intelligence Officer of DeepBrain Chain, and conducted in-depth discussions regarding DeepBrain Chain's ambitious venture with DeepToken Exchange, the world's first AI industry digital asset exchange. In the interviews, Dr. Wang explains how the mutually beneficial collaboration between DeepToken Exchange and DeepBrain Chain will provide computing support, algorithm model trading and big data circulation for AI enterprises, and enable the sharing of data use and ownership rights. Keith Wareing Interview Crypto Fiend Interview 3. Technological Progress Silicon Valley Team The popular "Warhawk" project, the largest online training institution for international students in China and the United States, will utilize the DeepBrain Chain AI training network and senior experts from Silicon Valley to train its students in AI application. Thousands of undergraduate and high school students who wish to stay in the US will become DeepBrain Chain users. Moreover, the data and models generated internally by this reputable AI application project will also be able to be shared and traded in the AI marketplace provided by the DeepBrain Chain platform. Cybereason, the world's leading endpoint security monitoring company, will establish a strategic partnership with DeepBrain Chain to jointly explore the application of the most advanced real-time "endpoint awareness" technology in the blockchain field. This cooperation will greatly enhance the reliability and security of DeepBrain Chain AI platform nodes around the globe. The AI application team's cross-industry AI video anomaly detection technology, developed on the DeepBrain Chain AI platform, will be presented at the World Smart Manufacturing Conference in October. The team will demonstrate blockchain technology to intelligent manufacturing professionals from around the world to advance the efficiency of cross-industry AI technology. The AI application development team successfully completed the training of a deep learning model for the detection of surface flaws in steel plates, with distributed high-efficiency training of complex models on 2, 4, and 8 GPUs using Horovod. The AI application team trained an arrhythmia detection AI model for portable electrocardiograph manufacturer QT Medical, achieving a detection accuracy of over 97% and successfully demonstrating the AI software development capability of the DeepBrain Chain platform to service hardware manufacturers' AI requirements. A Slovenian data center with 768 enterprise-class GPUs and huge storage space has become a DeepBrain Chain node in Europe, laying the foundation for providing enterprise-class high-end users with secure and reliable AI computing power. Shanghai Team Key tasks of the R&D team included the AI Training Net v0.3.4.0 release, v0.4 main chain design, and more: 
⦁ Architecture design: multi-chain + sharded architecture network; cross-shard transaction design; state sharding; organizational technology sharing and discussion of the state sharding architecture design ⦁ DGP solution analysis: read and analyze DGP source code ⦁ Data encryption scheme analysis; container enhancement technology ⦁ Testing: v0.3.4.0 completed the second round of testing and the entire network was upgraded ⦁ Development: v0.3.4.0 fixed the second round of test problems and upgraded the whole network; AI Training Net v0.3.4.0 manual refresh; further enhanced the UT unit test code for key components of the Matrix core 4. Future Events DeepBrain Chain and Silicon Valley Business School bring ConsenSys, Danhua Capital, and the Defengjie founder to GDIS GDIS October 2018 On October 1, 2018, the second GDIS Global Disruptive Innovation Summit (and the first International Blockchain Expo) will be held in Silicon Valley, focusing on three major themes: blockchain application and investment, blockchain + artificial intelligence, and blockchain technology talent. The conference is divided into two sections. The main venue area will be used by top technical experts to share their extensive blockchain experience, well-known investors will impart relevant wisdom, and dinners are arranged for VIPs and speakers. The event provides a global, renowned social platform for investors, entrepreneurs and technical elites, engendering the best communication opportunities. The Summit will present the "Global Disruptive Innovation Award" to quality projects and companies participating in the exhibition. The Expo Pavilion will gather a number of blockchain and AI companies to showcase emergent products and technologies while also providing opportunities for all participants to seek out valued business cooperation and talent. World-renowned, high-caliber guests will be in attendance at the event, providing cogent and wise insight into the topic of blockchain and artificial intelligence! Just some of the experts scheduled to be in attendance at GDIS: Tim Draper, the famous American venture capitalist, founder of the DFJ Investment Fund, third generation of the Draper family and originator of Silicon Valley venture capital. Some of the well-known companies he has invested in include Baidu, Tesla, Hotmail, Skype, and SpaceX. Shoucheng Zhang, the founder and chairman of Danhua Capital, a member of the National Academy of Sciences, a tenured professor of physics at Stanford University, and the discoverer of "Angel Particles". He is committed to investing in the most disruptive innovative technology in the United States, covering a number of emerging and expanding fields such as blockchain and artificial intelligence. Guo Hongcai, a famous angel investor, early proponent of Bitcoin, and the founder of the Digital Money Training Club known as the "Digital Currency Circle Whampoa Military Academy". He is the lead consultant for several well-known ICO projects and is the founder and promoter of BitcoinGod, the charitable eco-blockchain project, dedicated to the promotion and development of global blockchain businesses. Experts and enthusiasts in the field of artificial intelligence and blockchain are welcome to purchase tickets! 
Sponsorship and media cooperation please contact: contact@deepbrainchain.org DeepToken Exchange Overseas Road Show DeepBrain Chain will be bringing DeepToken Exchange, the world's first AI industry digital asset exchange, on its first overseas road show. The currently planned stops include Vietnam (September 12), Thailand (September 15) and South Korea (TBA). CEO Feng He will visit Vietnam and Thailand to raise international awareness of the ways in which DeepBrain Chain and DeepToken Exchange work in harmony to create the "Blockchain + Artificial Intelligence" ecosystem and contribute to the development of the AI industry. Stay tuned for further information. 5. Recruitment Warm Welcome to a New Member of the Silicon Valley Team Meimei Ouyang, Personnel Director Meimei Ouyang is from Chengdu, Sichuan. A year after graduating from university in Beijing, she went to the United States to obtain a Master's Degree in Human Resources Development. Subsequently, she has worked in the United States for more than 10 years. She believes that life is like a journey and likes to constantly learn and challenge herself. She has worked in both established companies and startups, accumulating relevant experience in recruitment, employee relations, compensation and benefits, institutional processes, corporate culture, and leadership training. In her spare time, she enjoys playing the piano, reading, traveling and spending time with her family. We are hiring development engineers, security architects and other positions. All self-provided/recommended resumes are given feedback within 48 hours, and we welcome external referrals. If you successfully recommend a friend/classmate/colleague, one month after the candidate joins you will be given a referral reward of 20,000 RMB. Contact email: erin@deepbrainchain.org Please attach the resume of the person you are recommending, as well as referee contact information, so that we can issue bonuses as awarded. YouTube Medium , Apple Podcast Twitter , Facebook Telegram English Group: ( https://t.me/deepbrainchain ) Telegram Korean Group: (https://t.me/DeepBrainChainKor ) Telegram Vietnam Group:( https://t.me/DeepBrainChainVietnam) Telegram Indonesia Group:(https://t.me/DeepBrainChainIndonesia ) Telegram Thai Group: ( https://t.me/DeepBrainChainThai ) Telegram Russian Group:(https://t.me/DeepBrainChainRussia ) Telegram Mining Machine Group:(https://t.me/DeepBrainChainAIminers ) Twitter: (http://twitter.com/DeepBrainChain ) Facebook Page:( https://www.facebook.com/OfficialDeepBrainChain/ ) Reddit: ( https://www.reddit.com/r/DeepBrainChain/ ) About DeepBrain Chain DeepBrain Chain is the world's first AI computing platform driven by blockchain. It uses blockchain technology to help AI companies save up to 70% of computing costs while protecting data privacy in AI training. Its vision is to build a "Decentralized AI Cloud Computing Platform" and become "The AWS in AI". -Yours sincerely, the DeepBrain Chain Team https://www.deepbrainchain.org
DeepBrain Chain Monthly Report August 2018
394
deepbrain-chain-monthly-report-august-2018-1dae00246e42
2018-09-05
2018-09-05 16:47:14
https://medium.com/s/story/deepbrain-chain-monthly-report-august-2018-1dae00246e42
false
2,265
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
DeepBrain Chain
AI Computing Platform Driven By Blockchain
379a9e7edef2
DeepBrain_Chain
960
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-20
2018-06-20 12:50:10
2018-06-20
2018-06-20 12:53:03
1
false
en
2018-06-27
2018-06-27 12:58:45
0
1daffc5ec7e1
3.486792
0
0
0
Artificial intelligence has become a trending topic in many fields, not only in the technology or IT industry. The term artificial intelligence…
5
7 Ways How Artificial Intelligence Can Be Used in Marketing Artificial intelligence has become a trending topic in many fields, not only in the technology or IT industry. The term artificial intelligence itself refers to any type of technology that can mimic human intelligence, and it has many subfields like machine learning and deep learning. In this blog, you will learn how AI is used in marketing and find the best strategies you can use on your website to create a better customer experience with AI. 1. Search Process In the past, search engines were based merely on keywords and how well you covered those keywords on your website, but search engines have become smarter and now understand how your content delivers a worthwhile user experience. With the use of AI, search engines will focus more on high-quality content that engages the users. In other words, they focus more on content written for humans, not content designed only for the search engine. Search engines have started to analyze factors like location, date, time and device type to understand user intent in the search query. Moreover, AI is now developing to predict the user's next query after they finish the main search. A good example of that is Wikipedia, as it uses data schema to predict what related information will seem interesting to the users next. 2. Content Curation With thousands of content repositories on the internet, content curation becomes easier each day, but discovering the content that will fit best on your website is a hard process. However, AI identifies trending topics that go with your keywords and recognizes the best topics that are suitable for your audience. AI is also used to create content that requires data analysis and real-time response from the user. It brings a new dimension to content curation, making it faster and more focused. 3. Chatbots Customer relationship management (CRM) is a crucial part of every successful business. Many companies now take advantage of AI to automate and update their CRM, and chatbots are the best application of that automation. Chatbots use machine learning and AI to create various algorithms that mimic human interaction, which creates a more intimate relationship with the customers. Currently, they are able to do small things like answering questions and fulfilling orders. In the future, chatbot technology will develop to the extent that its architecture will become standard for customer service. 4. Voice Search The evolution of voice search queries is making marketers optimize for natural language to make their websites sound more conversational. As mobile phones play an active part in the search process, it's estimated that by 2020, 50% of searches will be voice activated. Because AI depends on rich, real-time data, it can give the user the accurate information they need from their voice search. Nowadays marketers have to optimize their sites in a conversational way that answers a complete question. That is because we type in a shorthand way, but when we speak we pronounce complete phrases and questions. 5. Personalized User Experience People always love content that gives them a personal experience and focuses mainly on them and their problems, challenges, etc. Over 33% of marketers now use AI to deliver personalized content by analyzing users' location, interests, past likes and demographics. 
With the help of AI, data analysis becomes easier, and you can send your customers regular notifications and display relevant content related to their experience. AI also helps marketers not only create personalized content but also analyze which parts of their content visitors find engaging. In this way, they are creating engaging and customized content for their target audience. 6. Dynamic Pricing On e-commerce sites, things like discounts and coupons help increase sales. With AI and dynamic pricing, marketers can send offers and discounts only to people who are actually interested in offer campaigns. This will help the company save more money and maximize its profits. 7. Propensity Modeling When you segment your customers, you group them based on shared experience and behavior. But with the use of propensity modeling in marketing, you will be able to divide your audience into categories based on their anticipated behavior, not their previous behavior. Propensity modeling correlates customer characteristics with expected behaviors or propensities, so you can create predictive analytics in many areas of your business, like predicting which price your customers would love, or which actions and offers will make your customers repeat purchases from your brand (a small sketch of this idea appears at the end of this post). Wrapping Up: Artificial intelligence is now part of everything in our lives, and it is a promising technology that can work in a number of different fields. In the field of marketing, AI has proven itself a very powerful technology that can create a great user experience and increase revenue. If you decide to use it in the right way, it will help you build solutions for your business and at the same time make time-consuming tasks easier for you.
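As a closing illustration of the propensity modeling idea above (a minimal sketch with made-up customer data and column names, not a production recipe), a simple logistic regression can map customer characteristics to the probability of an anticipated behavior, here redeeming a coupon, so that offers go only to high-propensity customers:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical customer table: characteristics in, observed behavior out.
customers = pd.DataFrame({
    "visits_last_30d": [1, 12, 3, 8, 0, 15],
    "avg_order_value": [20, 55, 35, 60, 10, 80],
    "emails_opened":   [0, 9, 2, 7, 1, 11],
    "redeemed_coupon": [0, 1, 0, 1, 0, 1],  # outcome the model learns from
})
X = customers.drop(columns="redeemed_coupon")
y = customers["redeemed_coupon"]

# Scale the features and fit a logistic regression propensity model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score new customers; send the offer only above a chosen threshold.
new = pd.DataFrame({"visits_last_30d": [2, 10],
                    "avg_order_value": [25, 70],
                    "emails_opened":   [1, 8]})
for propensity in model.predict_proba(new)[:, 1]:
    print("send offer" if propensity > 0.5 else "skip", round(propensity, 2))
```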
7 Ways How Artificial Intelligence Can Be Used in Marketing
0
7-ways-how-artificial-intelligence-can-be-used-in-marketing-1daffc5ec7e1
2018-06-27
2018-06-27 12:58:45
https://medium.com/s/story/7-ways-how-artificial-intelligence-can-be-used-in-marketing-1daffc5ec7e1
false
871
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Allaa Muhammad
Content writer and Social Media Specialist with experience in writing educational blogs for the companies to add value to their brand
2b7f4d86756d
allaamuhammed11
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-27
2018-03-27 16:45:47
2018-02-17
2018-02-17 16:15:45
0
false
en
2018-05-29
2018-05-29 18:46:59
1
1db01ed11ece
0.856604
0
0
0
The Trademindx roadmap has been published, it’s an ambitious project and we are excited to be working through it with good momentum. In…
5
Trademindx Roadmap Q1 2018 Update The Trademindx roadmap has been published; it's an ambitious project and we are excited to be working through it with good momentum. In this article, we take a deeper dive into the first 3 milestones (phase 1), provide updates on our progress and expand on some of the high-level details! Q4 / 2017 Project Launch The Trademindx project was started at the end of 2017 with Alex coming up with the initial concept. We created our website and published our whitepaper. We gathered feedback from our network and the community and made improvements. We also kicked off the technical design, broke the project down into smaller deliverable modules and started building out the first prototype. The Trademindx project was launched! Q1 / 2018 Prototype Development The focus during the start of 2018 has been to complete the first working prototype — which we have now successfully done! It was an incredible moment when Trademindx came to life: it backloaded all historical market data points for all cryptocurrencies with a market cap of more than $100m, started sourcing news feeds, streaming Twitter feeds, getting real-time market data ticks, calibrating its internal artificial intelligence models, applying natural-language processing algorithms and… then it actually started making market movement predictions! Read the full article on the Trademindx blog… Originally published at trademindx.com on February 17, 2018.
Trademindx Roadmap Q1 2018 Update
0
trademindx-roadmap-q1-2018-update-1db01ed11ece
2018-05-29
2018-05-29 18:47:00
https://medium.com/s/story/trademindx-roadmap-q1-2018-update-1db01ed11ece
false
227
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Trademindx Official Blog
Artificial Intelligence for Cryptocurrency Trading — https://trademindx.com
f8cb75d93ce3
trademindxofficial
3
2
20,181,104
null
null
null
null
null
null
0
null
0
4784cbb01f20
2018-07-17
2018-07-17 05:30:10
2018-07-17
2018-07-17 10:18:00
4
false
en
2018-07-17
2018-07-17 10:18:00
4
1db0ccaecfa9
4.356604
6
0
0
Last year was a nightmare for the Indian IT workforce. Almost every top Indian IT company had its share of layoffs. This is likely to…
5
You might lose your job soon, if you don't start reskilling! Last year was a nightmare for the Indian IT workforce. Almost every top Indian IT company had its share of layoffs. This is likely to continue for years to come. In 2017, Wipro and Infosys, along with five other top Indian IT companies, laid off more than 56,000 of their employees, sending a chill down the spines of almost every employed engineer. The layoffs were attributed to the adoption of automation by these companies, demand for skilled people in emerging technologies that are replacing older ones, and the inability of people to be re-skilled in emerging technologies. Worried about losing your job? Don't fret. Learn some emerging technologies and prepare yourself for the upcoming opportunities. Get started here. In 2016, Infosys alone had laid off almost 9,000 employees and its spree of layoffs continued into 2017, while Cognizant had to let go of 6,000 employees. In fact, a few companies took desperate measures and had their employees forcibly put in their papers, which even landed them in legal trouble. The fact of the matter is that during the last couple of years a very large fraction of IT employees lost their jobs. Lack of skilled workforce in emerging technologies The layoffs have majorly affected professionals with five to six years of experience, while at a few companies they have reached up to senior managers, VPs, and other professionals in senior management roles. Meanwhile, hiring among these companies has also been down. The technologies these companies train for have been replaced with new technologies, which has made employees with expertise in older technologies redundant. The new technologies are more efficient and demand a different skill set with little to zero connection to older technologies. The lack of a skilled workforce in these new technologies has slowed down hiring among companies as well as forced companies to re-equip their employees with knowledge of the new technologies. A report in the Economic Times states "the sector witnessed a decline in hiring as Indian IT services companies such as Infosys, Tata Consultancy Services, Wipro, HCL Technologies and others shifted focus from "scale to skill"". Save yourself from imminent layoff by re-skilling yourself in emerging technologies. Companies are taking initiatives to upskill their employees, but the fear of employees leaving after gaining the skills and joining some other company holds these companies back from taking further steps in this direction. Re-skill to avoid becoming redundant A recent survey conducted by NHRD, which included 200 CHROs (Chief Human Resource Officers), noted that Indians need to upskill multiple times in their careers to fill the talent gap, as skills will be the key driver determining the pay scales of working professionals in the coming days. Moving to new technologies is a progressive step for the industry, but it is a threat to the job security of employees. Start learning emerging technologies and save yourself against imminent layoff Over the last few years, new job roles have been created that didn't exist before. These new jobs were created by the application of cloud technologies to enterprise operations, making them operationally efficient. Lately, the amount of data available with enterprises, and the potential it harbours to solve challenging business problems, has created new jobs. Big data, data science and machine learning are a few of these jobs. 
These jobs require a fundamentally different skill set from the jobs that existed before, and require people to learn skills from the ground up. The following are some of the prominent job roles and the respective skills that they require. Data Scientist: This job role requires knowledge of R, Python, and SAS programming, inferential statistics, and data visualisation tools like Tableau. The major task of Data Scientists is to solve operational and business challenges using the huge amount of data available with companies. The average salary of Data Scientists in India is approx. ₹6 LPA. Major companies that hire Data Scientists with the best-in-market salary are HSBC, Fractal Analytics, McKinsey & Co, and BCG, amongst others. Blockchain Developer: Lately, cryptocurrency has gained ground in India, and this has consequently given rise to a number of startups working in the cryptocurrency space. Hence, there’s huge demand for Blockchain Developers. Blockchain development requires a Developer to have strong programming skills and familiarity with crypto platforms such as Ethereum, HashGraph, Hyperledger etc. Machine Learning Expert: This job role requires knowledge of probability and techniques like Markov Decision Processes and Bayesian Networks, statistics, data modelling and more. The average salary of a Machine Learning Expert in India is ₹7–9 LPA. Full-stack Developer: Full-stack developers are expected to be familiar with functional coding and some frequently used web development technologies such as Node.js, Redis, the Angular/React frameworks, AWS etc. These are major web development technologies which have been increasingly in demand among companies. The average salary of a Full-stack Developer in India is ₹6 LPA. Professionals with some form of experience in managing and manipulating databases are moving to these jobs as the scope for database administration diminishes. Similarly, full-stack development, UI/UX development, etc. are some of the new jobs that companies are looking to hire for. These jobs require knowledge of advanced web technologies that didn’t exist a few years ago, which has left working professionals with no option but to learn these new technologies and progress in their careers, or risk being replaced by a younger generation with knowledge of the new technologies. edWisor is a re-skilling/up-skilling platform where you can learn new technologies and skill yourself for new job roles in the IT industry. You can learn skills at your own pace and work on projects, so for working professionals it offers complete flexibility with respect to learning time. edWisor also offers to help working IT professionals looking to switch to better-paying job roles and get them hired.
You might lose your job soon, if you don’t start reskilling!
260
you-might-lose-your-job-soon-if-you-dont-start-reskilling-1db0ccaecfa9
2018-07-17
2018-07-17 10:18:01
https://medium.com/s/story/you-might-lose-your-job-soon-if-you-dont-start-reskilling-1db0ccaecfa9
false
969
edWisor is stimulating hundreds of students & professionals to become experts to get their Dream Job and steer their career to next level. We aim to solve the skill problem and unemployability in India and the World. #GetSkilled #GetHired
null
edwisorIndia
null
edWisor
contactus@edwisor.com
edwisor
JOBS,SKILLS,DATA SCIENCE,MEAN STACK,CAREER ADVICE
edWisorindia
Data Science
data-science
Data Science
33,617
edWisor
Get Skilled! Get Hired!
dfecd292929e
edWisor
108
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-09
2018-01-09 21:21:33
2018-01-11
2018-01-11 18:47:11
8
false
en
2018-01-30
2018-01-30 01:22:53
11
1db1dd13c69b
2.650314
0
0
0
Here is a collection of content that I have created over the past few years. Many more are available upon request.
4
George Demarest work portfolio Here is a collection of content that I have created over the past few years. Many more pieces are available upon request. MapR eBook: Architect’s Guide to Implementing a Digital Transformation read the Architect’s Guide eBook The MapR ebook Architect’s Guide to Implementing a Digital Transformation that I authored is based on extensive research of MapR customer production use cases. Compiled from over 200 separate customer use cases, the guide suggests an operational maturity model for big data programs within enterprise IT. This model considers the number and complexity of use cases, the type and amount of data used, and a number of other useful metrics. To date, this piece has had over 3000 downloads and is still actively promoted by MapR through social channels. Bigstream website redesign visit bigstream.co The redesign of the Bigstream website from a simple, flat WordPress site to an SEO/SEM-optimized, high-touch, solutions-oriented destination was a fairly tricky task. We needed to strike the right balance between the necessary technical depth and the practical applicability to business and IT processes. Bigstream Poster: Accelerating Big Data Workloads with FPGAs view the full size poster image This poster was designed for, submitted to, and accepted by the Program Committee of the Hot Chips 2017 conference (HC29) in August 2017. This poster was one of only 8 selected from “a record number of poster submissions” and lays out how big data workloads can be accelerated using FPGAs. MapR Guide to Big Data in Healthcare read the Healthcare Guide Like the Architect’s Guide mentioned above, the basis for the MapR Guide to Big Data in Healthcare was the profiling of numerous MapR healthcare customers such as Valence Health, UnitedHealthcare, and Novartis. The guide also examines the data, stakeholders and use cases that are most instructive. Bigstream: Fast Data in Finance solution brief read Fast Data in Finance Developed to target an executive persona in the financial services and fintech sectors, the Fast Data in Finance solution brief was created to target very specific use cases that are top of mind for quants and analytics teams. MapR Blog: When Big Data Breakthroughs Occur read the blog In this blog, I introduce the notion of the big data maturity model and provide some context from MapR customer experiences in deploying large-scale data platforms. CMSWire article: Following the Data Through a Digital Transformation read the CMSWire article This CMSWire article, Following the Data Through a Digital Transformation, predates the maturity model developed in the Architect’s Guide but clearly shows that the main ideas behind the book were on my mind.
George Demarest work portfolio
0
george-demarest-work-portfolio-1db1dd13c69b
2018-01-30
2018-01-30 01:22:55
https://medium.com/s/story/george-demarest-work-portfolio-1db1dd13c69b
false
402
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
George Demarest
null
8213028020e8
gfdemarest
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-16
2018-07-16 18:48:58
2018-07-16
2018-07-16 18:52:45
1
false
en
2018-10-10
2018-10-10 21:39:06
4
1db22ba7d893
2.649057
0
0
0
Thomas John is the Senior Director and Commerce Practice Lead at Rightpoint. Connect with Thomas on Twitter and LinkedIn.
5
The Future of Commerce Search Thomas John is the Senior Director and Commerce Practice Lead at Rightpoint. Connect with Thomas on Twitter and LinkedIn. For a few years now, the use of Artificial Intelligence and Machine Learning in online Commerce has steadily increased. The early adopters and retail’s big players have primarily led in this area due to the high cost. 2018 could be the year this starts to change. Machine Learning is becoming more affordable and widely obtainable for businesses outside of the Commerce giants, and this shift will influence the future of commerce search. At the recent Coveo Impact 2018 conference, there was an almost palpable sense that the future of Search must also include Artificial Intelligence (AI) and Machine Learning (ML). These are closely related subjects that are gaining wider acceptance in a number of areas, including search context, personalization and marketing technology. There are so many exciting developments set to be released to the market soon, including the latest development from Coveo. I’m really looking forward to seeing how these take shape and specifically impact the future of Commerce Search. Impact of AI for Merchandisers and Product Content Managers Business Tools that are enhanced with Artificial Intelligence can help Commerce teams (merchandisers, product line managers) make the vast amounts of data within their platform work for them, thereby converting it into a valuable resource. The task of optimizing search in Commerce scenarios is normally a manual one. For example, building synonym dictionaries typically involves an analysis of lengthy search logs and search result reports to identify the search terms. AI-enhanced Business Tools can surface these terms without lengthy analysis and suggest exactly which synonym dictionary they should be added to (a small sketch of this idea follows the article). The merchandisers and product line managers see the immediate benefit, and the site user ends up seeing much better search results with fewer “no results” pages. AI-Infused Search and Recommendations Consumers’ expectation of search is changing in the Commerce space. The importance of the search bar has shifted. It is now more important to present the shopper with the right product at the right time, regardless of how the search takes place. This concept of transactional experience is blending search functionality into a contextual presentation. Search solutions such as Coveo are taking this change to heart and positioning their platform to leverage AI and ML. Search platforms are no longer just utilized to crawl content and present relevant search results. They are positioned to be a critical part of the enterprise which can consume data from multiple sources and provide recommendations and decisioning tools. Businesses can utilize AI and ML to make search even better by using them to power personalized and contextual recommendations for the shopper. Based on site awareness, before a shopper starts typing a single letter into the search bar, they can be presented with search recommendations, or a widget can offer personalized recommendations based on prior activity. As the level of experience expectation from shoppers increases, businesses will be forced to adapt and will have to leverage more AI and ML to respond. Wrapping Up As search platforms adapt their mindset and offerings to reflect “more than search,” we will start to see the impact across the enterprise. Search technologies will be leveraged to mine and process data from various sources. 
AI and ML will be the engine that drives the processing and interpretation of data. We are already seeing some of the impact in this space with the advances in automated bots within Commerce storefronts. These bots derive intelligence from the data that has been harvested and processed within search platforms. Soon the same changes will be coming to AR/VR and IoT applications for Commerce scenarios. I am excited about the future of Commerce Search as we start to see the advanced application of AI and ML within the traditional search space.
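To make the synonym-dictionary idea above concrete, here is a minimal sketch (not Coveo's or Rightpoint's implementation) of surfacing synonym candidates from search logs by comparing which products different queries lead shoppers to click. The log, the threshold, and all names here are hypothetical.

```python
# Suggest synonym candidates from a (query, clicked_product_id) log:
# queries whose click sets overlap heavily likely mean the same thing.
from collections import defaultdict
from itertools import combinations

log = [
    ("sofa", "p1"), ("sofa", "p2"),
    ("couch", "p1"), ("couch", "p2"), ("couch", "p3"),
    ("lamp", "p9"),
]

clicks = defaultdict(set)
for query, product in log:
    clicks[query].add(product)

def jaccard(a, b):
    # Overlap of two click sets, between 0 (disjoint) and 1 (identical).
    return len(a & b) / len(a | b)

# Flag pairs whose clicked-product sets overlap above a threshold.
for q1, q2 in combinations(clicks, 2):
    score = jaccard(clicks[q1], clicks[q2])
    if score >= 0.5:
        print(f"synonym candidate: {q1!r} ~ {q2!r} (overlap {score:.2f})")
```

On this toy log, 'sofa' and 'couch' surface as a candidate pair that a merchandiser could then accept into the relevant synonym dictionary; in production the same idea would run over millions of log rows, but the core computation is this simple.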
The Future of Commerce Search
0
the-future-of-commerce-search-1db22ba7d893
2018-10-10
2018-10-10 21:39:06
https://medium.com/s/story/the-future-of-commerce-search-1db22ba7d893
false
649
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Rightpoint
Rightpoint is an independent customer experience agency with technology at our core.
89be00badb49
Rightpoint
37
22
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-30
2018-04-30 08:35:11
2018-04-30
2018-04-30 08:49:28
4
false
en
2018-06-07
2018-06-07 16:25:07
6
1db3de178463
7.009434
1,784
15
1
Artificial Intelligence is a state-of-the-art technological trend that many companies are trying to integrate into their business. A recent…
5
GAME-CHANGING TRENDS TO LOOK OUT FOR WITH AI Artificial Intelligence is a state-of-the-art technological trend that many companies are trying to integrate into their business. A recent report by McKinsey states that Baidu, the Chinese equivalent of Alphabet, invested $20 billion in AI last year. At the same time, Alphabet invested roughly $30 billion in developing AI technologies. The Chinese government has been actively pursuing AI technology in an attempt to control a future cornerstone innovation. Companies in the US are also investing time, money and energy into advancing AI technology. The reason for such interest in artificial intelligence is that it can enhance any product or function. This is why companies and governments make considerable investments in the research and development of this technology. Its role in increasing production performance while simultaneously reducing costs cannot be overstated. Photo by Matan Segev from Pexels Since some of the largest entities in the world are focused on promoting AI technology, it would be wise to understand and follow the trend. AI is already shaping the economy, and in the near future, its effect may be even more significant. Ignoring the new technology and its influence on the global economic situation is a recipe for failure. The current state of AI and the challenges it is facing Despite the huge public interest and attention towards AI, its evolution is still somewhat slowed by objective causes. Like any new and fast-developing industry, AI is quickly outgrowing its environment. According to Adam Temper, the author of much creative research on artificial intelligence, the development of AI is mostly limited by the “lack of employees with relevant expertise, very few mature standard industry tools, limited high quality training material available, few options for easy access to preconfigured machine learning environments, and the general focus in the industry on implementation rather than design”. With any new complex technology, the learning curve is steep. Our educational institutions are several steps behind the commercial applications of this technology. It is important that AI scientists work collaboratively, sharing knowledge and best practice, to address this deficiency. AI is rapidly increasing its impact on society; we need to ensure that the power of AI doesn’t remain with the elite few. Another factor that may be hindering the progress of AI is the cautious stance that people tend to take towards it. Artificial intelligence is still too sci-fi, too strange and, therefore, sometimes scary. When people learn to trust AI, it will take a true quantum leap toward general adoption and application. Adam Temper supports this point, too, describing the possible ways for AI technology to gain public trust as “more media attention on positive human impact AI applications, e.g. medical diagnostics and scientific discoveries, general awareness that progress in the field is 100% human driven… Self-improving AI is some way off! Appreciation of the number of new jobs this technology will create, employment shift from manual/repetitive tasks to those with higher creativity seen in a positive light.” At the same time, if we analyze the primary purpose of AI, we will see it for what it really is: a tool that performs routine tasks, freeing humans for something more creative or innovative. 
When asked about the current trends and opportunities of AI, Aaron Edell, CEO and co-founder of Machine Box, and one of the top writers on AI, described them as follows: “Artificial intelligence is a broad term that generally refers to any computer system that replicates human intelligence. As such, a lot of use cases can come to mind from predicting the stock market to vacuuming floors. Where the real opportunity in artificial intelligence lies is in the ability to perform pattern recognition to automate repetitive tasks. At Machine Box, we see a lot of success with customers who are trying to solve a problem centered around automating a task that a human might not be very efficient at, such as tagging people, products, and locations in millions of images or recommending content to people they’re more likely to engage with. Some of these tasks are impossible to scale without something like machine learning to take on the computational burden. The challenges to machine learning are now mostly around having the right combination of use cases and training data. By taking advantage of new research into reinforcement and transfer learning, we’ve built our tools to minimize the amount of training data you need, which is important because where AI and machine learning will succeed is in the ability to tune the models to your data and your needs. A single, grand, machine learning model trained with all the data in the world will not perform as well as one created with your data. What we’re going to see in the near term are smaller, more focused machine learning models that will learn quickly, and can be deployed on-demand like a compute or storage resource.” AI Influence on Domestic Politics AI has also become a political talking point in recent years. There have been arguments that AI will help to create jobs, but that it will also cause certain workers to lose their jobs. For example, estimates suggest that self-driving vehicles could cause 25,000 truck drivers to lose their jobs each month. Also, as many as 1 million pickers and packers working in US warehouses could be out of a job. This is because, by implementing AI, factories can operate with as few as a dozen workers. Naturally, companies gladly implement artificial intelligence, as it ensures considerable savings. At the same time, governments are concerned about the current employment situation as well as the short-term and long-term predictions. Some countries have already begun to plan measures around the new AI technology that are intended to keep the economy stable. In fact, it would not be fair to say that artificial intelligence causes people to lose jobs. True, the whole point of automation is making machines do what people used to do before. However, it would be more correct to say that artificial intelligence reshapes the employment situation. While taking over human functions, it also creates other jobs, forces people to master new skills, and encourages workers to increase productivity. But it is obvious that AI is going to turn the regular sequence of events upside down. Photo by McKylan Mullins from Pexels Therefore, the best approach is not to wait until AI leaves you unemployed, but rather to proactively embrace it and learn to live with it. As we already said, AI can also create jobs, so a wise move would be to learn to manage AI-based tools. With the advance of AI products, learning to work with them may secure you a job and even advance your career. 
Investing Into Your Future Your future largely depends on your current and expected income. However, another important factor is the way you manage your finances. Of course, investing in your own or your children’s knowledge is one of the best investments you can ever make. At the same time, if you need some financial cushion to secure your family’s welfare, you should look at the available investment opportunities. And this is where artificial intelligence may become your best friend, professional consultant and investment manager. In recent years, in addition to the traditional banks and financial institutions, we have witnessed the appearance of a totally new and innovative investment system. We are talking about blockchain technology and the cryptocurrencies that it supports. Millions of people all over the world have already come to appreciate the transparency and flexibility of blockchain networks. By watching the cryptocurrency trends carefully and trading wisely, individual investors have made fortunes within a very short time. Nowadays, cryptocurrency opportunities are open to everyone, not only to industry experts. There are investment funds running on artificial intelligence that are available to individual investors. With such funds, you are, on one hand, protected by the blockchain technology. It ensures proper safety of your funds and the security of your transactions. On the other hand, you do not need to be an investment expert to make wise decisions. This is where artificial intelligence is at your service. It analyzes the existing trends on the extremely volatile cryptocurrency market and shows you the best opportunities. AI Opportunities The main point is that we should not regard AI as a threat to our careers and a danger to our well-being. Instead, we should analyze the investment openings created by AI technology that can secure our prosperity. For example, Wolf Coin is using AI technology to create a seamless investment channel for savvy individuals. This robust channel opens great opportunities that investors can use to become the new rich kids on the block. Most noteworthy, the low entry cost of $10 makes it an offer that is likely to enjoy huge buzz. The focus on this new market opening will help people build a solid financial nest egg that will keep them safe even in the face of a storm. Wisewolf Fund, which is launching the Wolf Coin, has focused its efforts on creating a great opportunity for people who wish to benefit from cryptocurrency trading but are new to this trend. With artificial intelligence and advanced analytical algorithms, the fund arranges the most favorable conditions for individual investors. Mainstream manufacturers, companies, and factories are embracing AI technology to change the mode of their operations. Therefore, it is critical to keep tabs on this reality as it can bring many benefits that cannot be found elsewhere. AI is one of the hottest topics of discussion; however, it is now clear that AI is here to stay. So, people should accept the obvious in order to create the future that they desire. The wisest strategy is to embrace artificial intelligence and let it work to maintain our well-being. The following article was originally published on WiseWolf blog. Don’t want to miss the next story? Subscribe to our newsletter and stay tuned!
GAME-CHANGING TRENDS TO LOOK OUT FOR WITH AI
18,293
unique-trends-to-look-out-for-with-artificial-intelligence-1db3de178463
2018-06-25
2018-06-25 10:14:01
https://medium.com/s/story/unique-trends-to-look-out-for-with-artificial-intelligence-1db3de178463
false
1,672
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
WiseWolf Fund
The WiseWolf Crypto Fund provides an easy way to enter the cryptocurrency market even for non-techies.
7ac96b14a626
WisewolfFund_io
4,408
52
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-28
2017-09-28 02:46:20
2017-09-28
2017-09-28 18:14:43
2
false
en
2017-09-28
2017-09-28 18:14:43
2
1db3ed58c165
2.039937
0
0
0
If you’re new to UCLA, welcome, otherwise, welcome back! This is just a quick post to (re)introduce ACM AI, UCLA’s artificial intelligence…
1
Welcome to AI at UCLA! If you’re new to UCLA, welcome; otherwise, welcome back! This is just a quick post to (re)introduce ACM AI, UCLA’s artificial intelligence and machine learning club. We’re an organization dedicated to teaching the fundamentals of AI and ML to all undergraduates, regardless of major, background, or previous experience. Not only do we teach, but we also hack around on the newest and coolest ideas out there, host reading groups for AI-related textbooks and papers, and engage with the broader community through collaboration with graduate students, professors, and industry leaders. Why this might interest you AI and machine learning have rapidly risen to become the hottest topic in the computing world over the last few years. Whether your goals include getting a solid understanding of machine learning and AI, learning how to apply these concepts to a specific field that interests you, or building an AI to take over the world, we’re here to help. We’ll be taking you from no background in machine learning and AI to having a solid understanding of the field, using the modern tools industry professionals and researchers use, and discussing tips and tricks we’ve learned from our experiences in AI. What we’ve done in the past One of our Tensorflow Workshops In the past, we’ve hosted events and workshops that cover fundamental topics in AI and machine learning, including introductory tutorials on the A* algorithm and a deep dive into linear regression. Last quarter, we hosted an 8-week workshop series on machine learning with Tensorflow, where we delved into some of the most impactful topics in recent years in the field, while learning the most popular library among researchers and industry specialists alike. We’ve also hosted reading groups, where a few of us get together and discuss chapters and topics that interest us. What we plan to do in the future In the future, we hope to keep working towards our goal of educating anyone who’s interested in the fields of AI and machine learning. We also hope to continue with our reading groups, and increase our involvement with the broader AI community. Where you can learn more The best way to stay updated about ACM-AI related happenings is through our Facebook page. If you’re interested in learning some of the basics of machine learning and are not sure where to start, we’ve written a post detailing how to get started in machine learning. As always, you’re encouraged to contact any of the AI officers with any questions, concerns, or suggestions for us! Thanks, and hope to see you at one of our events this quarter!
Welcome to AI at UCLA!
0
welcome-to-ai-at-ucla-1db3ed58c165
2018-06-04
2018-06-04 15:59:16
https://medium.com/s/story/welcome-to-ai-at-ucla-1db3ed58c165
false
439
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Rohan Varma
http://rohanvarma.me
98fc8233a778
rvarm1
36
98
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-01
2017-10-01 19:03:18
2017-10-01
2017-10-01 19:09:36
0
false
en
2017-10-05
2017-10-05 00:38:41
0
1db4e75f9e6b
0.950943
1
0
0
Customer experience is today’s buzzword in the market, and for good reason: the best sales organizations make their living discovering…
5
Is Automation the next Tech Revolution? Customer experience is today’s buzzword in the market, and for good reason: the best sales organizations make their living discovering customers’ pain points and devoting tremendous resources to delivering custom solutions. While businesses continue to throw heaps of money and time at the rolling stone that is today’s customer experience, as margins diminish, organizations need to apply that same acumen to leaning out cumbersome processes and fixed expenses on their balance sheets. Too often, we lose the nimbleness that was so integral in carving out our slice of the marketplace. I remember reading Smith’s ‘Wealth of Nations’ in a survey economics course and learning about the concept of interchangeability. In the late 18th century, a French general realized that by developing muskets with interchangeable parts, the cost of repairing and making these guns would diminish. Henry Ford ultimately maximized margins by adopting this concept with interchangeable laborers. Today, we see automation at the avant-garde of efficiency. The ability to leverage automation bends the cost curve tremendously and frees up your people to focus on high-payoff activities. I’ve experimented with applications that result in doing a day’s work with the click of a mouse. Customer experience will always play a pivotal role in winning wallet share in any sector. In today’s market, where the $50 is the new $5 and salaries are at a premium, automation provides a sleek, low-cost solution that will push the business world into the 21st century.
Is Automation the next Tech Revolution?
1
is-automation-the-next-techrevolution-1db4e75f9e6b
2017-12-03
2017-12-03 19:27:51
https://medium.com/s/story/is-automation-the-next-techrevolution-1db4e75f9e6b
false
252
null
null
null
null
null
null
null
null
null
Sales
sales
Sales
30,953
Jeff Gottlieb
null
ee75fc24718b
JeffGottlieb
34
72
20,181,104
null
null
null
null
null
null
0
# Load pickled data
import pickle
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers import flatten
from sklearn.utils import shuffle

# TODO: Fill this in based on where you saved the training and testing data
training_file = 'traffic-signs-data/train.p'
validation_file = 'traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

# Number of training examples = 34799
# Number of validation examples = 4410
# Number of testing examples = 12630
# Image data shape = (32, 32, 3)
# Number of classes = 43

X_train, y_train = shuffle(X_train, y_train)
X_valid, y_valid = shuffle(X_valid, y_valid)
X_test, y_test = shuffle(X_test, y_test)

# Normalisation: centre each set on its mean and scale by its range
X_train = (X_train - X_train.mean()) / (np.max(X_train) - np.min(X_train))
X_valid = (X_valid - X_valid.mean()) / (np.max(X_valid) - np.min(X_valid))
X_test = (X_test - X_test.mean()) / (np.max(X_test) - np.min(X_test))


def LeNet(x):
    # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean=0, stddev=0.1))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
    # Activation 1.
    conv1 = tf.nn.relu(conv1)
    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Layer 2: Convolutional. Input = 14x14x6. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=0, stddev=0.1))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    # Activation 2.
    conv2 = tf.nn.relu(conv2)
    # Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten. Input = 5x5x16. Output = 400.
    flattened = flatten(conv2)

    # Matrix multiplication (dot product rule):
    # input 1x400, weight 400x120 => output 1x400 * 400x120 => 1x120
    # Layer 3: Fully Connected. Input = 400. Output = 120.
    fullyc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=0, stddev=0.1))
    fullyc1_b = tf.Variable(tf.zeros(120))
    fullyc1 = tf.matmul(flattened, fullyc1_W) + fullyc1_b
    # Fully connected layer activation 1.
    fullyc1 = tf.nn.relu(fullyc1)

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fullyc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=0, stddev=0.1))
    fullyc2_b = tf.Variable(tf.zeros(84))
    fullyc2 = tf.matmul(fullyc1, fullyc2_W) + fullyc2_b
    # Fully connected layer activation 2.
    fullyc2 = tf.nn.relu(fullyc2)

    # Layer 5: Fully Connected. Input = 84. Output = 43.
    fullyc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean=0, stddev=0.1))
    fullyc3_b = tf.Variable(tf.zeros(43))
    logits = tf.matmul(fullyc2, fullyc3_W) + fullyc3_b
    return logits


# Placeholders, one-hot labels, accuracy op and saver were defined elsewhere
# in the original notebook; minimal versions are added here so the snippet
# is self-contained.
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)

learning_rate = 0.001
epochs = 40
batch_size = 64

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_operation = optimizer.minimize(loss_operation)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver1 = tf.train.Saver()


def evaluate(X_data, y_data):
    # Average loss and accuracy over the whole dataset, batch by batch.
    num_examples = len(X_data)
    total_accuracy = 0
    total_loss = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, batch_size):
        batch_x = X_data[offset:offset + batch_size]
        batch_y = y_data[offset:offset + batch_size]
        loss, accuracy = sess.run([loss_operation, accuracy_operation],
                                  feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
        total_loss += (loss * len(batch_x))
    return total_loss / num_examples, total_accuracy / num_examples


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    print("Training...")
    print()
    for i in range(epochs):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, batch_size):
            end = offset + batch_size
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
        valid_loss, valid_accuracy = evaluate(X_valid, y_valid)
        print("Epoch {}, Validation loss = {:.3f}, Validation Accuracy = {:.3f}".format(
            i + 1, valid_loss, valid_accuracy))
        print()
    saver1.save(sess, './classifier')
    print("Model saved")
14
29b90ea3a2db
2018-04-26
2018-04-26 06:21:34
2018-06-06
2018-06-06 02:51:40
6
false
en
2018-08-22
2018-08-22 13:12:21
4
1db4eda67979
5.863208
6
0
0
1. Introduction
5
Traffic sign detection - Udacity’s self-driving car nanodegree - deep learning series 3 1. Introduction The Convolutional Neural Network (CNN) is a powerful tool in computer vision and self-driving cars. Traffic sign detection is one of the major tasks in self-driving, as it tells the decision-making system what sign is in the image. I have done this project as part of Udacity’s self-driving car engineer course, and all the credit goes to them. If you would like to get a brief introduction to CNNs, please visit my previous article in this series. 2. Dataset We have been provided with training, validation, and testing datasets in pickle format. Each dataset contains a number of images and their labels. We can use the code below to load the data from pickle. The pickled data is a dictionary with 4 key/value pairs: 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id. 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image. 'coords' is a list containing tuples, (x1, y1, x2, y2), representing the coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES. Here are summary statistics of the traffic signs data set we have: Example dataset 3. Pre-processing Before starting the training process, the dataset needs basic preprocessing such as normalisation, grayscale conversion etc. I found that normalisation by itself gives very good results and did not use other preprocessing techniques. I reshuffled the data to increase the random nature of the dataset. Then I applied normalisation to make sure the image data has mean zero and equal variance. Images before and after normalisation are displayed here. Before: After: 4. Model architecture The above picture is the architecture of LeNet-5, which is considered one of the first Convolutional Neural Networks (CNNs). We are using LeNet-5 for the traffic sign detection project. Pseudo-code of the architecture is as follows: Input The LeNet architecture accepts a 32x32x3 image as input Architecture Layer 1: Convolutional. The output shape should be 28x28x6. Activation. Your choice of activation function. Pooling. The output shape should be 14x14x6. Layer 2: Convolutional. The output shape should be 10x10x16. Activation. Your choice of activation function. Pooling. The output shape should be 5x5x16. Flatten. Flatten the output shape of the final pooling layer such that it’s 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you. Layer 3: Fully Connected. This should have 120 outputs. Activation. Your choice of activation function. Layer 4: Fully Connected. This should have 84 outputs. Activation. Your choice of activation function. Layer 5: Fully Connected (Logits). This should have 43 outputs. Actual code implementation is below: 5. Train & Evaluate To train the model, I used the following hyperparameters, arrived at after several rounds of trial and error. The LeNet model gives the logits and cross entropy. Then the loss operation gives the error between the actual and predicted results. Finally, the Adam optimiser is used for optimisation. 
The above steps do the forward and backward pass, and doing this in an iterative manner reduces the error over time. The above code is the typical way of running training in Tensorflow. Then the evaluate function below gets called to check the validation accuracy at each epoch. 6. Testing Once the training is done, we can run the model against the test dataset to check the final accuracy (a small inference sketch appears at the end of this post). In this case, the trained model is able to correctly guess 6 of the 6 traffic signs, which gives an accuracy of 100%. The result of the prediction is as follows: The above table indicates that, if we provide an image of a traffic sign, the model can predict it accurately. I have only provided the label in the image columns instead of the image itself due to space constraints. If you would like to see the full code in action, please visit my github repo. If you like my write up, follow me on Github, Linkedin, and/or Medium profile. Deep learning series Deep learning series 1- Intro to deep learning Deep learning series 2 — simple image classification using deep learning Reference Udacity’s self-driving car engineer nanodegree
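For the testing step mentioned above, here is a hedged sketch (not from the original notebook) of restoring the saved './classifier' checkpoint and scoring new images. It assumes the `x` placeholder and `logits` tensor from the training listing, and `new_images` is a hypothetical, already-normalised (N, 32, 32, 3) array of web images.

```python
# Continuing from the graph built in the training listing above.
import tensorflow as tf

softmax = tf.nn.softmax(logits)      # class probabilities per image
top5 = tf.nn.top_k(softmax, k=5)     # five most likely classes per image

with tf.Session() as sess:
    # Restore the weights saved by saver1.save(sess, './classifier').
    tf.train.Saver().restore(sess, './classifier')
    values, indices = sess.run(top5, feed_dict={x: new_images})
    for probs, classes in zip(values, indices):
        # Print the most likely class id and its probability for each image;
        # class ids map to names via signnames.csv.
        print(classes[0], probs[0])
```

Inspecting the full top-5 distribution, rather than just the argmax, is a quick sanity check: a correct prediction with low confidence is worth a closer look even when the headline accuracy is 100%.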
Traffic sign detection - Udacity’s self-driving car nanodegree - deep learning series 3
108
traffic-sign-detection-selefdriving-car-deep-learning-series-3-1db4eda67979
2018-08-22
2018-08-22 13:12:21
https://medium.com/s/story/traffic-sign-detection-selefdriving-car-deep-learning-series-3-1db4eda67979
false
1,302
Let's learn AI together
null
null
null
Intro to Artificial Intelligence
null
intro-to-artificial-intelligence
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,TECHNOLOGY,SELF DRIVING CARS
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Dhanoop Karunakaran
Software Engineer, Deep Learning & Machine Learning Engineer, Self Driving cars nanodegree holder@ Udacity
9d9e487d186
dhanoopkarunakaran
438
168
20,181,104
null
null
null
null
null
null
0
Discharge date : 01/01/2001 : Diagnosis : Fluffy was diagnosed with diabetes
Discharge date : 10/02/2004 : Diagnosis : Rex has an endocrine disorder
Discharge date : 30/03/2016 : Diagnosis : Fido has acute pancreatitis
Discharge date : 01/01/2001 : Diagnosis : differential ddx cushings / hyperadrenocorticism
Discharge date : 10/02/2004 : Diagnosis : pancreatitis????? or maybe foreign body??????? need to sedate
we hope that her lameness will improve
NULL hope that her lameness will improve
we NULL that her lameness will improve
we hope NULL her lameness will improve
we hope that NULL lameness will improve
we hope that her NULL will improve
we hope that her lameness NULL improve
we hope that her lameness will NULL
<h3>diagnosis</h3> <p>ACROMEGALY<p/>
...
5
null
2018-05-16
2018-05-16 12:27:50
2018-05-21
2018-05-21 13:44:19
9
false
en
2018-05-21
2018-05-21 14:33:58
6
1db86722e8c2
7.139623
0
0
0
Method 2 : Hide each token in the input sentence one-by-one and see how this changes the likelihood of the sentence’s class
1
CNN Insights : Hiding input tokens to reveal classification focus. Part 5 of 7 Method 2 : Hide each token in the input sentence one-by-one and see how this changes the likelihood of the sentence’s class Series links Part 1 : Introduction Part 2 : What do convolutional neural networks learn about images? Part 3 : Introduction to our dataset and classification problem Part 4 : Generating text to fit a CNN Part 5 : Hiding input tokens to reveal classification focus Part 6 : Scoring token-sequences by their relevance Part 7 : Series conclusion In a sentence classification task an algorithm is given a sentence as input and it must output the class of the sentence. A trained algorithm might be highly accurate at making these decisions, but rather than learning to make its classification decision from pertinent words in the sentence, it may instead have learned some spurious but correlated words. Spurious but correlated words I want to give some intuition as to how you can get great classification results but still have a classifier that is actually no good at your task. Here are three sentences about three different patients with different diagnoses (ie all these sentences should be classified as TP (true positive), as the patients all have the diseases that are mentioned in their sentences): Our classifier might correctly score all these sentences as being TP, and we might consequently have high confidence that the classifier is good at discriminating between TP (true positive: the patient has the disease referred to) and FP (false positive: the patient doesn’t have the disease referred to). However, we don’t know how the classifier is making its decisions. For instance, the classifier might simply learn something that can be glossed as: if a sentence starts with the tokens ‘Discharge date : [DATE]’ then the sentence is a TP. It might simply ignore the rest of the sentence, including the actual disease reference. What would such a classifier output when given these sentences in which the true class is FP: Since the sentences start with the pattern it has learned, it could simply output TP, as it has learned to ignore the part of the sentence which actually gives it useful information. Free text health records are littered with spurious correlations like the above, where portions of the free text are actually artefacts of how the programmer wrote the free text extraction process to get the data out of the source medical system. Here is an example from our own system: A screen shot of a portion of our electronic medical record. How will this be converted to free text? This screen was converted to the following free text document with HTML markup to convey the original screen headings: <h3>Information for Referring Vet</h3> Referral letter choice not know <h4>Diagnostic Tests</h4> <p>Arthorocentesis left elbow and carpi — unremarkable</p> You should hopefully see that there are plenty of possibilities for spurious correlations to be learned here. The following method can help reveal what a CNN has fitted to: either useful tokens or spurious but correlated ones. Experimental setup Fit the CNN to the training corpus Choose a sentence to analyse. Let m be the number of tokens in this sentence. Record the CNN’s prediction of the true class for the sentence Create a batch of m sentences, each identical to the original except that in the i-th copy the i-th token is blanked out using a padding term, ie ‘NULL’ or ‘STOP’ or zero or whatever you used to pad your sentences. Predict the class of your m sentences. 
Visualise how the CNN’s prediction of the true class changes when each of the m tokens is occluded. We should hope to see that when tokens pertinent to the classification task are hidden, the classification decision also changes. This indicates that the CNN has fitted to relevant features in the sentence rather than spuriously correlated tokens. Example I will work through an example sentence. Here is a sentence from the VetCompass corpus: The disease reference which is the target for the classifier is ‘lameness’. The classifier must determine whether this sentence (1) was written before the patient was diagnosed as lame, (2) was written at the point the patient was diagnosed as lame, (3) was written after the patient was diagnosed as lame, or (4) was written in the notes of a patient who was never diagnosed as lame at any point. For this particular sentence the true label is that the sentence was written in a patient’s notes after the patient was diagnosed as lame. The trained classifier outputs the likelihood of the true class as 93%, ie the classifier correctly predicts the true class. But does it do it for the right reasons? Let’s apply the occlusion method to find out. For this sentence m = 7, as there are 7 tokens in the sentence. The occlusion batch looks like this (NULL represents a blank or meaningless token, ie NULL is the occlusion window): The CNN model predicts the class of each of the 7 batch sentences above. These changes of likelihood can be plotted to visualise what the CNN has fitted to. Here is how to read the sentence plots: The target disease reference tokens that the classifier should classify are shown IN BOLD on the y-axis. All other tokens are shown in lower case. The bars indicate the absolute change in % likelihood of the true class when the token on the y-axis is occluded The true class is shown in the title of the plot From this plot we can observe the following: if the actual disease reference we are classifying (‘lameness’) is hidden from the CNN then the likelihood that the sentence was written after diagnosis plummets by around 40%. This is good; it shows that the CNN has focussed on the disease phrase that we want it to classify. If the modal verb ‘will’ is hidden, then the CNN’s confidence also drops substantially that this patient was diagnosed before the sentence was written (‘will’ indicates a future state). Plausibly, when ‘hope’ is hidden there is less likelihood that the patient was diagnosed. Less plausibly, there is also less likelihood when the subordinating conjunction ‘that’ is hidden, ie if you read the sentence without ‘that’ in it, it doesn’t seem any less likely to me that the patient was diagnosed as lame. Hiding the other tokens ‘we’, ‘her’ and ‘improve’ has minimal effect on the predicted class (less than 5%). For this (cherry-picked) sentence we can see that the CNN is getting the right answer by fitting to the germane tokens. We may quibble with some choices, ie ‘improve’ should be more germane than ‘her’, but by and large it has fitted to relevant tokens. Code (a minimal re-implementation sketch appears at the end of this post) Results It’s hard to quantify the insights that this method gives as it works sentence by sentence, but overall the impression from a few hundred sentences is that our network mostly fits to the most pertinent tokens in the sentence. In the following plots we can see that when any one of the germane tokens is occluded then the likelihood drops substantially:
In the following plot, the CNN seems to under-rate the significance of the differential diagnosis marker ‘ddx’ and instead focuses on an unrelated diagnosis in the differential. The chances that this patient wasn’t diagnosed with hypertension should fall substantially when ‘ddx’ is occluded, not so much when ‘cognitive’ is occluded: For some sentences, it’s hard to make sense of what the CNN is doing. For instance, the chance that this sentence was written after diagnosis swings widely depending on which token in the disease reference (‘skin’ or ‘lesions’) is occluded: if ‘skin’ is occluded the true class likelihood actually increases by 20%, but falls by 15% when ‘lesions’ is hidden: There is some evidence that the CNN sometimes fits to correlated and spurious tokens rather than the most relevant tokens in the sentence. For instance, this HTML markup at the start of the sentence: These HTML tokens were artefacts of the data extraction process which was used to get data out of the clinical system in which the sentence was originally written. Classification decisions taken based on these artefacts would not necessarily generalise to the exports of another clinical system. Conclusion This method is different from the other two in that it can be applied to a particular sentence, giving a user feedback on the algorithm’s decision for that sentence. I can envisage embedding this method into an application to visualise a CNN’s decision; eg a heat-map over the tokens in a sentence. Next post : Part 6 : Scoring token-sequences by their relevance
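For readers who want the mechanics of the occlusion batch in code, here is a minimal sketch of the method described in the experimental setup above, assuming a Keras-style `model.predict` over padded token-id sequences; `model`, `tokens`, `true_class` and `pad_id` are stand-ins for the article's own setup.

```python
import numpy as np

def occlusion_deltas(model, tokens, true_class, pad_id=0):
    # One row per token: copy the sentence m times, then blank the i-th
    # token in the i-th copy with the padding id (the occlusion window).
    base = np.array([tokens])
    batch = np.repeat(base, len(tokens), axis=0)
    for i in range(len(tokens)):
        batch[i, i] = pad_id

    # Predicted class probabilities for each occluded sentence, plus the
    # unoccluded baseline for the true class.
    probs = model.predict(batch)
    baseline = model.predict(base)[0, true_class]

    # Positive delta: hiding the token lowers the true-class likelihood,
    # ie the CNN was relying on that token for its decision.
    return baseline - probs[:, true_class]
```

Plotting the returned deltas as horizontal bars against the tokens reproduces the kind of per-sentence plots shown above, which is also how this could be embedded into an application as a heat-map over the tokens.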
CNN Insights : Hiding input tokens to reveal classification focus. Part 5 of 7
0
cnn-insights-hiding-input-tokens-to-reveal-classification-focus-part-5-of-7-1db86722e8c2
2018-05-21
2018-05-21 14:34:00
https://medium.com/s/story/cnn-insights-hiding-input-tokens-to-reveal-classification-focus-part-5-of-7-1db86722e8c2
false
1,574
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Noel Kennedy
null
6f551e3b1570
noel_kennedy
5
3
20,181,104
null
null
null
null
null
null
0
null
0
2de955844a7a
2018-08-13
2018-08-13 02:39:03
2018-08-13
2018-08-13 02:45:29
1
false
en
2018-08-13
2018-08-13 07:10:30
0
1dba4638bcfc
0.656604
3
0
0
I’ve been enormously fortunate to have stumbled across the reading lists of several brilliant folks, wherein they basically assemble books…
5
Reading List I’ve been enormously fortunate to have stumbled across the reading lists of several brilliant folks, wherein they basically assemble books they have either read or recommend to others. To that effect, I think it’s useful for me to do the same. I’ll no doubt be updating this list periodically and adding to it. I’m constraining it to the books I’ve read in the past few years that I recommend in some shape or form. Be warned: many are technical and related to my interests, mostly Math, Computer Science, Biology, Economics, and the technology industry. How to Solve It by G. Polya The Society of Mind by Marvin Minsky The Book of Why by Judea Pearl
Reading List
101
reading-list-1dba4638bcfc
2018-08-13
2018-08-13 07:10:30
https://medium.com/s/story/reading-list-1dba4638bcfc
false
121
Thoughts on Artificial Intelligence, Computational Biology, and other cool things.
null
null
null
Technomancy
has727@g.harvard.edu
technomancy
COMPUTER SCIENCE,BIOTECHNOLOGY,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,NEUROSCIENCE
harshsikka
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Harsh Sikka
Grad student @Harvard studying Computational Neuroscience and Artificial Intelligence
354ea6b3d6de
HarshSikka
705
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-13
2018-08-13 15:09:35
2018-09-12
2018-09-12 14:40:58
1
false
en
2018-09-12
2018-09-12 14:40:58
5
1dbba3f2a74c
2.173585
3
0
0
As a developer with training in data science, making web applications always seems like a hassle to me. Then, like any other master’s student…
5
Making data driven web apps with Dash Part-1 As a developer with training in data science, making web applications always seems like a hassle to me. Then, like any other master’s student, I applied for a summer internship (at TolaData), and my internship task was to deploy an interactive web app in 2 days. After googling for 2 hours I thought I’d do it with Django (a bad choice; I wasted 3 hours trying various Django third-party apps) or Flask (it’s OK, but still not easy enough to learn in a day); then I landed on the GitHub page of Dash and the rest is history. I developed and deployed the web app for the task in less than 3 hours!! (http://toladata-task-app.herokuapp.com/). I continued to work in Dash during my internship at TolaData, and now that my internship is ending, I decided to share my work in this series of blog posts. The first blog (this one) will contain a basic introduction to Dash, the second one will include more interactive features, and the third one will cover features like optimization, authentication, and deployment of a Dash app, as well as multipage Dash apps. I’ll also be sharing the code of my Dash app in the next posts. So let’s get started with Dash. It’s an open-source tool by Plotly built on top of Flask, and you can use various tools like D3.js, React, Plotly.js etc. with it. I’ll try to show how to make a web app with Dash here, but first let’s learn about the components of Dash. Basically there are two component libraries for a Dash app: Dash core components: contains most of Plotly’s features, like dcc.Dropdown, dcc.Link and dcc.Graph. Dash HTML components: contains the HTML components for the application, like html.Div, html.H1 etc. Now let’s talk about the parts of our Dash app: Layout: “The Dash layout describes what your app will look like and is composed of a set of declarative Dash components”. It acts as an HTML layout for our application; you simply list the elements you want in your application in the layout section. Callbacks: Callbacks are basically I/O functions for our Dash applications: if you want to compute something every time an input is provided, you use a callback. They can be a bit hard to understand at first, but they are very useful. For installation follow the instructions here: https://dash.plot.ly/installation Now let’s make a simple Dash app which takes a row index of a file and outputs that row. As you can see in the sketch below, the app looks a lot like a simple Flask app and it’s written in pure Python (no CSS, HTML or JS). Though we can still add those extra assets in Dash, I’ll be describing them in the next part of this series. Dash is an amazing tool for developing and prototyping data-driven web applications. It’s still quite a young project, so there are a few bugs and missing features, but it’s definitely worth a try. Plus Plotly’s community is super awesome. Resources: Dash user guide: https://dash.plot.ly/ Plotly’s community https://community.plot.ly/ Dash show and tell thread : https://community.plot.ly/t/show-and-tell-community-thread/7554
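Since the original post's listing isn't reproduced here, the following is a minimal sketch of the kind of app described: a layout with an input and an output div, plus one callback that returns the requested row. It assumes a local 'data.csv' (a hypothetical file) and is written against the 2018-era Dash API (dash_core_components / dash_html_components).

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import pandas as pd

df = pd.read_csv('data.csv')
app = dash.Dash(__name__)

# Layout: declare the components the page is made of.
app.layout = html.Div([
    html.H1('Row lookup'),
    dcc.Input(id='row-index', type='number', value=0),
    html.Div(id='row-output'),
])

# Callback: re-run whenever the input value changes and update the div.
@app.callback(Output('row-output', 'children'),
              [Input('row-index', 'value')])
def show_row(i):
    if i is None or not 0 <= i < len(df):
        return 'Index out of range'
    return str(df.iloc[int(i)].to_dict())

if __name__ == '__main__':
    app.run_server(debug=True)
```

The whole app is one Python file: the layout lists what appears on the page, and the callback wires the input's value to the output div, which is exactly the Layout/Callbacks split described above.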
Making data driven web apps with Dash Part-1
13
making-data-driven-web-apps-with-dash-part-1-1dbba3f2a74c
2018-09-12
2018-09-12 14:40:58
https://medium.com/s/story/making-data-driven-web-apps-with-dash-part-1-1dbba3f2a74c
false
523
null
null
null
null
null
null
null
null
null
Data Visualization
data-visualization
Data Visualization
11,755
Prabhant Singh
Masters student @UniTartu, MLSec enthusiast
c8d7c5692c9a
prabhantsingh
66
52
20,181,104
null
null
null
null
null
null
0
null
0
b54d31a2a99a
2018-09-21
2018-09-21 09:05:24
2018-09-21
2018-09-21 09:18:14
3
false
th
2018-09-21
2018-09-21 09:18:14
0
1dbbf2c97ec3
1.380189
0
0
0
Huawei Technologies (Thailand) announced the launch of its public cloud service in Thailand at the opening press conference for “Huawei Cloud Thailand”…
5
Huawei launches public cloud services in Thailand: “Huawei Cloud Thailand” Huawei Technologies (Thailand) announced the launch of its public cloud service in Thailand at the opening press conference for “Huawei Cloud Thailand”, held at the IMPACT Exhibition and Convention Center, Muang Thong Thani, alongside the “Digital Thailand Big Bang 2018” event. The ceremony was attended by Dr. Pichet Durongkaveroj, Minister of Digital Economy and Society; Mr. Narit Therdsteerasukdi, Deputy Secretary-General of the Board of Investment; Mr. Zheng Yelai, President of Huawei’s Cloud Business Unit; Mr. James Wu, President of Huawei Southeast Asia; and Mr. Soler Sun, Head of Huawei’s cloud business in Thailand. Huawei has been granted a license to operate cloud services in Thailand by the Board of Investment (BOI), making it the first global technology company to offer cloud services in Thailand with in-country infrastructure that supports access to its worldwide network. Mr. Zheng Yelai, President of Huawei’s Cloud Business Unit, said: “The launch of Huawei’s cloud service in Thailand is an important milestone for us. Huawei believes that this new data center will help drive the Thailand 4.0 policy using modern technologies in cloud computing (Cloud Computing), big data (Big Data) and artificial intelligence (AI). Applying the technologies available on our platform will help companies of all sizes, across industries, expand regionally and globally more efficiently. Huawei will continue to help Thailand advance by sharing the ICT knowledge and expertise it has cultivated over more than three decades. With the support of a professional local technical team, Huawei is fully ready to deliver world-class, reliable cloud services to customers in Thailand.” At the Huawei Cloud Thailand event, Huawei introduced a range of cloud technologies, such as deep learning (Deep Learning), image recognition (Image Recognition) and AI, and demonstrated several successful case studies. For example, one Huawei Cloud demonstration showed how its cloud platform helps local authorities manage traffic signals at intersections, with measures that effectively control congestion: the management solution intelligently coordinates traffic-light timing, substantially easing traffic on the roads. Huawei is committed to building an open and innovative public cloud platform, as well as a high-quality, affordable AI platform, and is dedicated to providing reliable, trustworthy, secure and continuously upgraded cloud services to customers worldwide. Huawei has delivered cloud technologies and platforms to many customers, including Peugeot SA (PSA), Banco Santander and the European Organization for Nuclear Research (CERN). Huawei’s open ecosystem comprises more than 6,000 partner companies working with Huawei Cloud to provide end-to-end solutions, from processing chips and Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and industry-specific applications. In 2019, global spending on cloud services is expected to exceed spending on traditional IT, and by 2021 the market for cloud services in Thailand alone, including cloud-based hardware, software and services, is estimated to reach 48 billion baht, according to an IDC report. Huawei Cloud has also partnered with Thai companies such as Stream IT, CallVoice and BeTimes to build turnkey applications for enterprise customers, focusing on government, healthcare, e-commerce and more. Mr. 
เจมส์ อู๋ ประธานบริหาร หัวเว่ย เอเชียตะวันออกเฉียงใต้ ได้กล่าวในงานเปิดตัวบริการคลาวด์ว่า “หัวเว่ยเชื่อว่า นี่เป็นช่วงเวลาที่เหมาะสมในการนำเสนอบริการคลาวด์สาธารณะในประเทศไทย อันแสดงถึงความมุ่งมั่นในพันธกิจของเราที่มีต่อการทำงานร่วมกันเพื่อบรรลุเป้าหมายตามนโยบาย Thailand 4.0 บริการคลาวด์ของเราสามารถช่วยให้หน่วยงานและองค์กรต่างๆ มีความคล่องตัวในการดำเนินธุรกิจมากขึ้น เร่งผลักดันนวัตกรรมและเพิ่มความปลอดภัย และด้วยเครือข่ายคลาวด์ของเราที่มีอยู่ทั่วโลก ทั้งในยุโรป ละตินอเมริกา รัสเซีย แอฟริกาใต้ และจีน ทำให้เราสามารถเชื่อมโยงองค์กรธุรกิจต่างๆ เข้าสู่ตลาดโลก เราจะยังคงนำเสนอเทคโนโลยีคลาวด์ล่าสุด โดยใช้ทรัพยากรดาต้าเซ็นเตอร์ที่ตั้งอยู่ในประเทศไทย พร้อมทีมงานผู้เชี่ยวชาญด้านเทคนิคและการค้า พร้อมพันธมิตรอีกมากมาย หัวเว่ย คลาวด์ มีความพร้อมที่จะช่วยให้ลูกค้าสามารถลดค่าใช้จ่ายด้านไอทีได้อย่างมาก และขยายธุรกิจของพวกเขาได้อย่างรวดเร็ว” จากประสบการณ์ด้านโครงสร้างพื้นฐานไอซีทีที่ยาวนานถึง 30 ปี บวกกับประสบการณ์ด้านการค้นคว้าวิจัยด้านการประมวลผลแบบคลาวด์ ในช่วง 10 ปีที่ผ่านมา หัวเว่ยจึงสามารถจัดหาโซลูชั่นที่ครบวงจรให้แก่องค์กรทุกขนาด เพื่อเตรียมรับมือกับความท้าทายในการเปลี่ยนผ่านสู่ระบบดิจิทัลและคลาวด์ พร้อมช่วยให้องค์กรเหล่านี้ขยายธุรกิจของตนได้ต่อไป
Huawei launches public cloud service in Thailand: "Huawei Cloud Thailand"
0
หัวเว่ยเปิดตัว-บริการคลาวด์สาธารณะ-ในประเทศไทย-หัวเว่ย-คลาวด์-ไทยแลนด์-1dbbf2c97ec3
2018-09-21
2018-09-21 09:18:15
https://medium.com/s/story/หัวเว่ยเปิดตัว-บริการคลาวด์สาธารณะ-ในประเทศไทย-หัวเว่ย-คลาวด์-ไทยแลนด์-1dbbf2c97ec3
false
220
Enterprise IT Knowledge for IT Community
null
enterpriseitpro
null
Enterpriseitpro
suwaschai@enterpriseitpro.net
enterpriseitpro
null
Suwaschai_ITPro
Huawei
huawei
Huawei
1,229
Dearraya Naja
null
d40e6591ecfa
dearrayanaja
22
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-12
2017-09-12 14:59:51
2017-09-12
2017-09-12 19:36:08
0
false
en
2017-09-12
2017-09-12 19:36:08
5
1dbc5fc2a4b3
3.233962
5
0
0
Originally published at www.redventures.com on September 12, 2017.
5
Smile and Say “Big Data:” An Industry Snapshot Originally published at www.redventures.com on September 12, 2017. We’re just going to say it: The tech scene here in Charlotte is booming. In a few short years, the Queen City has emerged as a national leader in analytics and big data (and sometimes pro football). Here at RV, we’re working harder than ever to push the needle forward on the first two fronts. Weird coincidence, right? (Yeah, not really.) We put data at the center of everything we do, which means we’re constantly learning and improving. It also means our data science team has uncovered some truly brilliant insights into this fast-growing, constantly-changing, impossible-to-predict space. And they’re sharing some of those with you. Here are some data science best practices from the Red Ventures team — and your chance to adopt them* before 2018. *the best practices, not the team 1. ORGANIZE YOUR DATA When it comes to organizing data, efficiency should always be your end goal. (We know, easier said than done.) To get there, you’ll need to make sure your data strategy reflects your business strategy. Since no two business strategies look the same, we can’t tell you exactly what your journey should look like. But we can tell you what we did. Our most important assets are our people. So, our first initiative was to merge our Data Team (the ones who produce the data) and Data Science Team (the ones who crunch the numbers). Because as they say: the team that works together, wins together. 2. INVEST IN RESEARCH Totally official science fact: innovation requires constant experimentation. Sorry, Bill. Not that kind of experiment. To become a data science big shot, you’ve got to be bold. You’ve got to push the limits of what you know. Around here, we do that by constantly running different models and testing different algorithms. If we could, we’d run experiments every second of every day. But, as that’d be both super-expensive and against the constraints of the time-space continuum, we make the most of what we’ve got. How? By making our experiments cost effective. More on that in a minute. 3. GET YOUR HEAD IN THE CLOUD For most data-driven experiments, having better hardware (and less environmental maintenance) allows for quicker iteration. Quicker iteration leads to better learning. When someone says training neural nets on a local 4-core CPU is the same as training on a beefed-up, GPU-supported AWS machine. A cloud-based environment offers just that: faster, cheaper ways to test — and a higher probability for success. Given the crazy computing power and the minimal maintenance work required on cloud platforms, we think cloud will emerge as the next OS for data science. Internet giants like Amazon, Google, and Microsoft all provide cloud services — and they make big bets in developing analytics and machine learning platforms. As big data becomes more accessible, you’ll see startups and small companies investing in these new platforms rather than new tech. 4. BUY SIMPLE ANALYTICS SOLUTIONS When it comes to analytics solutions, you’ll come across this question pretty often: is it better to build ’em or buy ’em? Most out-of-the-box data science platforms provide advanced functionalities like clustering, classification, and regression. Sounds like one powerful toolbox, right? Yep. But, sometimes power isn’t all you need. Too. Much. Power. With a well-structured dataset and a clear sense of what to do with it, pre-made platforms produce useful results, fast.
However, when you’re working with a complex business problem, diverse datasets, or a multi-faceted goal in mind… it’s better to start with a blank slate. 5. BUILD COMPLEX SOLUTIONS YOURSELF Our tech team often faces challenges that require multiple rounds of strategizing, prototyping, and refinement before we’re ready for production. That’s because our data science solutions need to be integrated with our own tech stack, tailored to our overall business strategy, and maintained within our own expertise. Building analytics solutions in-house allows us to create custom-fit models that are integrated with our business strategy and technology stack from the start. With a solid “made-for-RV” foundation, we have control throughout the whole process — that makes it easier to augment operations for our different verticals. 6. BEND THE CURVE Perhaps the most important (and enduring) trend in Data Science can be summed up through a single, elegant catchphrase: “bend the curve.” In case you aren’t aware of graphs and their intricacies, this concept is all about changing the conditions of a problem to improve the output over time. We’ll attempt to illustrate with a gif: Whoa, trending up! At Red Ventures, we’re constantly re-thinking our inputs. It’s how we avoid complacency. It’s why we’re able to bring advanced digital platforms to many different industries. Most importantly, it opens the door to a whole new world of data. And you know what you get with more inputs? A higher probability for successful outputs. *Math drop.* Did you already know everything you read? Okay, genius. We like your style. Learn something new about our award-winning data science team and then check out tech positions we’re hiring for RIGHT NOW. (…unless you already know about those, too. In that case, go ahead and add “ESP” to the skills section of your resume.)
Smile and Say “Big Data:” An Industry Snapshot
73
smile-and-say-big-data-an-industry-snapshot-1dbc5fc2a4b3
2018-05-23
2018-05-23 02:39:52
https://medium.com/s/story/smile-and-say-big-data-an-industry-snapshot-1dbc5fc2a4b3
false
857
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Red Ventures
We’re a leading digital consumer choice platform and a company stacked with brilliant people. This is where they share a bit of what they know.
b33f19f5bb7f
rvcreative
555
185
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-08
2018-05-08 12:55:00
2018-05-08
2018-05-08 13:19:52
3
false
en
2018-05-08
2018-05-08 13:19:52
0
1dbcb318bbf5
1.391509
2
0
0
On 1st May 2018, the UAE Developer Experience hosted an IBM Watson workshop at Amity University Dubai. This was a part of ATOM 2018, which…
4
IBM @ ATOM 2018 On 1st May 2018, the UAE Developer Experience hosted an IBM Watson workshop at Amity University Dubai. This was a part of ATOM 2018, which is an annual, week-long techno-festival at the university. The main aim of the workshop was to introduce the students to the IBM Cloud platform and demonstrate how to create a simple voice-enabled chatbot using IBM Watson services. I started the workshop by giving an introduction to IBM Cloud and an overview of the various services the platform has to offer. Next, I spoke about the main elements required to create a chatbot and the best practices that should be followed in order to create an effective and efficient chatbot. Lastly, I conducted the hands-on session, where I walked the students through creating a voice-enabled Android chatbot. This included first showing them how to train the bot using the Watson Assistant service. Additionally, I showed them how to create instances of the Watson Text to Speech and Speech to Text services in order to make it a voice-enabled bot. Finally, the services were put together in an Android application at the end of the activity to create a functional Android chatbot. Thanks to the Developer Advocacy team, Naiyarah Hussain and Kunal Malhotra, for helping run the workshop.
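For readers who want to reproduce the flow at home, here is a hedged sketch in Python assuming the ibm-watson SDK; the API key, workspace ID and file names are placeholders, not the exact workshop code.

from ibm_watson import AssistantV1, SpeechToTextV1, TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

auth = IAMAuthenticator("YOUR_API_KEY")  # placeholder credentials

# 1) Speech to Text: turn the user's recorded question into text.
stt = SpeechToTextV1(authenticator=auth)
with open("question.wav", "rb") as audio:
    result = stt.recognize(audio=audio, content_type="audio/wav").get_result()
text = result["results"][0]["alternatives"][0]["transcript"]

# 2) Watson Assistant: send the transcribed text to the trained bot.
assistant = AssistantV1(version="2018-02-16", authenticator=auth)
reply = assistant.message(workspace_id="YOUR_WORKSPACE_ID",
                          input={"text": text}).get_result()

# 3) Text to Speech: speak the bot's reply back to the user.
tts = TextToSpeechV1(authenticator=auth)
with open("reply.wav", "wb") as out:
    out.write(tts.synthesize(reply["output"]["text"][0],
                             accept="audio/wav").get_result().content)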
IBM @ ATOM 2018
3
ibm-atom-2018-1dbcb318bbf5
2018-05-09
2018-05-09 17:06:20
https://medium.com/s/story/ibm-atom-2018-1dbcb318bbf5
false
223
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mehak Manwani
IBM ☁️
f4680451147e
Mehak.Manwani
5
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-24
2018-01-24 16:56:39
2018-01-25
2018-01-25 09:36:40
1
false
en
2018-01-25
2018-01-25 09:36:40
13
1dbccb3a2916
4.060377
6
1
0
What programming language do you use to develop Machine Learning (ML) and Artificial Intelligence (AI) systems? This is one of the most…
3
On the importance of DSLs in ML and AI What programming language do you use to develop Machine Learning (ML) and Artificial Intelligence (AI) systems? This is one of the most frequently asked questions about my work. The short answer: a mix of Scala, Python and F# The long answer: DSLs are a hot topic and play a crucial role in many of the tasks Machine Learning (ML) and Artificial Intelligence (AI) systems need to tackle. 1 ) Business Logic DSLs are a powerful tool to express business logic concisely. Example: Trading Snippet source, read more (Gosh2010, Frankau2009) At the same time, ML and AI systems do not come set in stone. The underlying models reflect business and working hypotheses that might change over time Sensitivity analysis should not only be performed against model (hyper)parameters but also against business and working assumptions DSLs come in handy to fluently express complex business and working assumptions, in a language that reads like English. The example below, coded in AMPL, effectively describes an optimization problem: we’d like to minimize the transportation costs related to the shipment of products to the clients of a fictional paint company. The key assumptions (what we want to optimize, shipping costs, product availability at warehouses, product demand at each client) are clearly stated in a language that closely resembles the problem at hand. Snippet source, read more (Takriti1994) 2 ) Mathematics Once the problem has been formulated, it is then time to write some mathematics. DSLs for statistics and mathematics have existed for decades: Matlab and R are extremely popular in the scientific community. DSLs for statistics and mathematics embedded in general-purpose programming languages are enjoying increasing attention. Example: Probabilistic programming (from Python) Snippet source, read more: (Patil2010) 3 ) Querying (your database) and data manipulation DSLs are handy to write queries and manipulate data in a compact and expressive language that smoothly integrates into your host programming environment. Query languages, such as Linq, also retain properties of the host language that you might find desirable, such as type safety. Example: LINQ and F# Query expressions LINQ and F# Query expressions look like our good, old, familiar SQL (Cheney2013) Snippet source Example: Data Frames Data Frames, such as Pandas, provide syntactic sugar to perform data wrangling tasks such as split-apply-combine and pivots (McKinney2010) 4) Under the hood: expressing computations TensorFlow could be considered a programming system and runtime, not just a “library” in the traditional sense: TensorFlow’s graph even supports constructs like variable scoping and control flow — but rather than using Python syntax, you manipulate these constructs through an API. (Innes2017) TensorFlow and similar tools present themselves as “just libraries”, but they are extremely unusual ones. Most libraries provide a simple set of functions and data structures, not an entirely new programming system and runtime. (Innes2017) Why do we need a language to express computations?
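Before that question is answered below, here is a quick taste of what "manipulating constructs through an API" means in practice: a minimal, illustrative sketch in TensorFlow 1.x style (the numbers are made up), where a conditional is built as a graph node rather than written with Python's if.

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[])

# tf.cond adds a conditional *node* to the dataflow graph; the Python
# lambdas only describe the two branches, they do not execute here.
y = tf.cond(x > 0, lambda: x * 2.0, lambda: x - 1.0)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))   # 6.0
    print(sess.run(y, feed_dict={x: -3.0}))  # -4.0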
A glimpse at Apache Spark’s internals helps one understand the need for a domain-specific language to reason about computations: Spark runs, under the hood, a complex execution procedure that comprises several steps: the definition of a dataflow (logical plan), the definition of a DAG describing tasks and their execution (physical plan), job scheduling, job execution with fault tolerance (Read more) The core reason for building new languages is simple: ML research has extremely high computational demands, and simplifying the modelling language makes it easier to add domain-specific optimizations and features (Innes2017) That’s not all. Model complexity is growing exponentially. The work on DSLs that allow us to represent, reason about and analyze computations is currently extremely hot. models are becoming increasingly like programs, including ones that reason about other programs (e.g. program generators and interpreters), and with non-differentiable components like Monte Carlo Tree Search. It’s enormously challenging to build runtimes that provide complete flexibility while achieving top performance, but increasingly the most powerful models and groundbreaking results need both. (Innes2017) When we look specifically at Deep Learning: An increasingly large number of people are defining the network procedurally in a data-dependent way (with loops and conditionals), allowing them to change dynamically as a function of the input data fed to them. It’s really very much like a regular program, except it’s parameterized, automatically differentiated, and trainable/optimizable. Dynamic networks have become increasingly popular (particularly for NLP), thanks to deep learning frameworks that can handle them such as PyTorch and Chainer (LeCun2017) This leads some to argue for the birth of a new programming framework out of DSLs specifically designed to express computations. Coming from a deep learning background, Andrej Karpathy wrote: Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent (Karpathy2017) Conclusions Expertise in DSLs is mission-critical in ML and AI systems. Acknowledgement Many thanks to Andreas Hubert and Cristian Steinert for providing feedback. References [Cheney2013] Cheney, J., Lindley, S., & Wadler, P. (2013). A practical theory of language-integrated query. ACM SIGPLAN Notices, 48(9), 403–416. [Frankau2009] S. Frankau, D. Spinellis, N. Nassuphis and C. Burgard, Commercial uses: Going functional on exotic trades. Journal of Functional Programming, 19(1), 27–45.
doi:10.1017/S0956796808007016, 2009 [Gosh2010] DSLs in Action, Debasish Ghosh, Manning Publications, November 2010 [Innes2017] On Machine Learning and Programming Languages, Innes et al., https://julialang.org/blog/2017/12/ml&pl, 2017 [Karpathy2017] Karpathy, A., https://medium.com/@karpathy/software-2-0-a64152b37c35, 2017 [LeCun2017] LeCun, Y., https://www.facebook.com/yann.lecun/posts/10155003011462143, 2017 [McKinney2010] McKinney, W. (2010, June). Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference (Vol. 445, pp. 51–56). Austin, TX: SciPy. [Patil2010] Patil, A., D. Huard and C.J. Fonnesbeck. (2010) PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1–81 [Takriti1994] Takriti, S. (1994). Interfaces, 24(3), 144–146. Retrieved from http://www.jstor.org/stable/25061891
On the importance of DSLs in ML and AI
46
on-the-importance-of-dsls-in-ml-and-ai-1dbccb3a2916
2018-05-29
2018-05-29 07:25:17
https://medium.com/s/story/on-the-importance-of-dsls-in-ml-and-ai-1dbccb3a2916
false
1,023
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Mattia Ferrini
I am the technical lead of the KPMG AI Labs. My interests: obviously AI and Machine Learning; but also Programming Languages (F#, Python and Haskell)
a306dec16ab4
mattia.cd.ferrini
57
133
20,181,104
null
null
null
null
null
null
0
null
0
9476a924ad57
2018-08-09
2018-08-09 07:23:31
2018-08-09
2018-08-09 07:34:41
1
false
en
2018-08-09
2018-08-09 07:42:07
16
1dbd08c4647c
5.226415
0
0
0
Spark Summit Europe took place in Brussels, Belgium just about a week ago. I had the pleasure of being there for the conference days, where I…
5
Spark Summit Europe 2016 review Spark Summit Europe took place in Brussels, Belgium just about a week ago. I had the pleasure of being there for the conference days, where I mostly attended the Data Science track, as this is our bread and butter at Semantive. This summit could be summarized in a couple of words: Spark 2.0 and Streaming, as these were the hottest topics across the sessions. The conference opened with a keynote from Matei Zaharia about Spark 2.0 features and how fast it is thanks to the Catalyst Optimizer, which can greatly optimize all transformations, but only if they are done using SQL, DataFrames or Datasets. RDDs do not benefit from these optimizations because you can execute arbitrary code in your functions, so Spark has no idea what is going on; whereas if you use DataFrames and you filter, Spark knows about the filter operation and can use that knowledge to optimize execution. The first keynote ended with Greg Owen showing a great demo of Spark Streaming, which processed tweets related to Brexit and performed sentiment analysis on each tweet. We looked at the popularity of tweets mentioning terms such as #theresa, #boris and, surprisingly, #marmite (a dark savory spread made from yeast extract and vegetable extract), which is loved by Brits. Marmite indeed had positive sentiment and happened to be more popular than expected in relation to Brexit tweets. What’s interesting about the demo is that the sentiment analysis was done simply in a scikit-learn pipeline trained prior to the demo. This mix of scikit-learn and Spark is a nice combination: you have a trained model which you just apply to huge amounts of incoming data. In the next session, Ion Stoica talked about the history of Spark and the future, which is all about streaming, real-time decisions and security of data. The projects that are supposed to deliver this technology are Drizzle and Opaque. More info about it here. The sessions I liked most were: Making the switch: predictive maintenance on railway switches Vegas, the Missing MatPlotLib for Spark OrderedRDD: A distributed time series analysis framework for Spark Apache Spark 2.0 Performance Improvements Investigated With Flame Graphs Extreme-scale Ad-Tech using Spark and Databricks at MediaMath Prediction as a Service with Ensemble Model Trained in SparkML on 1 billion Observed Flight Prices Daily And here are my key takeaways from each one: Making the Switch Chris Pool and Jeroen Vlek from Anchorman talked about their recent project for a Dutch railway subcontractor, Strukton. The thing is, you need to keep the switches that move rails working. Otherwise, you get delays and cancellations of trains. The Dutch railway has a reward-fine policy: a subcontractor gets a reward when it’s all working fine, but gets a fine if there are malfunctions. In the end, it’s better to replace or repair a potentially malfunctioning switch than to pay a fine. Anchorman had data about switches gathered from sensors. They were supposed to use that data to predict switch failures, which happen once or twice a year. They came up with a simple model, but not a simplistic one, because the client wanted to actually understand all the features, so they used really simple features representing the time series of a flip, such as min, max, average and length for each segment in the curve of a flip. The model was also supposed to be understandable, so they used a decision tree model, which is explainable and can be visualized. They used Spark for all data preparation, model training, and running predictions.
They normalized data using a sliding window so that seasonality is not discarded. The whole project right now is just MS-SQL and Spark. Vegas, the Missing MatPlotLib for Spark The guys from Netflix presented Vegas, a declarative plotting library made for Scala, so most of the goodness comes out of the box. They based their solution on Vega, which is a JS plotting library. Vegas can be used from a Zeppelin notebook or a Scala console. It has built-in time-series support and, of course, Spark support. The charts look good, so just take a look at their docs! OrderedRDD: A distributed time series analysis framework for Spark This was a funny one, presented by Larisa Sawyer from Two Sigma Investments. She presented Flint, a library for working with time-series data which allows you to do temporal joins effectively. A temporal join is a join which just matches criteria over time, and it can look forward or look backward to find a match. Compared to spark-ts, Flint can work with time series which do not fit on a single machine, and it has support for streaming. TimeSeriesRDD, a new RDD that comes with Flint, preserves temporal order, so after temporal joins the data stays in order and sorts and shuffles are not required. This yields a 50–100x speedup compared to just working with standard RDDs and time-series data. And what is funny about that? Well, it’s just how Larisa presented it and made jokes about trading based on Anne Hathaway’s tweets. Apache Spark 2.0 Performance Improvements Investigated With Flame Graphs This reminded me of my time at CERN. Luca Canali presented how flame graphs can help you investigate where CPU cycles are consumed. This technique requires some experience to know what to look at, but if you have it, then you can understand what is going on. Take a look at Brendan Gregg’s resources on flame graphs to learn more. Extreme-scale Ad-Tech using Spark and Databricks at MediaMath Prasad Chalasani presented a truly impressive use of Spark for optimized bidding on ad campaigns, which maximizes impressions such as ad clicks or page visits. They have massive amounts of data, as there are over 200 billion daily ad-opportunities, 1 billion users and millions of features; this all adds up to terabytes of data a day. Prediction as a Service with Ensemble Model Trained in SparkML on 1 billion Observed Flight Prices Daily Another large-scale presentation was given by Josef Habdank from Infare Solutions. Josef talked about predicting airfares. I want to briefly summarize his solution: There are billions of time series and we need to predict what will come next in each of them. Given such a variety of time series, it is better to cluster them into tens of thousands of clusters and train a prediction model for each cluster. However, clustering time series is a problem in itself, because each series has many features and many data points in time. To solve this problem, Josef first applied a linear regression model to each time series and used the coefficients of each model as features for clustering. This way he had a small number of features in the clustering algorithm. He used a Gaussian Mixture Model for the clustering. In the end, clustering segmented the space into smaller subspaces. The next challenge is to train models for all subspaces in parallel. Josef said that simple models such as Linear Regression worked well. He also added extra features to the time series, such as the average price between all connections on a particular route like London-Berlin.
The key piece of Spark functionality that allowed training all models at the same time is collect_list, which merges millions of rows into a single row with millions of values; a UDF that performs the model training is then applied to that row (see the sketch below). The elegance of the solution is brilliant. Conclusion It was a great conference and I’ve learned quite a bit. I had a quick tour of Brussels, with its must-eat fries and waffles. Spark 2.0 is a hit, delivering on its promise of being fast with all the optimizations that come from the Catalyst optimizer and project Tungsten. We will have to wait a bit till streaming connectors arrive to fully utilize structured streaming and its performance gains, but that should happen by the end of the year. Merci beaucoup Databricks! Originally published at semantive.com on November 4, 2016.
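The collect_list pattern above is easy to try yourself. Here is a hedged sketch (the input file, column names and the linear model are assumptions, not Infare's actual code): one model is fitted per cluster, and Spark runs all the fits in parallel.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import ArrayType, DoubleType
import numpy as np

spark = SparkSession.builder.getOrCreate()
prices = spark.read.parquet("prices.parquet")  # hypothetical input

@F.udf(returnType=ArrayType(DoubleType()))
def fit_linear(series):
    # Fit a simple linear model to one cluster's series; return coefficients.
    xs = np.arange(len(series))
    slope, intercept = np.polyfit(xs, series, deg=1)
    return [float(slope), float(intercept)]

models = (prices
          .groupBy("cluster_id")
          .agg(F.collect_list("price").alias("series"))
          .withColumn("model", fit_linear("series")))

Each group becomes a single row holding the whole series, so the per-cluster training runs as an ordinary column expression.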
Spark Summit Europe 2016 review
0
spark-summit-europe-2016-review-1dbd08c4647c
2018-08-09
2018-08-09 07:42:07
https://medium.com/s/story/spark-summit-europe-2016-review-1dbd08c4647c
false
1,332
Big data and data science services to make you a data-driven organization.
null
semantive
null
semantive
contact@semantive.com
semantive
BIG DATA ANALYTICS,DATA SCIENCE,ARTIFICIAL INTELLIGENCE,BIG DATA
semantive
Spark
spark
Spark
1,375
Marek Lewandowski
CTO @ Semantive, a Big Data & Data Science consulting company.
7e4ce51f39e5
marek.lewandowski
0
6
20,181,104
null
null
null
null
null
null
0
null
0
7e157a8bdf41
2018-02-20
2018-02-20 03:12:57
2018-02-20
2018-02-20 12:41:01
4
false
en
2018-03-24
2018-03-24 03:13:48
12
1dbd953bf7e1
6.409434
7
0
0
Recently there has been a lot of concern, and press, around research indicating ethnic bias in Face Recognition.
5
Ending Racial Biases in Face Recognition AI Recently there has been a lot of concern, and press, around research indicating ethnic bias in Face Recognition. This resonates with me very personally as a minority founder in the face recognition space. So deeply, in fact, that I wrote about my thoughts in an October 2016 article titled “Kairos’ Commitment to Your Privacy and Facial Recognition Regulations”, wherein I acknowledged the impact of the problem and expressed Kairos’ position on the importance of rectification. I felt then, and feel now, that it is our responsibility as a Face Recognition provider to respond to this research and begin working together as an industry to eliminate disparities. Because when people become distrustful of technology that can positively impact global culture, everyone involved has a duty to pay attention. And take action. What’s happening? Joy Buolamwini of the M.I.T. Media Lab has released research [1] on what she calls “the coded gaze”, or algorithmic bias. Her findings indicate gender and skin-type biases in commercial face analysis software. The extent of these biases is reflected in an error rate of 0.8 percent for light-skinned men, and as high as 34.7 percent for dark-skinned women. Specifically, Face Recognition algorithms made by Microsoft, IBM and Face++ (the three commercial systems on which Buolamwini tested) were more likely to misidentify the gender of black women than white men. When such biases exist, there can be far-reaching implications. In my 2016 article I cited that The Center for Privacy & Technology at Georgetown University’s law school found that over 117 million American adults are affected by our government’s use of face recognition, with most of the affected American adults being African Americans. For law enforcement systems relying on Face Recognition to identify suspects using mug shot databases, accuracy, particularly in the case of dark-skinned people, can mean the difference between disproportionate arrest rates and civil equality. Pilot Parliaments Benchmark (PPB) consists of 1,270 individuals from three African countries (Rwanda, Senegal, and South Africa) and three European countries (Iceland, Finland, and Sweden), selected for gender parity in the national parliaments. Credit: Joy Buolamwini Sakira Cook, counsel at the Leadership Conference on Civil and Human Rights, points out that “The problem is not the technologies themselves but the underlying bias in how and where they are deployed”. Certainly, the biases that exist in law enforcement predate Face Recognition technology, yet when the systems themselves are unintentionally biased due to improperly trained algorithms, the combination can be quite damaging. And as industries like Marketing, Banking, and Healthcare integrate Face Recognition into their decision-making processes based on demographic insights around consumer preference, lending practices, and patient satisfaction, it is crucial that the systems delivering the metrics used to generate these insights be as precise as possible. Solving this problem will have a ripple effect on the technology The immediate importance of rectifying this problem is obvious. Yet there is a secondary, and remarkably important, ramification which hinges on the resolution of these biases.
Public sentiment around Face Recognition Hate it or love it, AI- and machine-learning-driven Face Recognition and Human Analytics are becoming the standard for establishing metrics to be used for insights across industries around the world. The absolute need to have honest dialogue around the troubling inefficiency of algorithms to properly identify women/people of color, anyone, is the exact catalyst needed to effectuate change. This is so critical because the positive, culture-changing effect that Face Recognition will have globally is profound beyond practical application. As mainstream adoption of the technology for business and personal use, like the Apple iPhone X, is being fast-tracked, the subsequent social response, as in the discussion of bias we are now having, is creating a global framework for 21st-century ethnic classification expectations and standards. At Kairos, we were made very aware of this evolution after the release of our Diversity & Ethnicity app. “User response has ranged from amazement and praise, to displeasure and offense. And we totally understand why. While most users will get a spot-on result, we acknowledge that the ethnicity classifiers currently offered (Black, White, Asian, Hispanic, ‘Other’) fall short of representing the richly diverse and rapidly evolving tapestry of culture and race.”— Diversity Gone Viral, 2017 Our Ethnicity & Diversity app was used by millions of curious people from all parts of the world. We highlighted some of what we learned from them in “Diversity Gone Viral”. The information users shared with us ranged from praise and extreme critique of our app, to letting us know how they identify and prefer to be addressed in terms of ethnic classification. It was amazing. Example result from the Kairos Diversity Recognition app. At Kairos, we LIVE for this kind of feedback and use it to improve our product, thereby improving public sentiment around a technology that is still in the process of earning trust. I would also like to be clear that we don’t find it any less concerning when Face Recognition fails to identify a man of mixed Asian and European descent, or of ANY ethnicity, than when it fails to identify a woman of African descent. In any case of misidentification, the person expecting an accurate result may be left feeling offended and, perhaps more troubling, misclassified in a database. What can be done? Fortunately, the matter of algorithmic ethnic bias, or “the coded gaze” as Buolamwini calls it, can be corrected with a lot of cooperative effort and patience while the AI learns. If you think of machine learning in terms of teaching a child, then consider that you cannot reasonably expect a child to recognize something or someone it has never or seldom seen. Similarly, in the case of algorithmic ethnic bias, the system can only be as diverse in its recognition of ethnicities as the catalogue of photos on which it has been trained. That said, offering the algorithms a much more diverse, expansive selection of images depicting dark-skinned women, various other shades of color, and individuals identifying as “mixed” (which includes MANY ethnicities) will close the gap found in the M.I.T. Media Lab research. “…every tool ever invented is a mixed blessing. How things will balance out is a matter of vigilance, moral courage, and the distribution of power.” — ‘The Cult of Information’, by Theodore Roszak, Author and Professor.
Trust the process While we are having the conversation about the shortcomings and necessary improvements in Face Recognition, I feel it’s important to remember that this technology is relatively new and ever evolving and improving in terms of expansion into areas like ethnicity identification. Not to mention, there is a margin for error in any system. Including the human eye! Improve the data Why not implement a “bias standard” by requiring a proactive approach to gathering, training, and testing data from a population that is truly representative of global diversity? In combination with examples of natural poses, variant lighting, angles, etc., this “standard” will ensure systems operate with more precision and reliability around ethnicity. Seek constant feedback The industry has got to get out of the lab and stop building things in isolation. Being inclusive and engaging wider communities will enable us to see how our AI is being perceived in the real world. It’s going to take more effort (and cost) to approach these solutions the right way, and I’m not just talking about dodging bad PR. Bias breaks the functionality of the application, and what we are trying to achieve from a business POV. A global, connected economy DEMANDS we fulfil the promise of the technology we build to be respectful, trustworthy and inclusive. MEET US AT SXSW This March, Kairos returns to South by Southwest; the annual film, tech, and music festivals and conferences in Austin, Texas, USA. Brian will be on the panel for ‘Face Recognition: Please Search Responsibly’. Looking forward… As an educated culture we have a social responsibility to think rationally and reasonably about technology. Clickbait titles like “Facial Recognition Is Accurate, if You’re a White Guy” [2], as circulated by credible publications like The New York Times, are counterproductive, sensational, and dangerous. The implication that there is a “hidden bias” in how today’s data sets are gathered further fuels the mistrust that people have for the technology. Let’s simply acknowledge that the selection of current data sets lacks representation of segments of the population, and move ahead, together, in getting these algorithms properly trained to be as inclusive of all of the ethnicities represented on our planet as possible. We are deeply appreciative of the M.I.T. study and how it has reinvigorated Kairos’ commitment to being a responsible participant in this discussion, as we determinedly strive to be a leader of change and betterment in our industry. We invite Joy Buolamwini and her team to join us in establishing “bias standards” so that the future of Face Recognition includes all faces. As always, we are here, and open to answer any questions about our technology and practices. UPDATE: Our announcement from SXSW 2018 Doubling Down on Diversity in Face Recognition AI Kairos’ opportune moment at SXSWmedium.com [1] Gender Shades [2] Facial Recognition Is Accurate, if You’re a White Guy Kairos’ mission is to make it easy for any business to benefit from face analysis, enriching the experience between humans and machines, and being the premier partner for anything to do with facial recognition. Originally published on www.kairos.com
Ending Racial Biases in Face Recognition AI
76
ending-racial-biases-in-face-recognition-ai-1dbd953bf7e1
2018-05-23
2018-05-23 04:49:16
https://medium.com/s/story/ending-racial-biases-in-face-recognition-ai-1dbd953bf7e1
false
1,513
Serving Businesses with Face Recognition
null
kairossoftware
null
Kairos
hello@kairos.com
lovekairos
COMPUTER VISION,ARTIFICIAL INTELLIGENCE,FACIAL RECOGNITION,MACHINE LEARNING,API
lovekairos
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Brian Brackeen
CEO of www.kairos.com- Serving Businesses with Face Recognition
1fe105e24337
BrianBrackeen
1,535
187
20,181,104
null
null
null
null
null
null
0
g_loss_G_cycle = tf.reduce_mean(tf.abs(real_X - genF_back)) + tf.reduce_mean(tf.abs(real_Y - genG_back))
g_loss_F_cycle = tf.reduce_mean(tf.abs(real_X - genF_back)) + tf.reduce_mean(tf.abs(real_Y - genG_back))
Gz = generator(z_in)  # Generates images from random z vectors (noise)
Dx = discriminator(real_in)  # Produces probabilities for real images
Dg = discriminator(Gz)  # Produces probabilities for generator images
# These functions together define the optimization objective of the GAN.
d_loss = -tf.reduce_mean(tf.log(Dx) + tf.log(1.-Dg))  # This optimizes the discriminator.
3
a7232e0b717b
2018-06-07
2018-06-07 10:55:35
2018-06-07
2018-06-07 13:13:29
8
false
en
2018-06-07
2018-06-07 13:13:29
2
1dbdb8fbe781
3.654088
15
1
1
In this blog post, we will explore a cutting-edge Deep Learning algorithm: Cycle Generative Adversarial Networks (CycleGAN). But before…
4
Introduction to CycleGANs In this blog post, we will explore a cutting-edge Deep Learning algorithm: Cycle Generative Adversarial Networks (CycleGAN). But before heading on, let’s first look at the results it has been able to achieve. horse2zebra If that’s not enough to blow your mind, I don’t know what is. Now that you have developed an interest in the topic, let’s get started. Brief Introduction to GANs “The coolest idea in deep learning in the last 20 years.” — Yann LeCun on GANs. GANs belong to the set of algorithms named generative models. These algorithms belong to the field of unsupervised learning, a subset of ML which aims to study algorithms that learn the underlying structure of the given data, without specifying a target value. Generative Adversarial Networks are composed of two models: The first model is called the Generator and it aims to generate new data similar to the expected one. The Generator could be likened to a human art forger, who creates fake works of art. The second model is named the Discriminator. This model’s goal is to recognize whether input data is ‘real’ (belongs to the original dataset) or ‘fake’ (generated by a forger). In this scenario, the Discriminator is analogous to the police (or an art expert), who try to detect artworks as truthful or fraudulent. The loss equations for GANs are given as follows. The gradient ascent expression for the discriminator is max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))]: the first term corresponds to optimizing the probability that the real data (x) is rated highly, and the second term corresponds to optimizing the probability that the generated data G(z) is rated poorly. Notice we apply the gradient to the discriminator, not the generator. The gradient descent expression for the generator is min_G -E_z[log D(G(z))]: the term corresponds to optimizing the probability that the generated data G(z) is rated highly. Notice we apply the gradient to the generator network, not the discriminator. CycleGAN After seeing the horse2zebra gif above, most of you would be thinking of the following approach: prepare a dataset of horses and zebras in the same environment, in exactly the same locations, and then create some kind of mapping between the two with the help of a Neural Network. But that’s not how it works, because it would be close to impossible to get such a dataset. The beauty of the algorithm lies in achieving the same result in a smart and easy way with a dataset containing just images of horses and zebras. Architecture Basic Architecture of CycleGAN It consists of: Two mappings G : X -> Y and F : Y -> X Corresponding adversarial discriminators Dx and Dy Role of G: G is trying to translate X into outputs, which are fed through Dy to check whether they are real or fake according to Domain Y Role of F: F is trying to translate Y into outputs, which are fed through Dx to check if they are indistinguishable from Domain X Loss Functions The real power of CycleGANs lies in the loss functions they use. In addition to the Generator and Discriminator losses (as described above), they involve one more type of loss: Cyclic-Consistency Loss This kind of loss uses the intuition that if we translate a sample from Domain X to Y using mapping function G and then map it back to X using function F, we can measure how close we come to the original sample. Similarly, it calculates the loss incurred by translating a sample from Y to X and then back again to Y. This cyclic loss should be minimised.
Total Loss The total generator loss is given as: g_loss_G = g_loss_G_disc + lambda * g_loss_G_cycle g_loss_F = g_loss_F_disc + lambda * g_loss_F_cycle Here, g_loss_G_disc and g_loss_F_disc are the generator losses, which push each generator to produce fake images that the discriminator identifies as real ones. The cyclic losses are so important that they are multiplied by a constant lambda (the paper uses the value 10). The total discriminator loss is the same as that of simple GANs (the d_loss expression quoted earlier); a sketch assembling all of these terms follows below.
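Putting the pieces together, here is a minimal sketch in TensorFlow 1.x style. It reuses the tensor names from the loss snippets quoted above, with placeholders standing in for the network outputs; DY_fake and DX_fake, the discriminator scores on generated samples, are assumed names, not the original code.

import tensorflow as tf

# Placeholders stand in for real images, reconstructions and scores.
real_X = tf.placeholder(tf.float32, [None, 256, 256, 3])
real_Y = tf.placeholder(tf.float32, [None, 256, 256, 3])
genF_back = tf.placeholder(tf.float32, [None, 256, 256, 3])  # F(G(X))
genG_back = tf.placeholder(tf.float32, [None, 256, 256, 3])  # G(F(Y))
DY_fake = tf.placeholder(tf.float32, [None, 1])  # D_Y(G(X)), assumed name
DX_fake = tf.placeholder(tf.float32, [None, 1])  # D_X(F(Y)), assumed name

lambda_cyc = 10.0  # cycle-consistency weight used in the paper

# Cycle loss: X -> G -> F and Y -> F -> G should reconstruct the inputs.
cycle_loss = (tf.reduce_mean(tf.abs(real_X - genF_back)) +
              tf.reduce_mean(tf.abs(real_Y - genG_back)))

# Adversarial terms: each generator wants its discriminator fooled.
g_loss_G = -tf.reduce_mean(tf.log(DY_fake)) + lambda_cyc * cycle_loss
g_loss_F = -tf.reduce_mean(tf.log(DX_fake)) + lambda_cyc * cycle_loss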
Introduction to CycleGANs
152
introduction-to-cyclegans-1dbdb8fbe781
2018-06-21
2018-06-21 06:43:40
https://medium.com/s/story/introduction-to-cyclegans-1dbdb8fbe781
false
668
India's premier Software Bootcamp, located in New Delhi
null
codingblocksindia
null
Coding Blocks
info@codingblocks.com
coding-blocks
CODING,PROGRAMMING,SOFTWARE DEVELOPMENT,COMPUTER SCIENCE,CODING BOOTCAMPS
codingblocksIN
Machine Learning
machine-learning
Machine Learning
51,320
Suransh Chopra
null
9f3896e1c694
suransh2008
15
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-30
2018-01-30 04:15:11
2018-01-30
2018-01-30 04:46:23
4
false
en
2018-01-30
2018-01-30 04:51:55
1
1dbeca6801
2.330189
17
1
0
Dialogflow is a great Natural Language Processing AiOS. One of its most used features among AI developers is Small Talk. What does Small Talk…
3
My Dialogflow Small Talk Entity Model credit: giphy.com Dialogflow is a great Natural Language Processing AiOS. One of its most used features among AI developers is Small Talk. What does the Small Talk module allow developers to achieve? Small Talk in a Nutshell It allows them to handle frequently asked questions, such as questions about the agent, Hello/Goodbye-type questions and so on. It is really useful when creating a machine learning agent, since some of these questions create repetitive coding which can be automated with a little creativity. However, Small Talk in its current form allows developers to make variations only for the answers, not for the questions. This is why I created my own set of Small Talk entities. So today I will give you a freebie: my Small Talk entity system. credit: giphy.com Even Will Smith is somehow excited about that. I don’t know why, but thanks for the support buddy. 👍 In order to use my template you will need to get the following notepad file from my Dropbox: http://bit.ly/2rTchh3 Trust me, it is virus free, but you can scan it with your favorite antivirus sandbox if you want to be sure. credit: giphy.com The Steps In order to use this method you will need to follow these 4 easy steps: Step 1: download the text file; Step 2: go into your Dialogflow agent, create an entity and switch to raw mode (click the three little dots on the right); Step 3: copy-paste the Small Talk about-agent entity into the raw mode editor; Step 4: click the save button and get back to the editor mode. You can personalize the existing template to your taste by adding more synonyms to each of the entries. Why You Should Use this Method credit: giphy.com But you will say: why, Carl, should I be using your method instead of the traditional Small Talk module 🤔❓ Because of the following list of reasons: 1. You get to clone the Small Talk entity to any other agent. 2. You can make your Small Talk entities much more personal to your project’s needs. 3. You can add answers to traditional Small Talk but you can’t change the questions; with my method you can change the questions as well. I hope you will like it 😀 and if you would like me to provide other templates or if you have any comment please use the comment section below. credit: giphy.com If you like this article please give me a couple of claps with the clapping hands below. 👏👏👏👏
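For reference, entries in Dialogflow's raw entity editor are plain JSON, so the downloaded file should look roughly like this sketch (the values below are illustrative; the actual questions and synonyms in the template differ):

[
  {"value": "who made you", "synonyms": ["who made you", "who created you", "who built you"]},
  {"value": "what is your name", "synonyms": ["what is your name", "what should I call you"]}
]

Each value is the canonical question, and the synonyms are the question variations the agent will match.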
My Dialogflow Small Talk Entity Model
134
my-dialogflow-small-talk-entity-model-1dbeca6801
2018-05-09
2018-05-09 08:18:29
https://medium.com/s/story/my-dialogflow-small-talk-entity-model-1dbeca6801
false
432
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Carl Dombrowski
The Startup & Toward Data Science Medium Writer, Ai, ML & NLP coder. CEO of WeBots
7788560c42c7
carldombrowski
316
239
20,181,104
null
null
null
null
null
null
0
from mxnet.gluon.model_zoo import vision
net = vision.squeezenet1_1(pretrained=True)

$ virtualenv ~/mxnet-gluoncv
$ source ~/mxnet-gluoncv/bin/activate
$ pip3 install mxnet gluoncv --pre --upgrade
$ python3
>>> import mxnet, gluoncv
>>> mxnet.__version__
'1.2.0'
>>> gluoncv.__version__
'0.2.0'

$ git clone https://github.com/dmlc/gluon-cv
$ python3 demo_imagenet.py --model resnet50_v2 --input-pic kreator.jpg
The input picture is classified to be
[electric_guitar], with probability 0.671.
[drumstick], with probability 0.103.
[stage], with probability 0.076.
[banjo], with probability 0.024.
[acoustic_guitar], with probability 0.016.

$ python3 demo_ssd.py --network ssd_512_resnet101_v2_voc --images room.jpg
4
null
2018-05-28
2018-05-28 07:26:39
2018-06-01
2018-06-01 11:25:55
5
false
en
2018-06-03
2018-06-03 07:44:34
19
1dbf6389acc5
3.116352
12
0
1
Apache MXNet is an open source library for Deep Learning, supporting both symbolic and imperative programming. The latter is implemented by…
5
Gluon CV: add image classification, detection and segmentation to your application in minutes Apache MXNet is an open source library for Deep Learning, supporting both symbolic and imperative programming. The latter is implemented by the Gluon API, which we discussed before. Gluon: building blocks for your Deep Learning universe Launched in October 2017, Gluon is a new Open Source high-level API for Deep Learning developers. Right now, it’s…medium.com One of the cool features of Gluon is its extensive model zoo, where you can grab a large number of pre-trained image models as easily as this: Guess what? It just got better! The Terminator: real-time classification and segmentation. He *did* come from the future! Gluon CV Gluon CV (Computer Vision) is a brand new project which extends the model zoo to: More image classification models trained on ImageNet and CIFAR-10: ResNet v1 and v2, MobileNet v2, WideResNet and ResNeXt. Single-shot detection models trained on the Pascal VOC dataset: VGG16 300x300, VGG16 512x512 and ResNet 50 512x512. Segmentation models trained on Pascal VOC: ResNet 50 and ResNet 101. Similar models were previously available on GitHub (such as these), but using them wasn’t always straightforward. A while ago, I also showed you how to use pre-trained models with the symbolic API, but Gluon makes it much simpler. Gluon CV also includes: Utility APIs to transform and display images, Tutorials, Prediction, training and fine-tuning scripts! Let’s try this thing. Installation Gluon CV is still a very young project. If you want to enjoy the latest features and bug fixes, I’d recommend installing the latest MXNet and Gluon CV packages. You might want to do this in a virtual environment to avoid messing up your Python environment :) Good to go. Image classification Let’s first try to classify this image with demo_imagenet.py. Looking at the script itself, all it really takes is about 5 lines of code! Load a pre-trained model. Read and transform the image (resize, crop, normalize colors). Predict the image and display the top 5 categories. Let’s try image detection next. Image detection The 20 Pascal VOC classes are: person, bird, cat, cow, dog, horse, sheep, aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, tv/monitor. Let’s grab a picture displaying some of these objects and run an SSD with demo_ssd.py. This is pretty good! The painting was even detected as a person, which makes sense, I think. Looking at the script itself, once again this takes less than 10 lines of code. Image segmentation Last but not least, let’s use the demo_fcn.py script to segment this image. Not bad at all! The three cars were picked up, as well as the most visible people. Not sure what the yellow thing in the lower left corner is, though :D That’s it for today. Gluon CV really makes it very simple to use state-of-the-art pre-trained models. Please take a look at the code and try it with your own apps: it’s much easier than you probably think. Happy to answer questions here or on Twitter. For more content, please feel free to check out my YouTube channel. I approve this message. Give’em hell, ladies \m/
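If you would rather call the API directly than run the demo scripts, here is a hedged sketch of the SSD path using GluonCV's model zoo; the model name and image file are placeholders (any VOC SSD model from the zoo should work the same way):

import matplotlib.pyplot as plt
from gluoncv import model_zoo, data, utils

# Load a pre-trained SSD model and prepare the test image.
net = model_zoo.get_model("ssd_512_resnet50_v1_voc", pretrained=True)
x, img = data.transforms.presets.ssd.load_test("room.jpg", short=512)

# Run detection and draw the predicted boxes over the image.
class_ids, scores, bboxes = net(x)
utils.viz.plot_bbox(img, bboxes[0], scores[0], class_ids[0],
                    class_names=net.classes)
plt.show()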
Gluon CV: add image classification, detection and segmentation to your application in minutes
28
gluon-cv-add-image-classification-detection-and-segmentation-to-your-application-in-minutes-1dbf6389acc5
2018-06-16
2018-06-16 22:12:39
https://medium.com/s/story/gluon-cv-add-image-classification-detection-and-segmentation-to-your-application-in-minutes-1dbf6389acc5
false
605
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Julien Simon
Hacker. Headbanger. Harley rider. Hunter. https://aws.amazon.com/evangelists/julien-simon/
4ffe14103b7a
julsimon
3,230
31
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-21
2018-02-21 06:54:17
2018-02-21
2018-02-21 12:28:50
8
false
en
2018-02-21
2018-02-21 12:34:07
3
1dc022c506db
3.446541
1
0
0
The Super Bowl games are nothing short of spectacular. One of the most dominating sides, the Patriots were surprised by the Philadelphia…
5
We analyzed historical Super Bowl data and found 7 interesting insights for you Super Bowl LII — Lorie Shaull The Super Bowl games are nothing short of spectacular. One of the most dominating sides, the Patriots were surprised in Super Bowl LII by the Philadelphia Eagles, who won their first ever Super Bowl title. Apart from the intriguing games, there are pre-game performances, and at halftime popular artists perform. Some of the best advertisements are also aired during the commercial breaks, and the event attracts large attendance and millions of viewers. The amount of data generated by such a big sporting event is huge. We analyzed the Super Bowl’s historical data to unveil some interesting insights that you would love to know. The data set includes information like date, attendance, winner, MVP, winning pts, losing pts, stadium, etc. 1. Most number of MVPs won by a player Tom Brady has won the most MVP (Most Valuable Player) awards. It goes without saying that he, alongside his coach, has been dominating other teams for a long time. Joe “The Comeback Kid” Montana has won the second highest number of MVPs. Bart Starr of the Green Bay Packers has won 2 MVPs. 2. Most number of wins by a team The Pittsburgh Steelers have won the Super Bowl title 6 times, making them the team with the highest number of Super Bowl victories. The Dallas Cowboys and the New England Patriots have both won 5 titles each. 3. Top winners by average point difference The Chicago Bears have won their matches with the highest average point difference (36) compared to the rest of the teams. The Seattle Seahawks rank second with an average of 35 points, followed by the Los Angeles Raiders, who have an average of 29 points. Fun fact: In Super Bowl XXIV, the San Francisco 49ers won the match against the Denver Broncos with a 45-point difference, which is the highest to date. 4. Average match attendance by state The state of California has witnessed the highest number of attendees, with an average attendance of 85,634. New Jersey has an average of 82,529 attendees, and Texas ranks 3rd with an average of 79,358 attendees. Super Bowl XIV had 103,985 attendees, which is the highest to date. The match was played in the Rose Bowl stadium in California. 5. Attendance vs Years The Super Bowl had the best attendance between 1976 and 1988. The years 1977, 1980, 1983, and 1987 had an average attendance of 103,038. It is observed that every year, the average attendance decreases by about 196. 6. Top 5 stadiums to host the most Super Bowls The Louisiana Superdome, the Orange Bowl, and the Rose Bowl have each hosted 5 Super Bowls. This ties them for the top stadium spot. The Tulane stadium has hosted 3 Super Bowls to date and the Georgia Dome has hosted 2. Out of the top 5 stadiums, the Louisiana Superdome and the Tulane stadium are from the state of Louisiana. 7. Most number of wins by a quarterback Tom Brady became the first quarterback ever to win 5 Super Bowls. Terry Bradshaw holds the record for the second highest number of Super Bowl wins with 4 victories. Joe Montana and Troy Aikman have both won 3 Super Bowls each, and Bart Starr ranks fifth with 2 Super Bowl wins. Just like the on- and off-field Super Bowl extravaganza, the data from the event also offers equally interesting insights. These are just a few examples of what can be easily achieved by analyzing large data sets from different sporting events (see the short pandas sketch below). Have you encountered similar findings from Super Bowl data? Share them in the comments below and let us know!
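For readers who want to reproduce these cuts themselves, here is a hedged sketch of how insights 2 and 5 could be computed with pandas; the file name and column names are assumptions, not the actual dataset schema.

import numpy as np
import pandas as pd

df = pd.read_csv("super_bowl.csv")  # hypothetical file with the fields listed above

# Insight 2: number of titles per team.
print(df["winner"].value_counts().head(3))

# Insight 5: linear trend of attendance over the years.
slope, intercept = np.polyfit(df["year"], df["attendance"], deg=1)
print(f"Average attendance changes by about {slope:.0f} per year")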
We analyzed historical Super Bowl data and found 7 interesting insights for you
5
7-interesting-insights-from-super-bowl-data-1dc022c506db
2018-02-21
2018-02-21 12:34:08
https://medium.com/s/story/7-interesting-insights-from-super-bowl-data-1dc022c506db
false
613
null
null
null
null
null
null
null
null
null
Super Bowl
super-bowl
Super Bowl
2,660
PromptCloud
PromptCloud is a web scraping service provider catering to the big data requirements of enterprises. Making big data small!
a18390ab4d0f
promptcloud
180
186
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-18
2018-03-18 20:02:27
2018-03-18
2018-03-18 20:42:39
5
false
en
2018-03-18
2018-03-18 20:42:39
1
1dc08638cb7d
7.395597
0
0
0
With the emergence of any powerful, influential technology, it’s important to ask ourselves and its creators why it exists, what purpose it…
5
Why AI? — Part 1 With the emergence of any powerful, influential technology, it’s important to ask ourselves and its creators why it exists, what purpose it serves and what consequences it may lead to in the future. AI is undoubtedly the most important technology being developed in modern times, able to dramatically change our professional and social landscape in various ways, therefore we should ask not only “how does it work?” and “how do we improve it?”, which seem to be the main topics at the moment, but also “why does it exist?”, “what are its ultimate objectives?” and “how can we use it to maximize the human experience?” These questions will help us better understand the effects the technology has on our lives and will give us more agency in shaping the futures we desire. These questions are especially important in AI, because it, like no other globally popular technology before it, has brought hand-in-hand with its promises of an amazing new future the possibility of human extinction. Statements like “AI may wipe out humanity” are casually thrown about, not only by paranoid extremist bloggers or our traditional relatives living in faraway lands but by geniuses and millionaires alike, highlighting the question of “well then… why?” If the potential costs are so great, what could possibly be a good enough benefit to propel this risk forward? The other day I attended an Artificial Intelligence event and asked a group of young attendees their thoughts on this subject. Specifically, I asked them — “why are you, and why do you think other people are, so interested in AI?” Their answers summarize what I believe are some of the most common responses to this question. I’m going to add them to a growing list of reasons titled: “Why AI?” Here are the first 5: Because AI is the future and if we want to be relevant and find work we have to be well-versed in it Because humanity constantly strives to make things easier and more convenient and AI allows us to do just that in the most effective way possible Because AI will save businesses tons of money and that’s a primary objective for any CEO who wants to stay afloat in a competitive market Because the optimization of any project (and therefore growth and development in general) relies on having the best possible tools to do it with, and AI is the best tool there is Because people want to play God In this blog post I will consider the first 3 reasons in a bit more depth, and in the follow-up blog post I will write about the final 2. If you would like to contribute a reason to this list, please comment below or on the blog. 1. AI is the future and we have to be ready AI is indeed the future, a future that has been in the making for the past 70 years, laying its foundations and building up. It began in the 1950s with Alan Turing, and after some bumps in the road that led to the infamous AI winters during which funding and interest plummeted, at the moment it seems to be going strong. AI is being introduced into a growing number of fields from medicine and law to autonomous driving to personal assistance, smartphones and more. Its ability to “learn” and the vast amounts of data now available to “teach” it means that it is constantly improving. And once it improves enough to take the job you studied for at university, you’re going to have to quickly adapt and find a new role in an AI-driven society. It is therefore definitely a good idea to keep track of how AI is developing and what changes it is leading to. 
Those who can understand and program AI will have a lot of power and control in a future where AI has established a presence in numerous fields, and AI-related skills will be in great demand. But this still leaves the question of why AI has become so popular and what its creators’ ultimate objectives are.
2. AI effectively achieves heightened comfort and convenience
Humans ❤ speed and convenience. We love saving time, money and effort, and anything that allows us to do that is highly appealing. What did we do before GPS and online shopping? How did we manage without voice-controlled personal assistants and scheduled reminders? Convenience has become so intrinsic to modern life that simple tasks we did a mere 20 years ago now seem like a huge hurdle to overcome should our batteries die or the power go out. Drive-thrus, remote controls, microwave dinners, clap-on lights, anything you see on infomercials, etc.: if it’s convenient and feasible, it’s on the market or coming soon. But sometimes the search for ultimate convenience can cross a line that makes you say, really? Was it so difficult to type a 6-digit password or draw a simple design in order to access our private phones that we had to opt for fingerprint recognition? And was it then such a time-wasting struggle to place our thumbs on a fingerprint sensor and wait 0.6 seconds for it to open that millions of people replaced their old iPhone with one that has facial recognition? How tiny can the perceived added convenience value become and still continue to capture the hearts and paychecks of the public? How much money and scientific effort go into creating technology that will cater to our superficial whims? And perhaps the biggest question: what are we giving up in order to satisfy this search for ultimate convenience? How much personal information are we willing to sacrifice so that we can save 0.5 seconds of our time or add another app to our collection?
3. AI will save businesses tons of money
Yes, it will, but at what cost to society? AI will save businesses money because it will eliminate the need to train, pay and cater to a human workforce. In other words, and I know you’ve heard this a million times before, it will take many people’s jobs. One of the people I spoke with at the AI event scoffed at this. “Did horses take the jobs of people pulling carts around?” he said. Well, yes, they did, but I see his point. During the industrial revolution many jobs were “taken” by machines, but they were very unpleasant and even dangerous jobs that people hated doing. However, people did this unpleasant work because they needed money and purpose. Working meant being a productive member of society and a provider for the family, which gave people a sense of autonomy and control. And the new ways of creating goods and services during the industrial revolution actually created many new jobs. In this new revolution, however, it’s hard to imagine a similar number of new jobs being created when most of the existing low-qualified work, both physical and cognitive, will be done by robots. Sure, new jobs for engineers and programmers will open up, and jobs requiring great creativity and artistic skill may stick around. But the majority of the population isn’t qualified to do that kind of work, nor do they have the potential to become qualified in it. What will happen to those people? They also need a reason to get up in the morning and to feel the satisfaction of providing for their families through personal effort and not government handouts.
However, this kind of charity, humoring people by letting them play robot so they can earn money when it would be much more efficient to use a machine, seems like a foolish reason to hold back progress. But that’s why the question arises: what is the objective of AI on a larger scale? What is it aiming to accomplish in the long run? Because if its purpose is to improve people’s lives, then we must seriously think about the psychological and economic consequences that will accompany the loss of millions of jobs. Many say (like the CEO of a chatbot company at a conference I attended this week): “but robots will take the boring jobs, leaving people to invest their time doing things they like”. But what exactly does that mean? To be honest, this common argument sounds somewhat childish. Adults have more complex needs and motivations than simply playing video games all day, drinking frappuccinos and visiting Disney World on the weekends (don’t they? This is up for discussion). And of course, I understand the appeal of a life full of exotic travel, tropical resorts and fancy parties. But let’s be honest: whatever payment system is created to support the people who lose their jobs to robots will not be conducive to a lifestyle of high enough quality to satisfy the average person’s need for stimulation. So what exactly will people do with all the extra time they’ll save by not working, and with the limited resources they’ll have for consuming the products made by their robot replacements? Will they feel happy, fulfilled and confident while not working or contributing? Will this arrangement increase the well-being of society as a whole, or will it create an even bigger gap between a highly successful populace that controls the machines and makes tons of money and the under-qualified people who are pacified with government handouts and cheap, mind-numbing entertainment? The realistic prospects feel less utopian and more Black Mirror-esque at the moment, but who knows; hopefully as people begin addressing these topics more and more, better solutions will arise. That concludes my initial thoughts on the first 3 reasons given to the question of “Why AI?” If you would like to contribute to this discussion or submit further reasons to be included in the list, please comment below or on the blog. This is a long topic and there’s much to say about it, but what’s important is beginning to think and talk about the various possibilities that accompany new tech like AI, so that we can be prepared for the different scenarios that may one day become reality and have ideas ready about how to approach them. If you are interested in the topic of how psychology and technology interact, check out Psybertronic.com. As always, thank you for reading.
Why AI? — Part 1
0
why-ai-part-1-1dc08638cb7d
2018-03-18
2018-03-18 20:42:40
https://medium.com/s/story/why-ai-part-1-1dc08638cb7d
false
1,739
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
T.Panova
PhD student and patent holder researching the interaction between technology and psychology. Creator of psybertronic.com
f6f94d222503
TPanova
16
34
20,181,104
null
null
null
null
null
null
0
null
0
e09a49d2d3b5
2018-07-30
2018-07-30 18:51:01
2018-07-30
2018-07-30 18:52:48
2
false
en
2018-07-30
2018-07-30 18:52:48
4
1dc0892a0557
4.57956
0
0
0
A few times a year, machine learning will have a breakthrough in translation capabilities. Recently it has been Google’s learning…
5
This is Why Translation Software Can Never Replace Humans
A few times a year, machine learning will have a breakthrough in translation capabilities. Recently it has been Google’s learning algorithms that are increasing accuracy in translations. But even with technological breakthroughs, is translation software the better choice for your translation needs? To answer that question, you first need to know how translation software and machine learning work.
Statistical Machine Translation (SMT)
SMT uses a large, existing data pool of human translations, commonly referred to as a “bilingual text corpus.” Statistical Machine Translation software needs huge amounts of human-translated text in both the input language and whatever language you are trying to translate into. Two major machine translation approaches preceded today’s neural systems: the older Rules Based Translation process (RBMT) and the newer, statistical Phrase Based Translation process (PBMT).
Rules Based Translation
Rules Based Translation was one of the first machine translation techniques used to translate text. Following the rules of linguistics, this system learned to read each word and, based on those rules, had the ability to move words around according to whatever context the machine thought the word had. However, looking at one word at a time, and never a whole sentence, how accurate could the replacement of a word based on “context” be?
Phrase Based Translation
Enter Phrase Based Translation. Phrase-based software creates “phrases” of text that are learned from the large corpus data sets in both languages. These phrases, however, are not linguistic phrases, just sequences of words that appear often, based on the large corpus data pools that the software taught itself from. The goal of Phrase Based Translation is to translate whole phrases of words in order to reduce the mistakes that were being made in Rules Based Translation (one word at a time). If you are choosing software that does either type of machine translation, be cautious of the size and quality of the data pool: the larger the corpus, the better. Since these data pools are collections of bilingual human translations, the quality of those translations must be high to ensure the best possible machine translation. But even then, segmenting your text into non-linguistic phrases still leaves the chance that your translated text won’t be contextually correct.
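To make the contrast concrete, here is a minimal, hypothetical sketch in Python of word-by-word substitution versus longest-match phrase lookup (the tiny dictionaries are invented for illustration; real systems learn millions of probability-scored entries from bilingual corpora):

# Toy contrast between word-based and phrase-based lookup.
# The tables are invented for illustration only; real systems learn
# millions of scored entries from large bilingual text corpora.

word_table = {"kick": "patear", "the": "el", "bucket": "cubo"}

phrase_table = {
    ("kick", "the", "bucket"): "estirar la pata",  # Spanish idiom for "to die"
    ("the", "bucket"): "el cubo",
}

def word_by_word(tokens):
    """Rules/word-based style: substitute one word at a time."""
    return " ".join(word_table.get(t, t) for t in tokens)

def phrase_based(tokens):
    """Greedy phrase-based style: always prefer the longest known phrase."""
    out, i = [], 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):          # longest match first
            chunk = tuple(tokens[i:j])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i = j
                break
        else:
            out.append(word_table.get(tokens[i], tokens[i]))
            i += 1
    return " ".join(out)

tokens = "kick the bucket".split()
print(word_by_word(tokens))   # patear el cubo  (literal, loses the idiom)
print(phrase_based(tokens))   # estirar la pata (captures the idiom)

The word-by-word output is grammatical but wrong in context, which is exactly the kind of mistake phrase tables were introduced to reduce.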
Neural Machine Translation
Neural Machine Translation (NMT) is the translation software industry’s newest answer to the mistakes found in both earlier types of machine translation. Instead of word-for-word or phrase translation, NMT translates full sentences. Google started using this method in 2016, applying its own deep learning technology to better translate your text. Deep learning is a set of algorithms used to decide what you might watch, buy or search for next. It helps predict what you type; it’s the reason why, when you start to search for “how to” in Google, you get a dropdown list of other popularly searched “how to’s.” Deep learning algorithms feed translation software huge amounts of data to sift through. What the translation software is fed is what it learns from, and based on all the data it gets fed, it starts to make decisions based on what it “knows.” When the software thinks it has learned something new, it goes back and applies that to everything it had previously learned. Unlike statistical translation, there is not a finite set of data to learn from. There is also no set number of patterns and rules to follow based on human input, so “learning” is never over. Deep learning allows translation software to ask itself true/false questions and catalog the answers, and eventually it starts to form a functioning system. When it sees a word, a phrase or an entire sentence, it puts the text through a series of questions and gives a more accurate output. Jay Marciano, a leader in machine translation and Director of Machine Translation at Lionbridge, says that deep learning and neural translation can “identify complicated patterns and associations among these patterns, in ways that are beyond human ability to recognize.” Neural Machine Translation is certainly the future of machine translation. But is it better than a human? Even though it may be able to identify patterns humans cannot see, it still cannot fully understand the nuance and meaning of the written word.
A Translation Test
Back in January of 2017, Sejong Cyber University in South Korea and the International Interpretation and Translation Association of Korea hosted the ultimate translation battle with three machines: Google Translate (a neural machine translator), the Systran Translation Program (a phrase-based translator) and an app, Papago (a phrase-based translator). The texts given to each machine were four different pieces of writing: a Fox News article, a Korean-language opinion piece from a local paper and two excerpts from books, one originally in English and one in Korean. There were three scoring criteria: accuracy, language expression, and logic and organization. Five points max per criterion gave a total of 15 points per text, for a possible total of 60 points across the four texts. How did the machines do?
Last place: Systran 15/60
Second place: Papago 17/60
First place: Google Translate 28/60
Surprised by the results? While machine translation services have come a long way with deep learning technologies, they are still a long way off from replacing a human translator, who scored a 49 on the same translations. “No matter how fast the translation programs are, many [people] will doubt they can perfectly translate subtle expressions of emotion in literature.” — International Interpretation and Translation Association Chairman Kim Dong-ik. Neural translation backed by deep learning is certainly the future, but it still has a long way to go. Even something as small as a syllable can trip up Google Translate, as one chef found out the hard way. During the first official day of the 2018 Winter Olympics, Norway’s chef Stale Johansen needed 1,500 more eggs and drafted his order in Korean with the help of Google Translate. He was surprised to find 15,000 eggs delivered the next day. Only one syllable separates 1,500 from 15,000 in Korean, a very small nuance missed by what is supposed to be one of the smartest pieces of translation software out there.
Trusting Translation Software
It is tempting to put your content into a computer program and have it spit out a translated copy in minutes. But to get a reliable translation you would want to run it through both a phrase-based machine translator and a neural machine translator, and even then, you still might not have a full and accurate translation.
Originally published at ivannovation.com on July 17, 2018.
This is Why Translation Software Can Never Replace Humans
0
this-is-why-translation-software-can-never-replace-humans-1dc0892a0557
2018-07-30
2018-07-30 18:52:49
https://medium.com/s/story/this-is-why-translation-software-can-never-replace-humans-1dc0892a0557
false
1,112
A premium translation and localization provider.
null
ivannovation
null
IVANNOVATION
null
ivannovation
TRANSLATION,LOCALIZATION,INTERNATIONAL BUSINESS,USER INTERFACE
ivannovation
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Gisel Paola Olivares
null
818c8ad52266
giselpaolaolivares
24
23
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-07
2018-06-07 17:21:46
2018-06-07
2018-06-07 17:44:39
1
false
en
2018-06-07
2018-06-07 17:44:39
5
1dc0f66e7c2f
3.803774
57
1
0
On June 6th, John Zhu, MATRIX Senior Vice President, presented a featured talk, “Enabler of the Blockchain Era,” at the 2018 Global Blockchain…
4
MATRIX unveils partnership-driven development roadmap at GBLS
On June 6th, John Zhu, MATRIX Senior Vice President, presented a featured talk, “Enabler of the Blockchain Era,” at the 2018 Global Blockchain Leadership Summit (GBLS) in Hangzhou. The summit brought together over 5,000 attendees from China’s blockchain community, including industry thought leaders, investors, developers, leading tech and finance media, and nearly one hundred exhibiting companies. Zhu presented a roadmap and milestones for the MATRIX project to surpass the Ethereum network in key areas of functionality and utility. He also outlined the scope and major components of the partnership with IDA (International Digital Asset Management) for MATRIX’s first Belt and Road Initiative project, with MATRIX acting as blockchain technology partner. The project will focus on the digitization, management and exchange of tangible assets from the timber industry in Laos, providing a shared public-private platform advancing environmental protection, cross-border supply chains and regulatory oversight between China and Laos. At the closing ceremony of GBLS, the organizing committee presented MATRIX with the “Best Technology” award of the 2018 conference.
The MATRIX technology roadmap
Zhu shared five key development directions in MATRIX’s roadmap to surpass the Ethereum network by lowering barriers to entry and substantially enhancing system capabilities, interoperability and trust. With the aim of broadening the ecosystem of users and supporting advanced functionality, these are the broad development imperatives:
Zero-code smart contracts: The scarcity of talent capable of coding smart contracts is a major barrier to the wider adoption and application of blockchain technologies. MATRIX has made substantial improvements to Ethereum-based smart contracts by applying advanced natural language processing (NLP), AI and deep learning technologies to agreement drafts written in everyday language, starting with English and Chinese. By focusing on a zero-coding solution for implementing executable smart contracts, MATRIX is building a defensible competitive advantage for the ecosystem.
Realizing 1 million TPS: An essential aspect of blockchain 3.0 delivering on its promise of scale, and supporting social and industry applications, is transactional throughput. MATRIX has already achieved 50,000 transactions per second (TPS), overtaking the current average TPS of the global Visa network. MATRIX plans to widen its lead with a development roadmap toward one million TPS, taking a two-pronged approach that combines patented algorithmic optimization with custom-developed hardware, currently in the working prototype phase.
Total interoperability: One of the primary value propositions of blockchain 3.0 is creating a shared platform where different kinds of entities can transact securely in a “trustless” environment, while offering the flexibility to connect legacy systems. There are substantial implementation challenges in building a stable open-source blockchain platform that supports public/private chains and token use across chains while averting hard forks. MATRIX has built a fully functional hybrid chain, which is already in trial use with partners. World-class partners are integral to helping MATRIX build a critical mass of users on the platform, as well as providing a range of real-world implementation challenges to drive robust development.
Ecosystem partnerships: MATRIX is working with the Belt and Road Initiative Center for Strategic Development and International Digital Asset Management as technology provider for the Belt and Road Initiative. MATRIX is working with Bit.Game and the Global Blockchain Games League on tokenized online games.
Research partnerships: The collaboration with the Bayesian Computing Lab at Tsinghua University has been key to public service applications of the MATRIX green mining mechanism, including the partnership with Beijing Cancer Research Hospital to use machine learning models to improve the speed and accuracy of cancer diagnosis. MATRIX is collaborating with Huobi Labs on capacity building in the blockchain developer and applications community. The MATRIX team also actively works with the Linux Foundation and Hyperledger as a bridge between the international community and China.
Technology partnerships: MATRIX has signed agreements with the Meta alliance, Smartmesh, Beiyou and Xidian to provide ongoing custom blockchain and AI solutions for projects that are still under development and expansion.
Automated security: One of the key differentiating factors of MATRIX’s approach to blockchain is that artificial intelligence is integral to every level of the system, providing intelligent speed, optimization and risk assessment. Artificial intelligence is deployed on an ongoing basis to detect bugs, adapt and optimize the system and conduct verification, all without the need for direct oversight but with built-in transparency for human auditing.
MATRIX’s support for developers and users
Zhu detailed the four pillars of MATRIX’s end-to-end support for ecosystem stakeholders, from developers building new capabilities to use-value for platform users.
Easy-to-Use Facilities for Developers
- User-friendly development UI.
- Complete IoT / security facilities and solutions.
- Total blockchain solutions for enterprise customers.
- Easy-to-develop smart contracts enhanced by AI technology.
AI Technology Support
- Projects employing AI technology in their applications can get support from MATRIX project team experts.
Overseas Promotion of Projects
- Local political and legal consulting.
- Overseas promotion activities, including road shows and exhibitions.
- Connections with mainstream media overseas.
MATRIX Investment Foundation
- Founded by the MATRIX team.
- Provides funding for outstanding projects.
Zhu closed his presentation at GBLS by sharing updates on two of MATRIX’s ecosystem projects, with IDA (International Digital Assets) and Bit.Game. MATRIX signed its partnership with IDA on May 26th to build a digital asset management and trading platform on the MATRIX main net. The partnership with IDA will flesh out a technology solution for large-scale asset digitization and transactions on IBR projects. Zhu also shared details about the MATRIX partnership with Bit.Game to develop AIDEX, a next-generation decentralized exchange based on AI technology. Bit.Game will also develop an integrated wallet with the MATRIX AI Network to support decentralized data and make improvements to content.
Reference: Website | Telegram | Twitter | Reddit | Technical White Paper
MATRIX unveils partnership-driven development roadmap at GBLS
1,333
matrix-unveils-partnership-driven-development-roadmap-at-gbls-1dc0f66e7c2f
2018-06-17
2018-06-17 13:50:25
https://medium.com/s/story/matrix-unveils-partnership-driven-development-roadmap-at-gbls-1dc0f66e7c2f
false
955
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
MATRIX AI NETWORK
An open source public intelligent blockchain platform
ad51c60ef692
matrixainetwork
1,003
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-21
2018-02-21 11:22:17
2018-02-21
2018-02-21 19:32:38
4
true
en
2018-02-26
2018-02-26 02:01:19
1
1dc12f27ddac
6.477358
12
0
0
Got lost then found my Passion in Artificial Intelligence.
5
How luckily I Failed
Got lost then found my Passion in Artificial Intelligence.
About two years ago, I found myself in a curriculum that didn’t excite me anymore. I was one year away from graduating with a master’s degree in banking and preparing myself to build my career. Luckily for me, an amazing thing happened: I FAILED! I applied to 12 master’s programs and got rejected from all of them for my lack of experience (really? I was applying to a master’s degree 😊). At first, as anyone would be, I was angry, ashamed and scared. Angry because all my classmates got accepted, even those with lower grades. Ashamed because I’m a foreign student, and one more year meant 15K euros more, which could have been fine if my family were rich enough, but that wasn’t the case. Scared about my future: how am I going to get past this? Am I going to return to my country? All those sacrifices, bad nutrition, learning and growing, for nothing? Is it the end of a career that hadn’t even started yet? This is the kind of question that streams through our heads in those situations, and frankly nothing hurts more than our own THOUGHTS. Everything changed the day I watched a TED video by Carol Dweck, a Stanford University psychologist and one of the world’s leading researchers in the field of motivation, achievement and success. P.S. Watch this video and feel free to share it. It’s probably worth gold for someone. After watching it, I was deeply affected. I managed to download her book because she really got my interest. And that night, like every other night before, I couldn’t sleep and kept my head busy as long as I could, but the inevitable happened. The flow of questions… Without knowing it, something inside of me had changed. As I was basically trying to watch the flow of thoughts streaming through my head, I had a couple of brief moments of clear vision. Seeing my thoughts pass rather than being consumed by them, judging them or feeling bad about them (a.k.a. myself). Well, I figured out some time later that I was MEDITATING. I learned everything I needed that night to succeed in everything in life, literally everything:
- A thought is painless unless we believe it.
- We tend to overestimate bad things and underestimate good things.
- Change always seems harder from the inside than it really is.
- It’s only a thought, and thoughts can be changed 😊
- You finally realize: YOU ARE NOT YOUR THOUGHTS.
- You can Fail, but not be a Failure. Failure is the most powerful human incentive.
- Growth mindset: IQ is bullshit and grades don’t matter.
A growth mindset means that you believe intelligence can be developed, and you have a passion to learn. This way you embrace challenge, learn from criticism, keep going when things get tough and get inspired by the greatness in others. FAILURE is just a thought, a word describing a concept which means trying something and not achieving it. Since we were children (in our cultures), we learned to hate mistakes. No one loves to be wrong, kids may laugh at you if you get it wrong, so we try to avoid making them. And so we teach our children to stop trying. That either they get an A or they are stupid. That giving up is better than fooling yourself. You try your best to avoid making mistakes and not be called a failure like your cousin Steve. (All parents have their Failure example.) We are killing the curiosity and creativity of our kids without noticing. Anyway, note that a DREAM is also a thought; it remains a dream unless you make a DECISION to achieve it.
So I decided to look for my DREAM first, then achieve it, but this time with a different approach. Step by step, incremental growth, even if I couldn’t yet see the end of the tunnel or which tunnel it was going to be. That was my first decision. Yaay 😊 The second decision was to unleash my potential by switching to a GROWTH MINDSET. Meaning that I’d become a child again, an eager learner who wants to understand everything. (It’s quite funny how people react when you ask them a deep question using the baby approach 👶) What? Yeah, I’m a baby, so what? 👶👶👶 The next thing was to figure out my PASSION, ’cause when you are passionate, curiosity is effortless and motivation is at its highest. I spent about a month feeding my curiosity: what possible path should I take? What DREAM JOB should I pursue? I knew programming was part of it; if you want a career that lasts 30+ years, programming is not optional. I had a good understanding of finance, statistics and econometrics, and a basic level in programming. The next action was to become a better programmer. Remember that change always seems harder from the inside than it really is. So does programming! A few options and career paths emerged. I figured out what I care most about by looking elsewhere. I understood better my inner interest in finance. I was always amazed by the power of econometrics and time series: model and predict the future, improving financial executives’ understanding, enhancing their decision making, reducing risk… using DATA. And that was it, I found my new passion: Data Science. The next thing I had to do was look into how to become a Data Scientist. I was astonished by the popularity of the job: the sexiest job of the 21st century. Really!!! Why? Half of the work was done by then. I immediately looked for road maps, found a clear path to follow and understood better why Data Scientist was the new superstar, just like quants and traders were back in the ’80s and ’90s. Take a look at the picture below: The first time I saw this, I looked at it thinking, OMG, it’s going to take forever. I have holes everywhere. I’m far away from having the skills to become a Data Scientist. Well, I think you know what was happening, right? It started again, that little voice inside trying to discourage us. Except this time, I was no longer believing my thoughts. I was IMMUNE! Instead of listening to them, I took a deep breath and said to myself: “It looks like a subway map. I’m the driver, and the stations represent all the steps to achieve my goal, my DREAM. NO more excuses! New way of thinking: growth mindset and incremental learning. Step by step, leveraging the power of Tiny Gains.” So let’s summarize: there was a whole world out there I needed to learn all about, and I had about 8 months left before applying to my final year of master’s. I took another deep breath, and from that moment, all my thoughts were focused on learning something new every day about my passion, data. Don’t make assumptions; give it a try instead. Once you fail, you’ll know better. By the end of the year, following my kid curiosity and motivated by my new passion, the results were unbelievable. I managed to refresh my statistics and mathematics, learned Python and R, and started machine learning courses. I got an apprenticeship in data mining and got admitted into a master’s degree in Actuarial Science (the same one that had rejected me one year before). One year later, I had a solid foundation in math, statistics, data mining and machine learning, and some knowledge of computer science.
I handled millions of rows of data and automated some business processes. I helped and grew in a great team, earned my master’s degree and completed my road map. All those noisy questions for nothing. I’ve also found a new path because, like I said earlier, I’ll always be a kid, and kids never stop learning. Now my passions include Data Science & Big Data, Artificial Intelligence & Cognitive Science. Data became bigger, and companies like Google or Facebook open-sourced big data frameworks to ease the handling of Big Data. For those of you who didn’t get what a big data framework is: it’s just the infrastructure, the languages and the tools needed to store and process terabytes of data efficiently to find insights. (One laptop is not enough anymore; think about the cloud.) I believe that AI is the 4th revolution. It’s already transforming all industries and all aspects of our lives (working, learning, buying, driving…). But that’s a story for another day; in my next article, I’m going to dive more into Artificial Intelligence. For those of you who are still looking for their passion, I invite you to take a look at ikigai, a Japanese concept meaning “a reason for being.” The Japanese believe that by finding your ikigai, you can achieve a happier and more balanced life. So what are you waiting for? Find your dream and choose happiness. Feel free to share your thoughts, experiences and passions. And don’t forget to connect with me on LinkedIn; I’ll be more than happy to help you or talk about whatever interests you! #Neverstoplearning #GrowthMindset #MachineLearning #Failure #Passion #FindyourIkigai #ChildMindset 👶
How luckily I Failed
309
how-luckily-i-failed-1dc12f27ddac
2018-05-11
2018-05-11 15:07:44
https://medium.com/s/story/how-luckily-i-failed-1dc12f27ddac
false
1,531
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Redouane Chafi
Geeky,excited by AI ,Big Data, Machine Learning, and Technology… Interested in creating state-of-the-art solutions using ML to help transform our Society :).
933ce03fa212
redouanechafi
19
58
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-28
2018-07-28 16:05:15
2018-07-28
2018-07-28 16:11:20
1
false
en
2018-07-28
2018-07-28 16:11:20
0
1dc165f1338
5.50566
1
0
0
I’ve been teaching in Business Schools for almost 10 years, specifically at Esade; before that I spent 4–5 years in technical universities…
4
Teaching in a Business School — A personal experience.
I’ve been teaching in Business Schools for almost 10 years, specifically at Esade; before that I spent 4–5 years in technical universities, and before that I had a life in IT in the private sector for 20+ years. Learning how to teach is always a long, demanding process that asks for passion. When I started, I was very curious to know how teaching in a Business School differed from teaching in any other discipline. I guess that maybe some of you are equally curious about it. This is a story about my personal journey in this fascinating world of teaching. Probably when anybody mentions teaching in Business Schools you think of cases. The link is almost immediate. Are we still teaching with cases? What are cases anyway? I, like anybody else, learned to teach with cases, at HBS in the Participant Centered Learning course. There, I was fortunate enough to have people such as Clay Christensen as professors. And yes, I do use cases, however less and less. Cases are nothing more than short stories that allow us to situate the students in a problem and from there look for solutions, as if this problem were a real one. The main tool that we use in cases is a powerful resource that all humans master: our imagination. We put ourselves in the shoes of the decision maker and we live her fears, ambitions and opportunities. There are three things when teaching cases that are really important, but difficult to master:
1. Capturing the imagination of participants and putting them in the shoes of the decision maker.
2. The questions. The discussion is articulated through questions that inspire and have controversial answers.
3. The facilitation process. The dynamics and timing are decisive. The objective is to touch all the points on the agenda and to generalize them, using the case as the guiding narrative.
Cases have been working wonderfully; however, they are not without problems. Some problems relate to the contents: teaching with cases assumes that students know the contents, but sometimes you must face highly dense material, such as when I teach AI or Machine Learning. Also, you may need to present a roster of alternatives (e.g. uses of AI in business) that fits badly into the narrative of a single case. And sometimes you need to teach skills (e.g. programming in Python); you cannot teach someone how to ride a bike just by discussing it. But maybe the biggest problem you face when teaching with cases is also its biggest strength. You use the imagination as a resource, but we humans must do in order to learn; imagining is not enough. Therefore, learning is becoming increasingly experiential as knowledge becomes commonplace. Classes become co-created with students, and lectures are turned into videos or materials to be worked through individually. It’s a simple idea; it’s called flipped learning. My first contact with flipped learning was around 2010. At that time, we won several EU projects with Margarita Romero (now a full professor in Montreal) around gamification. What a surprise it was when, some years later, Esade adopted the methodology as its primary way of teaching. I would like to show you my particular version of flipped learning, and nothing is better for that than a real example. Next September we start a new edition of the MIBA, our Master in Business Analytics. MIBA is our most technical program; you’ll find there Python, R, AI, Machine Learning, Fintech, the AWS Associate Certificate, Hadoop, Spark, … The first course of the MIBA is Business in the Era of AI & Cloud.
Its objective is to explore how Big Data, AI and cloud are changing not only the way organizations compete but also their own fabric. It’s not a simple program: 80+ participants from all over the world, little technical knowledge and high expectations. This course is run in an intensive format, from 9am until 5pm, during five consecutive days. This format helps maintain the tension and the continuity but leaves little space for individual work. Therefore, it doesn’t work well for technical courses based on challenges. In this course we deal with:
1. Business in the Era of AI & Big Data. How AI & Big Data are changing organizations, the hype-loops that scale the change and the implications of these changes for the organizations themselves.
2. Innovation & Design Sprint. What is innovation? What is the role of technology in innovation? Who innovates? We use Design Sprints as a tool for innovation.
3. Platform Revolution. Platforms are changing the world. We look at them and particularly at their growth. We use the growth model to decipher their growth opportunities.
4. The Agile Revolution. Agile is changing not only software engineering but project management and the way that organizations are structured.
5. Infrastructure as Code. Cloud is changing everything: established companies and the way startups operate. We discuss agility, technical debt, IT infrastructures and serverless computing, and how all this is changing organizations.
A basic element of course design is how much of the contents goes into the session and how much is flipped before the session. This changes from session to session, but there are always some contents flipped. I normally use videos, either to present the contents that participants have to read/watch/work through before the session or to present the contents themselves. I strongly believe that presenting the contents is important. Articles always relate to a context that in many ways differs from today’s and elaborate conclusions that are often taken for granted today. But it is not only this; there is also the important aspect of being close to and connecting with participants. That’s why video presentations of the contents are important. I normally set up a forum to enable participation and discussion of the contents, and a quiz to promote assimilation (reading to remember vs reading to forget). Both count toward the participation grade of the class. In the session we go through the contents, sometimes briefly, sometimes in more depth. Because participants have worked with the materials previously, discussion flows easily. During the second part of the session we either have a speaker or we do a group project. When we do a group project, I normally divide the time into four parts. The first part is devoted to the project itself. The second is a peer-to-peer evaluation. In the third part the best groups present, and during the fourth we engage in a debriefing. This works well in the case of business subjects, but it doesn’t when you have to teach more technical ones or when projects cannot be finished in the short space of one hour (e.g. most machine learning challenges). The format is not always the same; it changes from session to session depending on the contents, their difficulty and the feasibility of doing the challenges in class. In our case, in Business in the Era of AI & Cloud and Platform Revolution there is a lot of content in class, while in Innovation & Design Sprint or The Agile Revolution most of the content is flipped.
Sometimes, such as in the case of Infrastructure as Code, where the exercises in class are relatively small, I do a quiz and we comment on it in class to wrap it up. This is of course a very business-oriented schema. When I teach Machine Learning, AI, Python or AWS it changes a lot, because the objective there is to build skills by challenging participants with projects of increasing difficulty; you can only learn to bike by biking. However, the objective is always the same: participants should be able to confront problems by themselves and assimilate the methods, developing their own approach to the subject. Personally, I try to design effective sessions where, by the end, we have been able to capture as much knowledge and assimilate as much of the skill as possible. We have to avoid the learning-to-forget sensation that results from many sessions and turn them into sessions where we assimilate the contents and make them our own in the most effective way. At the end of each session participants should be able to answer, without any doubt, the simple question: what did we learn today? And be happy with the answer. The happier they are with the answer, the happier I can be with my work. estevealmirall.com esteve.almirall@esade.edu
Teaching in a Business School — A personal experience.
1
teaching-in-a-business-school-a-personal-experience-1dc165f1338
2018-07-28
2018-07-28 16:11:20
https://medium.com/s/story/teaching-in-a-business-school-a-personal-experience-1dc165f1338
false
1,406
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
esteve almirall
Data Science & Innovation prof @Esade
dc4e1e8e390f
ealmirall
357
276
20,181,104
null
null
null
null
null
null
0
null
0
fcbf0f756e2c
2017-10-10
2017-10-10 22:28:12
2017-10-11
2017-10-11 22:29:35
12
false
en
2017-10-12
2017-10-12 16:07:54
18
1dc185b331ae
11.516038
26
3
0
Social media is overwhelmed with toxic trolls and humans are failing to keep them at bay. It’s time for AI to help them.
5
How To Empower Artificial Intelligence To Take On Racist Trolls
Social media is overwhelmed with toxic trolls and humans are failing to keep them at bay. It’s time for AI to help them.
White supremacists march with torches through the UVA campus in Charlottesville, VA, Friday, Aug. 11, 2017 (Mykal McEldowney/The Indianapolis Star via AP)
Many working in a mathematics-heavy field have a similar vice. We want to quantify everything, especially if the quantification process is going to be an extremely complicated and imperfect one. In fact, the level of difficulty is the main draw because it forces us to think about what makes up the very thing we’re trying to quantify and how we can objectively define and measure it in the real world. And when it comes to quantifying the bigotry that’s exploding on social media, this isn’t an abstract problem for the curious. As social networks have become a global phenomenon with billions of users, human moderation is failing to scale with the explosion of content and, on top of the human toll, is creating major business and public relations problems for the companies that built them. Just ask Twitter. After years of hemorrhaging cash, it’s been looking for a buyer interested in monetizing its users for its own purposes and willing to absorb the losses for a flood of new sales. But despite some interest and a few bids, the deals went nowhere for one simple reason: Twitter’s troll problem. And as the problem spreads to Facebook and the comment sections of news sites and blogs, Google has tried using its artificial intelligence know-how to help flag bigotry. But when used against actual hate, its system came up short on many counts, since it has to rely on keywords, and on the sequences in which they are used, to know how toxic they are. That’s the fundamental principle by which neural networks used for such problems are built, and it makes them rather limited. For example, let’s say someone posts a comment that says “all black people are thugs,” which is obviously racist as hell. Google’s neural net learned by analyzing phrases containing slurs like this, and their intended targets, again and again until it sank in that the keywords “black,” “people,” and “thug” put in close verbal and logical proximity to each other are, say, 90% toxic. So far, the system works, but let’s set the complexity bar higher. Consider another hypothetical post that says “black people should just play basketball,” which definitely has a racist connotation but doesn’t have slurs or obvious negatives for the system to react to. It sees nothing wrong in a combination of “black,” “people,” and “basketball,” yet the quote is obviously saying that black people should just be athletes, implying that other careers are off limits, and not just any athletes, but athletes in a sport designated for them. It’s a solid 90% or higher on the toxicity scale, but the algorithm sees little to be suspicious about other than the word “just” and flags it as 60% toxic at the very most. Simply looking at sequences of words and their logical distances from each other in a phrase has some problems as a reliable method for a bigotry detector.
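To see concretely why keyword-and-proximity scoring misses the second example, here is a minimal, hypothetical sketch in Python (the word pairs and weights are invented for illustration; real classifiers like Perspective learn such associations with trained neural networks rather than hand-set tables):

# Naive keyword-proximity toxicity scorer (illustration only).
# Real systems learn these associations from labeled data; the
# hand-set weights below exist purely to expose the blind spot.

TOXIC_PAIRS = {
    ("black", "thugs"): 0.9,   # explicit pairing: easy to catch
    ("people", "thugs"): 0.6,
}

def toxicity(comment: str) -> float:
    """Score a comment by the worst-scoring keyword pair it contains."""
    words = comment.lower().replace(".", "").split()
    score = 0.0
    for i, first in enumerate(words):
        for second in words[i + 1:]:
            score = max(score, TOXIC_PAIRS.get((first, second), 0.0))
    return score

print(toxicity("all black people are thugs"))                # 0.9: flagged
print(toxicity("black people should just play basketball"))  # 0.0: the dog whistle slips through

The slur-adjacent phrasing is caught easily, while the dog whistle scores zero because no loaded keyword pair ever appears.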
But how exactly do we remedy these glaring shortcomings?
The Problem With Dog Whistles
To try and answer that, we need to step way, way back and first talk about bigotry not as an algorithm, but as a social entity. Who exactly are bigots and what makes them tick, not by the dictionary definition one would expect to find in a heavily padded college essay, but by the practical, real-world manifestations that quickly make them stand out? They don’t just use slurs, or bash liberal or egalitarian ideas by calling them something vile or comparing them to some horrible disease; the bigots in question will quickly catch on to how they’re being filtered out and switch to more subtle or confusing terms, maybe even treating it like a game. Just note how Google’s algorithm goes astray when given quotes light on invective but heavy on bigoted subtext and what’s known in journalistic circles as dog whistles. Sarcasm adds another problem. How could you know on the basis of one comment that the person isn’t just mocking a bigot by pretending to be one, or, conversely, mocking those calling out his bigoted statements? Well, the obvious answer is that we need context every time we evaluate a comment, because two of the core features of bigotry are sincerity and a self-defensive attitude. Simply put, bigots say bigoted things because they truly believe them, and they hate being called bigots for it. Only sociopaths and psychopaths are perfectly fine with seeing themselves as evil; ordinary people don’t think of themselves as villains or want others to consider them as such. Even when they say and do terrible things we will use as cautionary tales in the future, they approach it from the standpoint that they’re either standing up for what they know to be right, or just doing their jobs. Even when confronted with irrefutable evidence of their bigotry, sexism, or evil deeds, they’d go as far as to say that they were driven to it because they were criticized so much, as if “I only started using ethnic slurs and calling for mass deportations because you called me a racist” were a legitimate defense. It’s a phenomenon explored in the famous Holocaust treatise The Banality of Evil, which argues that what we think of as evil on national and global scales can’t be explained by greed, jealousy, or even religious fundamentalism, but by a climate in which everyone is a cog in a machine, the stated goal of which is some nebulous “greatness.” No, this is not to draw a direct parallel between Trumpism and Nazism, because they have fundamentally different goals. The latter was based around ethnic cleansing and global domination; the former is based on isolationism and seems fine with cultural homogeneity and forced assimilation. But those who were taken in by Trumpism really don’t want to be reminded that this is still bigotry. In fact, the common message given to tech businessman Sam Altman on his interview tour of Trump’s America was that they detest being called bigots, bad people, or xenophobes, and warn that they will cling closer to Trump if they keep being labeled as such. I have no doubt that they don’t think they are bigoted or xenophobic, but it’s hard to take their word for it when it gets followed by a stream of invective about immigrants destroying culture, bringing crime and disease with them, and descriptions of minorities getting fortunes in government handouts while “real Americans” like them are tossed by the wayside by “un-American” politicians.
Social Scores And Shadow Ban Solutions
It’s the classic rule that any statement beginning with “I’m not racist, but” will almost always end up being bigoted, because the conjunction pretty much demands that something not exactly open-minded follow in order for the statement to make sense. This is very likely how the aforementioned algorithm knows to start raising its toxicity score: it detects a pattern that raises a red flag that something very, very negative is about to make its appearance, because it has seen enough examples in its training set. And this is ultimately what a successful bigotry-flagging AI needs: patterns and context. Instead of just looking at what was said, it needs to know who said it. Does this person frequently trip the bigot sensor, pushing it into the 55% to 65% range and above? Does this person escalate when called out by others, tripping the sensor even more? What is this person’s social score, as determined by feedback from other users in their replies, votes and likes? Yes, the social score can be brigaded, but there are tell-tale signs which can be used to disqualify likes and votes: large numbers of people from sites known for certain biases coming in to engage a certain way, correlations between some of these sites posting and a rush of users heavily skewing one way, and floods of comments that trigger the sensor. These are well understood problems that can be managed already. We should also track where users are coming from on the web. Are they coming from sites favorited and frequented by bigots to post stuff that trips the sensor? That’s also a potential red flag. A flow that tracks where the user came from, their reputation, their pattern of comments, and how they handle feedback won’t be a perfect system, but it’s not supposed to be. It will give users the benefit of the doubt, then crack down when they show their true colors. In the end, we should end up with a user with a track record and a social score reflective of it, and if that score is very problematic, the best practice would be to shadow ban this person. You will also be able to model the telltale signs of a verbal drive-by over time, flagging it before anyone sees it and taking appropriate automated action. Again, it would be impossible to build a perfect anti-abuse system, but a flow of data moderated by several purpose-built neural nets will definitely give you a leg up on toxic users. And certainly, for some users it will almost be a kind of perverse challenge to see how far they can push the system and become the commenter with the lowest reputation or the highest offense score. But for a number of others, it could actually be an important piece of feedback. These bigots may have thought of themselves as sober skeptics who worry more about facts than feelings, but after immersing themselves in Trumpist bubbles they were led to embrace bigotry through distorted, misleading data and outright lies. They still think of themselves as upstanding people without a hateful bone in their bodies. But a computer which can show them when what they said tripped a bigot sensor, how often, and the severity and degree of their rants might show them that no, they’re not the nice people they thought.
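Pulling those signals together, a minimal sketch of such a scoring flow might look like the following (all names, weights and thresholds here are invented for illustration; a production system would learn them from data and keep an audit trail for every decision):

# Hypothetical reputation flow combining per-comment toxicity scores,
# referral source and peer feedback. Weights and thresholds are
# invented for illustration, not taken from any real system.

from dataclasses import dataclass, field

FLAGGED_REFERRERS = {"known-hate-forum.example"}  # hypothetical watchlist

@dataclass
class UserRecord:
    comment_scores: list = field(default_factory=list)  # per-comment toxicity, 0..1
    peer_feedback: float = 0.0        # net votes/likes, brigade-filtered upstream
    referrer: str = ""

def social_score(user: UserRecord) -> float:
    """Blend history, feedback and referral source into 0..1 (higher = worse)."""
    if not user.comment_scores:
        return 0.0                    # new users get the benefit of the doubt
    history = sum(user.comment_scores) / len(user.comment_scores)
    referral_penalty = 0.15 if user.referrer in FLAGGED_REFERRERS else 0.0
    feedback_penalty = min(0.2, max(0.0, -user.peer_feedback) * 0.01)
    return min(1.0, history + referral_penalty + feedback_penalty)

def moderation_action(user: UserRecord) -> str:
    """Escalate from tolerance to shadow ban as the record worsens."""
    score = social_score(user)
    if score > 0.8:
        return "shadow_ban"
    if score > 0.6:
        return "quarantine_and_notify"  # transparent feedback, not a silent flag
    return "allow"

troll = UserRecord(comment_scores=[0.7, 0.9, 0.85],
                   peer_feedback=-40.0,
                   referrer="known-hate-forum.example")
print(moderation_action(troll))  # history + referral + feedback push this to shadow_ban

The point is not these particular numbers but the flow: repeated benefit of the doubt, then escalating, auditable responses as the track record worsens.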
Screen capture of Reddit threads for “white”
And being able to transparently present this feedback may be just as key to good anti-troll AI as monitoring sources of traffic, the actual content and users’ histories, and learning how to flag dog whistles from those histories and the input of other users and administrators. We don’t want something that merely flags abuse without our knowing how it works; we want something that shows us an audit trail to inform users and programmers of what happened, and that uses the same process we use to identify bigots: over time, in context, giving time and opportunity for the hood to slip and reveal what’s beneath. Then we can mute, quarantine, and give users who leave toxic or bigoted comments feedback on what we find so objectionable and why. It’s true there’s no law against hate speech or racism, but social media is not a government-run enterprise that must respect First Amendment rights and refrain from acting on speech so as not to violate the law. Trolls can, and do, build their own networks where they can exist in an anything-goes, I-live-to-offend environment, and their disappointment that they can’t harass “normies” does not have to be our problem.
Former Breitbart Editor Milo Yiannopoulos (Instagram)
Social Media Needs To Take Out The Trash
Social media was created and is maintained by private companies that don’t have to give bigots a major platform, and its users are fed up with trolls who sincerely believe not only that their opinions are offensive merely to “libtards, cucks, and kikes,” but that any disagreement with, and any consequences for, their actions and words violate their right to free speech. Since that isn’t so, we can finally do something about the popular refrain that the comment section is where a misanthrope goes to reaffirm his hatred of humanity, and where reason, along with civil discourse, goes to die a horrible death by a thousand insults. Google’s new Perspective algorithm is a good start, but it’s just one piece of a puzzle we can’t solve with the data points from a single comment, even with the most well-trained recurrent neural networks. Ultimately, we need to teach computers to follow a conversation and form an informed opinion of a person’s character, something that can’t be done by a single neural net heavily reliant on parsing language. Understanding how to do it may be one of the most important technical issues we tackle, or we will lose the web to armies of trolls, bots, and people really into goose-stepping to a strongman’s tune. Again, yes, the AI won’t be perfect. There will be false positives, and sarcasm will be flagged as racism while actual racism gets the occasional free pass. If humans sometimes have trouble telling the two apart, a computer is bound to make mistakes too. But we’re not aiming for perfect. We’re aiming for parity with human moderators who understand what bigotry is and what it sounds like, and who don’t make exceptions or follow arbitrary, absolutist rules like the ones Facebook set up for its moderation team. We won’t need to flag everyone who has said an objectionable thing in public; we just need to catch enough of the absolute worst offenders to start making a dent in their advance. There’s also a debate to be had at each network about how to handle the system’s output. Should they digitally corral bigots into their own corners and then tag them as toxic, like Reddit and some gaming communities do? Should they be shadow banned or booted entirely?
How do we handle repeat offenders who figure out how to game the system’s latest iteration? None of these are technical questions; they’re philosophical debates each social media company will have to have on its own. But they’ll need to have them. And soon. This article originally appeared on [ weird things ] on 03.01.2017 and has been expanded and slightly updated. If you’d like to help support projects like these, please consider joining our community on Patreon. Thanks again for everything. We love and appreciate you! Sincerely, The Rantt Team. Follow us on Twitter: @RanttNews | Join us on Facebook: /RanttNews
How To Empower Artificial Intelligence To Take On Racist Trolls
425
how-to-empower-artificial-intelligence-to-take-on-racist-trolls-1dc185b331ae
2018-05-21
2018-05-21 23:07:21
https://medium.com/s/story/how-to-empower-artificial-intelligence-to-take-on-racist-trolls-1dc185b331ae
false
2,694
Speaking Truth To Power
null
RanttNews
null
Rantt Media
beheard@rantt.com
rantt
DONALD TRUMP,JOURNALISM,POLITICS,NEWS,MEDIA
RanttMedia
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Greg Fish
techie, Rantt staff writer and editor, computer lobotomist
bca0b478216d
GregAFish
8,882
65
20,181,104
null
null
null
null
null
null
0
null
0
dabbecd89642
2018-01-04
2018-01-04 20:01:04
2018-01-05
2018-01-05 19:10:58
0
false
pt
2018-01-06
2018-01-06 01:44:04
8
1dc3e12e5205
4.324528
1
1
0
Recently, Tostão, one of my idols and a Folha columnist, published a column with the following title: The label of exceptional in…
4
Public debate needs a high dose of fuzzy logic Recently, Tostão, one of my idols and a Folha columnist, published a column with the following title: The label of exceptional in victory and terrible in defeat must end. In soccer, for those who don't know, there is an old culture of dichotomy: when you lose, it's a disaster and the problems come to the surface; when you win, everything is wonderful, down to the greatest of atrocities. "This does not mean they were exceptional in the victories and terrible in the defeats. There are dozens of factors involved in the results." Tostão, brilliant as he is, identifies something already recurrent in the contemporary world: clinging to well-established, certainty-laden positions is common behavior in public debate, especially when network content is biased. Public debate needs to become acquainted with a fundamental concept: fuzzy logic, a form of reasoning that breaks dichotomies and embraces a larger number of possible solutions. Binary variables as the basis of today's debate Contemporary debate, leveraged by the power of similarity-based content recommendation, has segmented groups and biased extremely divergent political positions. People tend to opt for either/or reasoning, A or B, or the famous 8-or-80, blindly and arbitrarily picking one side of the discussion. This reasoning generally follows binary values, that is, 0 and 1, considered false and true, respectively. It therefore follows Boolean logic, which uses logical operators (not, and, or) to produce true or false results. In practice, people usually take controversial topics such as abortion or central bank independence, already expect positions for or against, and fail to understand that the issue runs deeper. Fuzzy logic as a new way of thinking about public problems The consequence of Boolean logic for public debate is the hasty conclusions that some consensuses can cause. Choosing whether or not to reform Social Security, for example, will not necessarily bring the best solutions, because within the reform there are granular questions that are complex for the system as a whole. The minimum retirement age, transition rules, rural pensions, indexation to the minimum wage, and many other questions must be decided with great caution. When I began studying machine learning I came across fuzzy logic (also called diffuse or nebulous logic), which essentially grades results between true and false, allowing them to be classified in n ways, generally between 0 and 1. Thus, taking 0 as false and 1 as true, a result of 0.5 can represent a half-truth, and results of 0.9 and 0.1 represent almost true and almost false, respectively. Some practical examples of fuzzy logic in public debate A recurring debate nowadays is the legalization of marijuana, often seen as a mechanism for easing public violence or, on the other hand, for worsening health problems involving narcotics. Drugs are rarely seen as a market, an extremely complex one, with acquisition and distribution channels that need to be carefully analyzed by agents who want to craft public policy in the area. 
Knowing the incidence rates of users relative to the tons of drugs passing through a place, understanding the chains of command of these markets, which have international ramifications and could move pieces in a possible legalization, and understanding the capital structure needed to guarantee a safe and legal transition for drug suppliers, among other mechanisms that support diagnosis, is fundamental to building a decent public policy. And this comes from a mere student who understands absolutely nothing about public security policy. Imagine what someone who does would say. A more arid debate, because it is highly technical, is the one concerning central bank independence. Both sides of the debate are categorical: one wants independence to excess and the other wants political control at any cost. However, what the actors in the debate neglect is a deeper discussion of governance, from appointment rules to exchange-rate policy decisions. Again, I understand little (more than public security, though) about monetary policy, but I can already spot obstacles to a shallow debate in this area. There are some interesting academic works applying fuzzy logic ("fuzzification") to public policy. I leave a few in case you are interested: Proposal for a Municipal Environmental Quality Index: http://www.sorocaba.unesp.br/Home/Pos-Graduacao/PosCA/dissertacao-fabio-silva.pdf Predicting Students at Risk of Dropping Out Using Data Mining Techniques: http://www.br-ie.org/pub/index.php/sbie/article/view/1585 Fuzzy Implication in the Area of Policy and Policy Making; a Short Non-Mathematical Introduction for Policy Makers: http://www.macrothink.org/journal/index.php/jpag/article/viewFile/804/610 Buros ad referendum Buros ad referendum is a space I have dedicated in my texts to suggesting, in a merely hypothetical way, some ideas that might solve the problems discussed here. I reiterate: nothing more than hypotheses, grounded in nothing (or almost nothing). Segmentation of legislative matters In the legislature, within lengthy discussions of complex matters, there are mechanisms in the internal rules of the houses that allow parts of bills to be voted on separately, technically called "destaques" (separate votes): "The destaque is a plenary procedure that has also come to be used in committees. It is a motion whose purpose is to vote separately on part of a matter, detaching it from the main proposition or from accessory ones." Internal Rules of the Chamber of Deputies as applied to committees. Destaques, given a minimum number of supporters (which varies by legislative house), allow the more controversial parts of a bill to be voted on separately, so that a more defined consensus among party blocs can be reached. A study from a training course at the Chamber of Deputies found that, between 1997 and 2006, 25.4% of all legislative matters had some destaque, whether in committee or on the floor of the Chamber of Deputies. Whether that number is high or low is relative, but the figures show that over that ten-year period destaques increased, along with the tendency to segment legislative debate. Fuzzy logic, therefore, is well served by instruments like legislative destaques, which make debate less polarized and add flexibility to legislative matters that are complex and large. Optimal stratification of debates As already mentioned here, there is a certain need to stratify debate and manage to segment decisions more precisely. 
If we wanted to take this further, would it be possible to create algorithms for the relevance of each debate, taking into account the ratio of the sample of engaged individuals to the degree of consensus on the matter (measured by the fuzzy value of each decision "leaf")? Perhaps; not everything is yes or no :) Well, if you want to keep following the publishers, subscribe. If you don't want to keep following but found the subject interesting, clap. If by chance you didn't like the text but got the idea, again: subscribe. If you didn't like anything at all, comment and curse me out.
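To make the graded-truth idea concrete, here is a minimal sketch of a fuzzy membership function; the policy example and thresholds are illustrative assumptions, not taken from the article.

```python
def fuzzy_membership(x: float, low: float, high: float) -> float:
    """Linear membership: 0 below `low`, 1 above `high`, graded in between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

# Toy example: how strongly a proposal "reduces pension spending",
# graded instead of a binary yes/no (the thresholds are invented).
savings_in_billions = 42.0
degree = fuzzy_membership(savings_in_billions, low=10.0, high=100.0)
print(f"truth degree: {degree:.2f}")  # 0.36: partially true, not 8-or-80
```

The point of the sketch is exactly the column's: a claim can be 0.36 true, which forces the debate down to the granular questions instead of a single for-or-against vote.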
Public debate needs a high dose of fuzzy logic
1
o-debate-público-necessita-de-uma-dose-alta-de-lógica-fuzzy-1dc3e12e5205
2018-04-13
2018-04-13 02:15:36
https://medium.com/s/story/o-debate-público-necessita-de-uma-dose-alta-de-lógica-fuzzy-1dc3e12e5205
false
1,146
Hypotheses based on absolutely nothing. Don't take it seriously, just read.
null
null
null
Popota Burocrata
pedro@4mti.com.br
popota-burocrata
null
null
Fuzzy Logic
fuzzy-logic
Fuzzy Logic
34
Pedro Andrade
null
4f1d6b9047b2
pedrokeyloger
14
27
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-10-26
2018-10-26 14:05:29
2018-09-25
2018-09-25 19:45:39
3
false
en
2018-10-26
2018-10-26 14:13:28
9
1dc4263c124b
4.55
5
0
0
Automation has made huge advances in the past few years, from driverless vehicles to voice-activated AI that can perform searches for you…
5
Will My Finance Job Be Stolen by a Robot? Automation has made huge advances in the past few years, from driverless vehicles to voice-activated AI that can perform searches for you. But these advances have their dark side: as robots become more sophisticated, they’ll be able to take over jobs currently performed by humans — including finance jobs. Here’s what Deloitte believes will happen to finance jobs in the near future: What can automation do now? Artificial intelligence is no longer an artifact of the misty future; it’s happening today. Robotic process automation (RPA) uses software to automate basic business processes, and it’s become a hugely popular trend in finance. Common examples of finance-related RPA include: Bookkeeping platforms that import your bank and credit card transactions Payroll tools that calculate employment taxes and automatically deposit them for you Tax preparation software that fills out parts of your tax return based on existing information Purchasing and expense bots that ask employees the necessary questions, forward the data to the approver, and issue a virtual card upon approval Finance jobs, especially at lower levels, are full of data entry and other basic tasks. Any task that involves repetition or taking data from one place and putting it into another can likely be automated with today’s technology. What will automation do in the near future? Oxford University and Deloitte predicted a 95% probability that chartered accountants (the international equivalent of CPAs) will be automated out of existence over the next twenty years. The study also found that the median number of enterprise finance employees has declined by 40% since 2004, partly due to increased automation. As artificial intelligence continues to advance, new software will be able to tackle more and more complex tasks. Consider Australian start-up Hyper Anna: it offers a virtual “data scientist” tool for financial services companies that provides analysis on revenue forecasting, supply chain management, and similar processes. Hyper Anna can even write code and provide sophisticated reports, charts, and insights from basic data. A study by The Boston Consulting Group and the China Development Research Foundation concluded that artificial intelligence would eliminate 390,000 jobs in financial support functions by 2027, but that it would improve efficiency of the surviving jobs by 45%. The study report further predicted that “basic functional work, such as bookkeeping, report generation and data analysis” will be entirely automated within the next decade, and that AI would also be able to take on some of the burden of compliance by spotting red flags and alerting the appropriate authorities. The good news for finance professionals is that automation is certain to create new jobs even as it eliminates old ones. Indeed, a Robert Half study predicts that automation will actually create more jobs than it replaces; the new jobs will largely involve managing the AI systems and using the information they provide. If your job is slated to be replaced by technology, then it would be a natural move to take on the role of managing that technology. How employees can prepare for finance automation Automation may be on the verge of taking over certain finance functions, but others are beyond the scope of what even the most advanced AI can manage. For example, financial planning requires not only financial management skills but also emotional intelligence. 
Financial planners must understand how their clients feel about money in order to come up with a plan that will not only produce good results but will also be acceptable to the client. That’s something that robots simply can’t do. Finance jobs that require sophisticated high-end analysis and prediction, such as project managers and finance directors, are also highly unlikely to be automated away. Finance professionals in roles that are at risk from automation would be wise to start acquiring new skills now, so that as automation continues to develop, they’ll be able to transition to a new role with a minimum of disruption. One option would be to learn the skills you’d need to run the software that’s likely to take over your current tasks. Companies from SMB to enterprise-level will need finance professionals to help them manage new and ever more sophisticated accounting automation tools. For example, accounting software has already automated away a number of bookkeeping tasks that used to be done by humans, but that’s just opened the door for former bookkeepers to become experts on managing that accounting software for employers and clients. The role of QuickBooks Consultant didn’t exist until QuickBooks became sophisticated enough to require an expert to set it up and manage it. Even the most advanced software isn’t smart enough to run itself; it needs a human to manage it, and you can easily learn the skills you need to be that human. How CFOs can prepare for finance automation Given its immense benefits, CFOs and other finance leaders will need to figure out how and when to incorporate automation into their existing processes. A good place to start is by finding out which processes are the most time-consuming for your finance department, and then seeing how much of those processes could be automated. In a survey conducted by IMA (Institute of Management Accountants), respondents listed the following accounting processes as taking the most time and effort: You could conduct a similar study at your own company, or simply start with the processes listed at the top of the chart. The first three items listed (balance sheet account reconciliations, variance analysis, and banking credit card reconciliations) are all tasks that can be automated in part or in full. Once you’ve established which processes would benefit most from AI, the next step is to consult with your IT department to find out what would be involved in implementing automation. The future of finance jobs No one can predict exactly how automation will affect the finance professions, but experts have made some predictions based on the available data. Repetitive, data-entry-intense finance jobs will be the first to be automated out, followed by more complex but rules-based tasks such as tax return preparation. If your job falls into one of these categories, now is the time to start acquiring the skills you’ll need for your future job: managing the robots that will take over your dullest tasks. Originally published at www.teampay.co on September 25, 2018.
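As a concrete illustration of the repetitive, rules-based work the article says RPA absorbs first, here is a minimal sketch of a transaction categorizer; the rules, merchant names, and CSV layout are all hypothetical, not taken from any product mentioned above.

```python
import csv
import io

# Hypothetical category rules; a real RPA tool is configured in a similar spirit.
RULES = [
    ("payroll", ["adp", "gusto"]),
    ("software", ["aws", "github", "slack"]),
    ("travel", ["uber", "delta", "marriott"]),
]

def categorize(description: str) -> str:
    d = description.lower()
    for category, keywords in RULES:
        if any(k in d for k in keywords):
            return category
    return "needs human review"  # automation hands ambiguous cases to a person

# Toy bank export standing in for an imported statement.
statement = io.StringIO(
    "date,description,amount\n"
    "2018-09-01,AWS EMEA,120.40\n"
    "2018-09-02,Corner Bakery,18.25\n"
)
for row in csv.DictReader(statement):
    print(row["date"], row["description"], "->", categorize(row["description"]))
```

The "needs human review" branch is the job that survives: someone still has to maintain the rules and resolve whatever the bot can't.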
Will My Finance Job Be Stolen by a Robot?
14
will-my-finance-job-be-stolen-by-a-robot-1dc4263c124b
2018-10-26
2018-10-26 16:20:52
https://medium.com/s/story/will-my-finance-job-be-stolen-by-a-robot-1dc4263c124b
false
1,060
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Teampay
We make things for teams in labs. Not teams in labs, but things in labs for teams not in labs. We also like grammar.
f820ceb0acdf
teampay
49
71
20,181,104
null
null
null
null
null
null
0
null
0
ec10e05abbed
2018-06-08
2018-06-08 10:52:06
2018-06-08
2018-06-08 10:59:38
0
false
ko
2018-06-08
2018-06-08 10:59:38
18
1dc45967bcc2
4.228
0
0
0
Greetings from the Cortex team.
5
Cortex Project Update #4 Greetings from the Cortex team. Here is an update for Cortex community members on development progress so far, covering technology development, community engagement, exchanges, and more. Technology Update The tech team has been busy carrying out several key tasks and has achieved the following. 1. Blockchain We extended and optimized the account-based blockchain system. We designed and implemented the storage and retrieval of AI model metadata. Details such as the implementation of a decentralized storage scheme for AI model parameters require further investigation. 2. Smart Contracts Having completed the extension of the existing smart contract programming language, the upper layer introduces an AI model invocation interface and compiles it into the CVM (Cortex Virtual Machine) executable AI instruction set. 3. Consensus Testing We completed multi-node consensus tests on go-Cortex with the AI instruction set added. Once validation in test mode passes, further development and testing will proceed in distributed scenarios. 4. Research Feasible high-TPS PoW schemes were investigated. AI Competitions Community developers have also achieved outstanding results representing Cortex Labs in numerous top AI competitions. The main purpose of participating in these competitions is to secure high-quality, top-tier AI models for the Cortex AI ecosystem. These AI models are the core of the Cortex ecosystem and will be a key foundation for DApp development. Peiwen Yang FashionAI Global Challenge — Apparel Attribute Tag Recognition Date: April 21, 2018 Rank: 15/2945 Source code: Link Burness Duan Kaggle TalkingData Date: March 6, 2018 ~ May 1, 2018 Rank: 36/3967 Source code: Link Bo Li IJCAI (International Joint Conference on Artificial Intelligence) Date: March 6, 2018 ~ May 1, 2018 Rank: 5/5204 Zhenzhe Ying Kaggle TalkingData Date: March 6, 2018 ~ May 1, 2018 Rank: 24/3967 Source code: link Community Update Ziqi Chen, founder and CEO of Cortex Labs, gave an interview with Crypto Valley at the WDAS conference in Singapore. Ziqi gave an in-depth introduction to the Cortex project and explained, in relation to Cortex, why on-chain training is not feasible. Candy Program Since the community reward program began, official channels such as Twitter, Reddit, and Telegram have continued to grow, and lively discussions are under way. Click here to participate. Upcoming Events Cortex's next meetup is in Korea! It will be held in Seoul on June 8 from 7:30 pm to 9:30 pm. Cortex's core team will discuss the blockchain industry, AI, and digital content. After that, Cortex will attend the CPC Crypto Developers Conference on June 11-12, which will gather more than 1,000 developers from high-tech companies such as Google and Facebook. Professor Whitfield Diffie, an advisor to Cortex, will also be there! - 2018 Blockchain Korea Conference in Korea, June 7, 2018 Link - First meetup in Seoul, Korea, June 8, 2018 Link - CPC Crypto Developers Conference, Mountain View, California, USA, June 11-12, 2018 Link - OKEX Global Meetup Tour 2018, Taipei, Taiwan, June 14, 2018 Link Exchange Update Cortex has been officially listed on DEx.top, the leading decentralized exchange built by Bitmain. Cortex and DEx.top share the same values of "transparency," "fairness," and "decentralization," and will pursue groundbreaking innovation. Recognition of each other's values and projects is the premise of this cooperation, and more collaboration is expected. More Information To learn more about Cortex and discuss its technical concepts through official channels, feel free to contact us at any time. Website: http://www.cortexlabs.ai/ Whitepaper (English): http://www.cortexlabs.ai/Cortex_AI_on_Blockchain_EN.pdf Twitter: https://twitter.com/CTXCBlockchain Facebook: https://www.facebook.com/CTXCBlockchain/ Reddit: https://www.reddit.com/user/CTXCBlockchain/ GitHub: https://github.com/CortexFoundation Medium: https://medium.com/@CTXCBlockchain Announcements (official): https://t.me/CortexLabs Announcements (Korean): https://t.me/CortexLabsKorean Official community: https://t.me/CortexBlockchain
Cortex Project Update #4
0
cortex-프로젝트-업데이트-4-1dc45967bcc2
2018-06-08
2018-06-08 10:59:40
https://medium.com/s/story/cortex-프로젝트-업데이트-4-1dc45967bcc2
false
536
AI on Blockchain - The Decentralized AI Autonomous System
null
CTXCBlockchain
null
Cortex Labs
support@cortexlabs.ai
cortexlabs
AI,BLOCKCHAIN,CRYPTOCURRENCY,CTXC,CORTEXLABS
CTXCBlockchain
한국어
한국어
한국어
84
BITZANTIN
BITZANTIN believe that blockchain technology is an innovative.
c12007643eec
bitzantin
23
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-03
2018-05-03 18:51:50
2018-05-03
2018-05-03 19:06:55
1
false
en
2018-05-04
2018-05-04 11:52:52
5
1dc5d65d069
1.128302
1
0
0
A public service announcement
5
Stop Calling It “The Law of Large Numbers”! A public service announcement It’s not “The Wheat and Chessboard Problem” either (but I like the imagery!) Just a quick PSA here… When you refer to “The Law of Large Numbers”, what you really mean is “The Logistic Principle”: “[The Law of Large Numbers] has nothing whatever to do with growth. What it actually says is that as a large number of samples of a random variable are taken from a population, the mean of the samples approaches the expected value of the population. In other (and simplified) terms, the larger your sample the better your estimate of the actual value… the basis of all sampling, polling, and inferential statistics… “So what do we call the principle that the growth rate of things tends to slow as they get larger? The idea is kind of obvious, which may be why it doesn’t have a name [so] I propose we call it the logistic principle.” — Steve Wildstrom (via Techpinions, highlights courtesy of Annotote) Depending on the context, you could also insert “Diminishing Returns to Scale” or “Sustainable Growth Rate”. Are you frustrated with the way you experience news and research? Are you spending too much time doing everything but reading? We’re inviting people to help test what we’re working on at Annotote. If you’re interested, sign up here: Annotote | leave your mark All signal. No noise. Annotote is just a better way to read: Highlights by you and for you -- on all the blogs, news, and research you need…Preregister now!
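The quoted definition is easy to verify numerically: as the sample grows, the sample mean of a fair die approaches its expected value of 3.5, which is what the Law of Large Numbers actually says. A minimal sketch:

```python
import random

random.seed(0)
EXPECTED = 3.5  # expected value of a fair six-sided die

for n in (10, 100, 10_000, 1_000_000):
    # draw n rolls and compare the sample mean to the expected value
    rolls = [random.randint(1, 6) for _ in range(n)]
    mean = sum(rolls) / n
    print(f"n={n:>9,}: sample mean={mean:.4f}, error={abs(mean - EXPECTED):.4f}")
```

Nothing in that output says anything about growth slowing as things get bigger, which is the point of the PSA.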
Stop Calling It “The Law of Large Numbers”!
2
its-not-the-law-of-large-numbers-1dc5d65d069
2018-05-22
2018-05-22 02:06:30
https://medium.com/s/story/its-not-the-law-of-large-numbers-1dc5d65d069
false
246
null
null
null
null
null
null
null
null
null
Statistics
statistics
Statistics
5,433
Anthony Bardaro
“Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away...” 👉 http://annotote.wordpress.com
c79a365c5ac1
AnthPB
1,038
102
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-07
2017-09-07 09:39:44
2017-10-03
2017-10-03 03:06:00
26
false
en
2017-10-03
2017-10-03 03:06:00
28
1dc76b3dd365
14.385849
2
0
0
During GDC 2017 at the Art Direction Bootcamp, Andrew Maximov, Lead Technical Artist at Naughty Dog, gave a talk on the future of art…
5
4 Technologies To Change Art Production During GDC 2017 at the Art Direction Bootcamp, Andrew Maximov, Lead Technical Artist at Naughty Dog, gave a talk on the future of art production for video games. It was a prognosis, detailing four of the top technological advancements that are going to drastically change our approach to game development. These changes cause a lot of fear in the artistic community. I know this as I interact with that community every day. The introduction of 3D scanning, simulation, and procedural generation is changing the way we treat game development. These new elements may remove the need for parts of the game production pipeline that have been standard practice for ages. Megascans and SpeedTree already are taking away jobs from foliage artists. Environment artists can now scan entire buildings in a day. And developers used Houdini to build an entire town in Ghost Recon: Wildlands. Technology is changing the way we live. It’s changing the way we work. And if you really want to freak out about it, read Nick Bostrom’s book Superintelligence. But, we’re not about to dive into deep philosophical discussions — we’ll leave that to Elon Musk. Instead, we’ll discuss Andrew Maximov’s informative talk while adding our own commentary on the subject as well as naming a handful of companies that already are influencing the way we treat production today. In doing so, we hope to lessen a community’s fear about the future and demonstrate that there is a light at the end of the tunnel for those working within the video game industry. Optimization Automation Optimization is a fairly common struggle for game developers. Game artists have been grappling with technical restrictions for ages. Back in the NES days, color itself was a technical resource. It had to be carefully managed because older hardware was unable to visualize many colors on a screen at one time. If you want to check out how a game artist’s tools looked back then, take a look at the Sega Digitizer System. Plenty of compromises had to be made back in the day when these technical restrictions were largely prevalent throughout the industry. Today, color is no longer a technical issue. But, this poses a question: What other aspects of our game production pipeline will become optimized in the future? There are many aspects of the game production pipeline that look to be going away: Manual Low to High Poly, UV Unwrap, LoDs, and Collision. In the future, games will display everything and anything game developers want to portray on a screen. Many of those items already are being automated today. Developers are automating the level of details and improving UV unwraps. 
The more this happens, the faster chunks of the pipeline will become obsolete. And frankly, we believe this is a beneficial trend moving forward because these processes have very little artistic value. If you’re interested in learning more about the ways technology changes the game production pipeline, feel free to listen to lectures by Michael Pavlovich and Martin Thorzen. These technical artists can teach you plenty about the way tools make game production easier. Capturing Reality Capturing reality is nothing new in the world of video games (remember the original Prince of Persia?) but has become a bit controversial within the industry. Back in 1986, Jordan Mechner, the creator of Prince of Persia, and his brother went outside to snag something other than fresh air. Mechner captured his brother running around a parking lot with a video camera, and then he rotoscoped the footage pixel by pixel to paint what he had captured into the game. Thus, the concept behind all those new up-and-coming scanning techniques is something the industry has been familiar with for quite some time. Max Payne (2001) utilized facial scan techniques and Sam Lake’s face model for the titular character with amazing results. Today, these scans can be applied to a character’s entire body — that’s how Norman Reedus and Guillermo del Toro ended up in Death Stranding! DICE is one of the first major companies to regularly utilize photogrammetry on large scale productions. In doing so, DICE cuts down the development production time and overall cost of game asset creation. Battlefield 1 and Star Wars Battlefront were mainly produced with photo scanning techniques. The company covers this development process extensively in its official blog. Kenneth Brown and Andrew Hamilton’s talk from GDC in 2016 also highlights the influence and importance of photogrammetry — by turning to photogrammetry, they managed to cut down the production time of Star Wars Battlefront in half (and more with automation!). Technically, there’s nothing in the world that we can’t scan! We can scan humans, animals, organic environments, you name it — all we need is enough pictures of the object for a proper scan. Reflective surfaces have presented the biggest problem thus far but there are some particular setups and special photogrammetry sprays (which can be used as a coating for the object) that help solve this issue. It’s possible to scan entire environments with photo-realistic results. The only limitation here is memory, which is only going to grow. Over time, game developers will be able to extensively exercise this technology. Ready At Dawn did an astonishing job of material scanning for The Order: 1886. You can learn more about it from a video presentation conducted by Jo Watanabe and Brandi Parish. Clearly, there are concerns and questions that still revolve around photogrammetry. It’s clunky to implement into a regular development pipeline and game developers won’t always have the resources to capture images from different locations around the world. By the time photogrammetry reaches mass penetration within the industry, however, most of these issues will be resolved. Fundamentally, the familiar workflow of producing everything from scratch is changing because we now have the technology to scan objects and place them into video games. With this technology, we’ll start treating the world as movie directors, using the world around us as one giant movie set. 
You’ll be able to light an object, dress a person, or modify a structure, but you’ll still have to create and communicate art at the end of the day. Put simply, the art of video games isn’t going anywhere. Look at Blade Runner for a similar example from the film industry! It heavily leaned on real-life Los Angeles but was actually perceived as a cyberpunk masterpiece. Another reason why photogrammetry will become a major part of our game production pipeline is because of its reasonable cost. It’s going to gain traction because it’s much cheaper to use an existing scan of an entire environment rather than produce one by hand. A great way to check out photogrammetry and scanned materials is by getting a little glimpse of both through Megascans. Megascans has amassed a huge set of materials and vegetation, which you can use in game production. If you want to gain an in-depth look at scanning, please check out these talks by Oskar Edlund, James Busby, and James Candy in Introduction to 3D Scanning. Parametrization, Simulation, and Generation The world contains a variety of organic systems, which can be computer simulated using existing technologies. Two years ago, Epic Games made a simulation of a forest’s ground for its Kite demo. That system places rocks, bushes, and other expected forest objects within an environment as well as operates according to species rules that determine overlaps: relative size, shade, altitude, slope, and more. Learn more about this production from a talk Epic Games gave at GDC in 2015. Computer systems can simulate natural processes to generate art. Developers from Horizon Zero Dawn took this a bit further by making an amazing system that could simulate the game’s world just as if it was built by an artist. Learn more about this technology in one of our 80.lv reports. If you found that interesting and want more, be sure to check out this video of procedural asset placement in Decima Engine. Looking at SpeedTree, one can see that we’re moving away from working with individual vertices or polygons as virtual objects to working with them as if they were real objects. We want to control an object’s height and density. With Substance Painter, we’re no longer working on pixels and textures but rather applying materials and brush strokes with a keen eye to realism. The same approach is taken with respect to physicality. We treated objects as if they were virtual, but further down the line, we’ve started treating our objects as things we would expect to pluck from the real world. We want to populate our worlds with objects that have physics and interact with each other in dynamic manners. This allows us to create more credible worlds while also saving precious time. Why bother wasting time sculpting every single fold on a curtain and worrying about how each will move when you can produce this effect with a physics simulation? Another clear choice for implementing this technology is with in-game characters. Imagine a situation where we scan a bunch of people and capture their facial features. Then, we blend between various extreme features and automate the representation of characters’ faces based off this diverse range of facial features. Ten years from now, it wouldn’t be surprising if most indie studios were using an automated system akin to this. Black Desert already lets players create believable and interesting characters in a modularized and procedural manner. 
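A toy version of the "species rules" placement described above can be sketched in a few lines; the terrain function, thresholds, and spacing rule are invented for illustration and not taken from any engine mentioned here.

```python
import math
import random

random.seed(7)

def height(x: float, y: float) -> float:
    """Synthetic heightmap standing in for real level geometry."""
    return 10 * math.sin(x * 0.1) * math.cos(y * 0.1)

def slope(x: float, y: float, eps: float = 0.5) -> float:
    """Magnitude of the terrain gradient, estimated by finite differences."""
    dx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    return math.hypot(dx, dy)

placed = []
for _ in range(2000):
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    # Species rules (illustrative): this "tree" avoids steep slopes and
    # high altitude, and keeps minimum spacing so canopies don't overlap.
    if slope(x, y) > 0.6 or height(x, y) > 8:
        continue
    if any((x - px) ** 2 + (y - py) ** 2 < 4.0 for px, py in placed):
        continue
    placed.append((x, y))

print(f"placed {len(placed)} trees")
```

Even a sketch this small shows the shift the article describes: the artist's input becomes the rules (slope tolerance, spacing, altitude band), not the individual placements.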
We’re going to reach a point within the industry when knowledge of computers is unnecessary because individuals will be able to interact with virtual objects just as they would with real world objects. And this is not just an art-related issue as it’s also occurring with programming. Blueprints in Unreal allow people to build entire games without a programmer. These technologies are building a comprehensive functionality that helps people express their intents faster and with fewer complicated layers of execution. AI Assistance and Machine Learning Like many other previous concepts discussed within this article, AI assistance is nothing new to the video game industry. We’ve all been using AI-assisted technology — it’s been integrated with consumer goods, like your smartphone, for quite some time. Google’s new messaging app, Google Allo, is connected to a neural network. It tracks answers and remembers how you replied to questions. In the end, the app proposes intelligent answers, and we’re going to see similar approaches to this in the world of game development soon. It’s easy to imagine a situation where a neural net is going to analyze your previous game and all the art choices you’ve made. Then, the neural net will provide you some solutions, which you can utilize, modify, or ignore. Google has already developed deep-learning neural nets that have been used for image recognition. When you search for an image in Google, the neural net scans the image and displays words that match your search pattern. But, for this new system, the company turned the neural net around. When you put words into it, the neural net produces images. This basically means that the system is capable of some form of creativity, driven by the information humans fed it. This is not the stuff of science fiction — it’s already being used in games! During SIGGRAPH 2016, the guys from Remedy talked about the way they’ve used a neural-based animation solver. They taught a neural net to convert a raw video into an almost game-ready animation. In general, deep learning has become a much bigger part of animation solvers today. And these systems are closer to being common than we all think. These systems will still present game developers choices and artistic control, but they will take away much of the manual labor associated with our professions. During the same panel, Frostbite talked about similar concepts. Tim Sweeney of Epic Games is also a big aficionado of deep learning. As a matter of fact, someone at Epic Games is probably investigating this as you’re reading our article! A huge leap forward for applying deep learning systems to game art was made in the area of texture generation. A number of companies worked hard to make this improvement. Recently, NVIDIA announced a closed beta for a new solution to help game artists that solves several problems with procedural textures. Artomatix is another big company working in this direction. 80.lv has witnessed a number of demos at GDC from the company which allows game artists to rebuild hundreds of textures in a matter of seconds, upgrade textures to higher requirements, or build a stylized version of textures in an easy fashion. We believe Allegorithmic will be the next big player to support deep learning as a part of the growing family of Substance tools (there hasn’t been an official announcement from the team though). So, What’s Left? 3D scanning, neural nets, and smart algorithms are all great. But, where do people factor into this equation? 
What’s left of the artistic process? At the moment, when developers have a creative problem they come up with a solution and execute it, which can be a time-consuming process. However, with all this new technology on board, the execution process will become much faster. Hopefully, these technologies will get developers from a general idea to executing a final product as quickly as possible. (Before and after comparison.) The core skills we rely on as artists are still as valuable and as essential as before. Still, certain skills that surround this imperfect hardware or software might become redundant when the hardware and software systems are perfected with time. Intent will act as the basic building block of any art production or interface. We’ll have to build tools that facilitate creating content and objects that are more real than ever before. We’ll treat the content as something from the real world, and we’re going to interact with that content just as we would interact with it in the real world. This actually feeds very well into the artistic process as well. How might it all look in the future? You’ll have your high-level intent where you’ll generate an entire game’s space. And you’ll also have regional intent, where you’ll just point to a certain place and pick a biome for your trees to grow. Also, there would be an object-level intent where you would go in and modify particular elements, such as changing color, adding wear, and so on. This is similar to the way movies operate. One wouldn’t say movies aren’t artistic or that they lack meaning — every shot in a good movie plays some role in contributing to the creation of an overall purposeful work of art. At the end of the day, it’s not so much about how a piece of art is created but rather how a piece of art impacts an individual in a meaningful manner. In essence, creating art is a process of assigning meaning. No matter how optimized the creative process becomes, the artist’s intent and meaning is still crucially important. Game developers still have to think about a message, theme, or set of emotions they want to communicate to the audience in an effective manner. If a work of art doesn’t resonate with people, it won’t be successful as there’s nothing connecting the viewer or player to the work of art. There still is a bigger question that needs to be answered: How will these changes impact game art production in the video game industry? Once again, film acts as a fine industry for reference. The film industry has gone through a technological revolution that democratized production as well. In the past, it was far more time consuming, laborious, and expensive to create a film. But, most people have a digital camera or a smartphone today, meaning more people have the opportunity to create films in a rather inexpensive and convenient fashion. For the video game industry, it means the following things: 1) Cheaper Production Why do you need to model everything when you can just import plenty of scanned locations and polish them up? The same will happen with animation. Maybe in a couple of years, we’ll have a holographic phone that captures the likeness of real people and will immediately create characters in-game based off this likeness. 2) Smaller Teams We’ll need smaller teams. Game developers could do more with fewer people, meaning there’s going to be a heightened need for generalists. We are used to specialized roles but at some point in the future it’s not going to make sense to follow this trajectory. 
At some point, everything will be automated and we’ll be able to exercise more artistry rather than focus on technical communications. 3) Production Times Production times didn’t change that much in the movie industry after it experienced the previously mentioned tech revolution. Good stories and strong gameplay will still take time to mature. You might be able to produce an entire game world in three days but that doesn’t mean you’ll have a great game. You’ll still have to iterate, playtest, and hone in-game systems, which take large portions of time to accomplish. 4) Photo-Realism is the New Indie At the moment, it’s cheaper to create more stylized games. But, this fact will change in the near future. By that time, it also will most likely be dirt-cheap to create games with photo-realistic graphics. 5) Jobs Diluted, Not Lost I don’t believe many jobs will be lost as a result of this impending technological revolution. A similar event happened in the movie industry and it didn’t shrink too much. On the contrary, more value and movies have been generated since then because it’s far simpler to create than in the past. I’m convinced that the same trend will unfold in the video game industry. Currently, game making is a very technical process. Eventually, it won’t be such a technical process and more people will have a chance to express themselves through creating video games. Technology is just a tool. Every day I come home, and I do some modeling and texturing. I realize that I might not have that much time to do this anymore. But, if this means that so many others will get a voice, then I’m all for it. We’re not inspired by someone doing UVs. We’re inspired by stories that profoundly changed us within virtual worlds where we want to visit, explore, and sometimes live within. This is what an advancement in technology will grant us. Yes, we’re going to lose some things. Yet, I can’t wait for this future to come. Andrew Maximov, Lead Technical Artist, Naughty Dog Look for more articles at 80 Level.
4 Technologies To Change Art Production
2
4-technologies-to-change-art-production-1dc76b3dd365
2018-03-31
2018-03-31 09:04:19
https://medium.com/s/story/4-technologies-to-change-art-production-1dc76b3dd365
false
3,269
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
80Level
Best place for game developers, digital artists, animators, video game enthusiasts, CGI and VFX talents to learn about new workflows, tools and share their work
197de38e8c0d
EightyLevel
707
107
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-02
2018-09-02 11:15:51
2018-09-02
2018-09-02 12:01:50
1
false
pt
2018-09-02
2018-09-02 12:01:50
0
1dc77bc6b5e8
3.679245
0
0
0
One possible approximation to the Law and Economics approach is to consider it anchored in the concept of "transaction costs," and that among its…
4
“VICTOR” (The STF's Neural Network): An Approach from the Economic Analysis of Law, by Ihering Guedes Alcoforado. One possible approximation to the Law and Economics approach is to consider it anchored in the concept of "transaction costs," with one of its main goals being the reduction of such costs, thereby expanding the possibilities for the transactions and interactions needed for the efficient allocation of resources. A good share of these transactions cannot complete their full cycle mediated by the market, generating demand on the judiciary as an alternative sphere for the allocation of rights and duties. This fact makes the judiciary an object of economic analysis with the same status as the market. That is, from the economic perspective, the judiciary is considered an alternative to negotiation, whether via the market, which may involve a large number of partners and a standardized object, or via so-called Coasean bargaining, when the number of partners is small. But since a large number of transactions cannot be carried out via the market mechanisms referred to above, the function is taken on by the judiciary. Given this reality, the efficiency expected of the market comes to be demanded, in magnified form, of the judiciary, not least because it is the last alternative for the transaction to be completed through the allocation of rights and penalties. Hence the relevance of transposing the analytical schemes for modeling "failures of the firm" to the sphere of the judiciary as a productive system of rulings. This can be done from a micro-institutional point of view, considering the judiciary as a "plant" (a unit transforming inputs into outputs), or from a macro-institutional perspective as a "firm" (a multi-level control unit, since it involves both the plant-level process and the relations among plants), the level at which the mechanisms for controlling the process reside, lodged in both the Constitution and the procedural codes, with their inflation of appeals. One possible analytical framing of the judiciary from the perspective of the economic analysis of law is to consider the judicial system, or one of its parts, such as the STF, as a "plant," on the basis of which it can be evaluated against the economic principle of efficiency, setting as a goal the reduction of both its direct costs (financing the system) and its indirect costs, associated with the effects of restricting or even halting transactions while a case is in progress. An emblematic example of these indirect costs are those arising from the paralysis of public works by judicial actions. But it must be kept in mind that litigation can, in good part, be avoided by drafting "incomplete contracts" suited to the reality of the transaction under consideration, and by avoiding inadequate contractual regimes. 
In other words, the problems (direct and indirect costs) arising from transactions that face difficulties being completed via the market cannot be treated exclusively from the perspective of the judiciary's inefficiency in processing demand, since a good part of the problem lies in the legislation that shelters these transactions, as well as in the design of the contracts, which opens another window of research possibilities in the field of law and economics. Having made these introductory clarifications, I call attention to the fact that "Victor," the STF's neural network, named in honor of the late Victor Nunes Leal, presents a good research object from the perspective of the economic analysis of the judicial system in general and of the STF in particular. A possible economic analysis of "Victor" involves understanding the judiciary in general, and the STF in particular, as an information-processing unit, which from the economic point of view implies modeling it as a "plant," without excluding the possibility of treating the judicial system as a "firm," a unit of process control whose rules imply a prior institutional choice lodged in both the Constitution and procedural law, characterized by an inflation of appeals. Considering the STF as a "plant" (a unit that transforms four types of information into rulings) means it transforms legal and para-legal informational inputs, which can be classified into four groups. The first type of information is procedural, i.e., what is contained in the case files. The second type is legal, i.e., the law that can be applied to the case under consideration. The third type is doctrinal information. The fourth type has administrative importance, since it allows the extraordinary appeals that reach the STF to be linked to particular themes of general repercussion. It should be noted that "Victor," the neural network under construction by the STF, will at first process only the last type of information listed above. The possible economic effects are direct and indirect. The direct effect is the reduction of the costs of processing procedural information, by automatically identifying which appeals reaching the STF can be classified as having general repercussion, creating the conditions for improving the construction of the STF's docket and prioritizing appeals of general repercussion. The indirect effect is that, by prioritizing appeals of general repercussion in its rulings, the court reduces the costs of processing information in the lower courts. Finally, with the announcement of "Victor," not only does a new frontier open in empirical studies of the judiciary, but a process of technological innovation in information processing begins, one that should advance to incorporate the other three types of information (procedural, legal, and doctrinal) needed to ground rulings, heralding a new reality in our judicial system.
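Victor's first task, as described above, amounts to routing incoming appeals by theme. As a purely illustrative sketch (this is not the STF's actual model, and the snippets below are invented placeholders, not real case data), a baseline text classifier for that routing step could look like this with scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder snippets standing in for appeal texts; labels mark whether
# the appeal matches a recognized general-repercussion theme.
texts = [
    "dispute over pension indexation to the minimum wage",
    "contract breach between two private parties over delivery",
    "constitutionality of a tax on electronic payments",
    "neighbor dispute over property boundary fencing",
]
labels = [1, 0, 1, 0]  # 1 = general-repercussion theme, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_appeal = ["challenge to the tax regime for digital payments"]
print(model.predict(new_appeal))  # e.g. [1] -> route to the priority docket
```

The economic reading of the sketch mirrors the article's: the classifier lowers the per-case cost of the fourth, administrative type of information, which is why the direct savings show up first in docket construction.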
“VICTOR” (The STF's Neural Network): An Approach from the Economic Analysis of Law, by Ihering Guedes…
0
victor-a-rede-neuronal-do-stf-um-approach-da-análise-econômica-do-direito-by-ihering-guedes-1dc77bc6b5e8
2018-09-02
2018-09-02 12:01:51
https://medium.com/s/story/victor-a-rede-neuronal-do-stf-um-approach-da-análise-econômica-do-direito-by-ihering-guedes-1dc77bc6b5e8
false
922
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ihering Guedes Alcoforado
null
9f8d85b5f6d3
iheringguedesalcoforado
37
300
20,181,104
null
null
null
null
null
null
0
null
0
a747a9e16c1c
2018-03-20
2018-03-20 16:40:24
2018-03-20
2018-03-20 18:08:33
3
false
en
2018-03-20
2018-03-20 18:08:33
31
1dc823c089c5
3.266981
50
2
0
Additions to the Earth Engine API from the past year
5
What’s New in Earth Engine: New Functions and Features I am often asked about what’s new in the world of Earth Engine. Most of our users know about the big things, like script modules in the Earth Engine Code Editor, which we launched last fall to help our users to organize their code and share their own libraries. However, we’re also constantly making smaller additions and improvements to the platform, too. In this post I’ll gather up some of the lesser-known new Earth Engine features from the past year. A few of the tabular datasets now available in the Earth Engine public data catalog. One pretty significant new feature is the ability to store tabular spatial data — specifically, tables of points, lines, and polygons with attributes — directly in Earth Engine. You can upload your own Shapefiles, and we’ve started growing a catalog of commonly-used public data tables. Some of these are illustrated above, including the US Department of State’s Large Scale International Boundary Polygons, the US Census Bureau’s TIGER roads database, and UNEP’s World Database of Protected Areas (both polygons and points). We also added a new function called FeatureCollection.style() that you can use to style and visualize this sort of tabular data. We also added a number of new functions to help with machine learning tasks. For example, we added a new tool for regularized linear regression, called ee.Classifier.gmoLinearRegression(), which you train the same way as the other classification and regression tools in the ee.Classifier package. We also added a function called ee.Image.stratifiedSample() that performs stratified random sampling, a technique often used to help build training datasets for machine learning. Along similar lines, we added a function called ee.Image.random() that generates images with pseudorandom pixel values, which you can use as a primitive to construct your own random sampling schemes. A simple example of stratified random sampling: choosing 20 points from each of three classes. For linear algebra experts, we added several new matrix decomposition functions that you can apply to Array objects or to each pixel in array-valued Image objects. These functions, which have names like ee.Array.matrixCholeskyDecomposition(), implement Cholesky, singular-value, LU, and QR decomposition. You can use these functions to implement a wide variety of efficient linear models and other matrix operations. Speaking of matrices, we added a new reducer, ee.Reducer.autoHistogram(), that computes a histogram over its input data and returns the result as an Nx2 matrix. Because the result is an array, you can use this reducer together with ee.ImageCollection.reduce() to turn an image collection into an image where every pixel is a histogram of the input values at that pixel location. One column of the result matrix contains the lower bounds of each histogram bucket, and the other column contains the corresponding histogram counts. Sometimes new functions fill in basic gaps that our users tell us about. For example, we added two other new reducers, ee.Reducer.last() and ee.Reducer.lastNonNull(), which select the final elements in a list or collection, complementing the existing first() and firstNonNull() reducers. We also added an ee.Image.blend() algorithm that performs simple alpha blending on a pair of images: it’s equivalent to putting the two images into a collection and calling mosaic(), but it’s simpler and a little more efficient. Simple image compositing with ee.Image.blend(). 
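As a rough illustration of the sampling function named above, here is a minimal sketch using the Earth Engine Python client, mirroring the 20-points-per-class figure; the MODIS asset ID, band name, and region are assumptions chosen for the example, not taken from the post.

```python
import ee

ee.Initialize()

# Assumed inputs: a MODIS land-cover image whose band holds integer classes.
classes = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
region = ee.Geometry.Rectangle([-122.6, 37.2, -121.8, 37.9])

# 20 random points per class within the region, as in the figure above.
samples = classes.stratifiedSample(
    numPoints=20,
    classBand='Land_Cover_Type_1',
    region=region,
    scale=500,
    geometries=True,  # keep point geometries on the output features
)
print(samples.size().getInfo())
```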
Finally, we’ve extended the capabilities of some of our existing functions. For example, we updated the Landsat.calibratedRadiance() and Landsat.simpleComposite() functions to support the new Landsat Collection 1 data that the USGS began producing at scale last year. We also added a new skipEmptyTiles option to our image export and map export functions, to help users work with large sparse datasets more efficiently. Similarly, we added a new tileScale parameter to the ee.Image.sample() function and its relative ee.Image.sampleRegions(), analogous to the existing parameter on functions like ee.Image.reduceRegion(). Advanced users who are performing complex or memory-intensive image processing operations can use this new parameter to tune how those operations are parallelized when sampling the results. In an upcoming post I’ll also review some of the new datasets that we’ve added to the Earth Engine public data catalog over the past year. If these updates are useful to you, don’t forget to sign up for the Earth Engine developers mailing list (also known as the Help Forum), where we announce new features and datasets periodically. You receive an invitation to the forum when you register for Earth Engine, but if your invitation expired please don’t hesitate to click the “Contact the owner” link to ask to be re-invited.
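And a minimal sketch of the tileScale parameter mentioned above, applied to a memory-hungry reduction; the SRTM asset, region, and parameter values are illustrative assumptions.

```python
import ee

ee.Initialize()

# Assumed inputs: a large region whose reduction might hit memory limits.
image = ee.Image('USGS/SRTMGL1_003')
region = ee.Geometry.Rectangle([-125, 25, -65, 50])

# Raising tileScale trades parallelism for lower memory per tile, which can
# let a memory-intensive reduction complete instead of failing.
stats = image.reduceRegion(
    reducer=ee.Reducer.mean(),
    geometry=region,
    scale=1000,
    maxPixels=1e10,
    tileScale=4,
)
print(stats.getInfo())
```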
What’s New in Earth Engine: New Functions and Features
128
whats-new-in-earth-engine-new-functions-and-features-1dc823c089c5
2018-05-14
2018-05-14 07:01:55
https://medium.com/s/story/whats-new-in-earth-engine-new-functions-and-features-1dc823c089c5
false
720
For developers, scientists, explorers and storytellers
null
googleearth
null
Google Earth and Earth Engine
null
google-earth
MAPPING,GIS,SCIENCE,SPACE,EARTH
googleearth
Machine Learning
machine-learning
Machine Learning
51,320
Matt Hancher
Co-founder and Engineering Manager, Google Earth Engine
889897006cb
mdhancher
238
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-16
2018-03-16 10:50:25
2018-03-16
2018-03-16 10:54:53
3
false
en
2018-03-16
2018-03-16 10:55:57
5
1dc83605a0f
3.69717
3
0
0
Hi guys! Remember these science fiction films about the future shot like 10 years ago? Many of you don’t even think that thanks to modern…
5
Effect Network: AI Gets Even Closer Hi guys! Remember those science fiction films about the future shot some 10 years ago? Many of you don’t even realize that, thanks to modern technological achievements, one can say we live in the future now. Innovations such as the Internet of Things, Artificial Intelligence, and Blockchain have become an integral part of our everyday life without us even being surprised. Artificial intelligence is being applied more and more often (for example, for recognizing people in photos, or in smart cars). However, this is still a new field where many problems have yet to be solved. Different systems, even highly intelligent ones, need human intervention and are exposed to failures. Besides, in order to ensure correct operation, artificial intelligence systems need big amounts of data. This is where decentralization can help, as it provides the opportunity to obtain more data from all the participants of the system. In addition, only a select few companies are engaged in AI systems development, which hampers progress in this sector. One of the teams working to solve these problems is the Effect.AI project team. Effect.AI aims at creating a decentralized ecosystem with the use of AI. Read ahead to find out the advantages of the Effect project, why it is of interest to me, and why I think it is a good idea for investing. Effect.AI is the project that will make you stop thinking of AI as some far-off technology accessible only to a few “big minds” in high-tech laboratories. According to the team, the network being developed will provide a whole range of AI-related opportunities never offered before. These opportunities will be implemented in stages, divided into three interconnected phases. Development Roadmap for the Effect Network Just briefly about these stages (for more detailed information, see the Whitepaper): 1. Mechanical Turk. Creating a workforce market where anyone could request a task related to AI and find a person to perform it. Thanks to the nature of the network, these two parties could be in different corners of the world. Each interaction will be governed by a smart contract. 2. Effect.AI Smart Market is a decentralized market where people can purchase AI systems/services or sell them. During these two phases, data gathering and the use of AI algorithms will be decentralized. However, the computation itself will be distributed only during the third phase: 3. Power. Every system is by default exposed to failures. One tiny mistake in computation can cause huge imbalance and data losses. Thus, human intervention and feedback are needed in any case. At this stage, the need for human intervention drops to zero. As you see, the idea is really comprehensive and capable of filling all the gaps in the modern AI sector. For all operations in the network, a special EFX token will be used. In order to raise money for the implementation of this huge project, the team is launching a token sale on the NEO platform. For this event, 40% of tokens will be allocated; the soft cap is 4,280,000 euros, the hard cap is 14,820,000 euros. Contributions are accepted in NEO and Gas. See the table for details. The reasons why I personally consider Effect.AI to be a good idea for investing: 1) As you have already understood, this is a non-trivial idea. While the rest of the ICO projects offer another exchange, another killer of Ethereum, or something related to gambling, Effect.AI offers something really new and interesting. 
2) A strong team and respected advisers, whose ranks have been joined in recent weeks by such well-known people in the crypto industry as Charlie Shrem, one of the earliest bitcoin supporters and founder of the bitcoin exchange BitInstant, and Tony Tran, co-founder and chief technical officer of Bee Token. 3) Good news for those who want to invest: there is no pre-sale with huge bonuses, which means equal terms for everyone (though there is a small 10% bonus for early investors). 4) The project hard cap, for this kind of idea and such a strong team, is only 14,820,000 euros, which indicates that the team is trying to raise exactly as much as is necessary for development. I think they will reach this cap without problems, and there will be demand for the tokens when they hit the exchanges. 5) I would also like to mention the notable analysts and resources that think highly of this project: it has a high rating on ICOdrops, the most important (in my view) site of its kind; it received good feedback from CrushCrypto, especially in terms of long-term investment; and the Crypto Brad team also speaks well of it. Please consider this to be only my personal opinion; I’m not a financial analyst, so conduct your own research. Useful Links Web-site Whitepaper Telegram Twitter Facebook
Effect Network: AI Gets Even Closer
28
effect-network-ai-gets-even-closer-1dc83605a0f
2018-03-28
2018-03-28 09:30:52
https://medium.com/s/story/effect-network-ai-gets-even-closer-1dc83605a0f
false
834
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Crypto-evangelist
null
1ea677216e4a
cryp_evangelist
701
442
20,181,104
null
null
null
null
null
null
0
null
0
661161fab0d0
2018-08-29
2018-08-29 05:09:59
2018-08-29
2018-08-29 11:22:22
3
false
en
2018-08-29
2018-08-29 11:58:42
10
1dc84ecd850a
4.112264
2
0
0
Artificially intelligent systems taking on human competitors is a grand tradition of computer science; thankfully, we’re still in the early…
5
OpenAI Five- New Dota 2 Champions Artificially intelligent systems taking on human competitors is a grand tradition of computer science; thankfully, we’re still in the early stages that don’t feel quite like War Games yet. For its part, OpenAI has been trying its hand at Dota 2 competitive play, and its bots are starting to win against some skilled opponents under certain conditions. AI gamer bots have successfully defeated a team of highly ranked, 99.95th-percentile Defense of the Ancients (Dota) 2 players. Dota is one of the highest-paid e-sports games in the world, with prize pools reaching one million dollars. In the game, two teams of characters called “heroes” attempt to defeat the opponent’s home base. In a tournament performed in front of a live audience and 100,000 livestream viewers, the AI team rapidly won two of three Dota games (it lost only when the audience selected its heroes, a lineup that put it at a disadvantage). While evenly matched games normally take around 45 minutes, the AI team won the first game in 21.5 minutes, and the second in just under 25. AI Gameplay for Lifelike Challenges According to the co-founder of OpenAI, Greg Brockman, the company is working on AI that can defeat human gamers because the battle experience in Dota is so multifaceted and complex that it approximates the chaos of the real world. Brockman describes this endeavour: “You have imperfect information, you have team work, you have these exponential combinations of different heroes and items, and you have to be able to deal with all of that,” The bots essentially have to develop a form of intuition that allows them to improvise in unusual situations and collaborate with one another. Bill Gates thinks Elon Musk’s AI bots working together to beat humans in Dota 2 is a ‘huge milestone’. OpenAI Bots Take on Tougher Opponents Every Day OpenAI, an artificial intelligence research startup supported by Tesla and SpaceX CEO Elon Musk, unleashed an AI-powered bot last year at The International, the biggest annual Dota 2 tournament. The bot faced off and won against Danil Ishutin (more popularly known as Dendi), considered to be one of the best Dota 2 players in the world, in 1-on-1 matches. OpenAI will come full circle in a couple of months, when the OpenAI Five will challenge the best Dota 2 teams in the world at this year’s The International. Elon Musk-backed startup uses AI to beat pro gamers. The OpenAI bot has defeated one of the best Dota 2 players in the world, Dendi. The bots are powered by five AI neural learning networks, which prepped for game day by playing through 180 years’ worth of games against other bots every day. The victory “is a step towards building AI systems which accomplish well-defined goals in messy, complicated situations involving real humans,” OpenAI wrote. Later this month, the AI bots will compete against a team of professional players in the Dota 2 International championships. Meanwhile, the company will continue exploring the far more challenging task of bringing AI skills into the unpredictability of the real world. This is Bigger than AlphaGo Unlike decidedly more turn-based games like chess or Go, Dota 2 is a title that requires plenty of real-time decision-making. While Google’s AlphaGo sometimes took minutes to decide how to respond to a particularly well-crafted move, OpenAI Five, as it’s called, does not have that luxury, as its opponent would be making moves in the meantime. These games run at 30 frames per second for an average of 45 minutes, resulting in about 80,000 frames, of which the system analyzes one-quarter.
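For readers who like to check the arithmetic, here is a minimal sketch of how those numbers work out (the variable names are ours, chosen for illustration, not OpenAI's):

fps = 30                     # frames per second
match_minutes = 45           # average match length
observe_every = 4            # the system looks at every 4th frame

total_frames = fps * 60 * match_minutes       # 81,000 frames, i.e. "about 80,000"
observations = total_frames // observe_every  # ~20,250 observations per match
print(total_frames, observations)

That works out to roughly one observation every 133 milliseconds, which is why the real-time pressure is so much harsher than in turn-based Go.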
Artificial Intelligence in Video Games Why are researchers teaching artificial intelligence to play video games, of all possible tasks? The reason is simple: if AI systems are able to learn the skills needed to play video games, they will be able to use those skills to solve real-world problems. Some of these problems, such as managing groups of people or determining the quickest solution, resemble things that players encounter in video games. It also helps that mastering Dota 2 is a challenging task for AI. Chess and Go, board games that have seen their fair share of AI applications, may end in fewer than 40 and 150 moves, respectively. Meanwhile, bots need to make around 20,000 moves in a 45-minute Dota 2 match. “As long as the AI can explore, it will learn, given enough time,” OpenAI chief technology officer Greg Brockman told Quartz. If artificial intelligence can learn to play Dota 2 well enough to beat humans, then the potential for the technology is vast. The milestone of AI systems collaborating with one another, as demonstrated by the OpenAI Five, opens up a wide range of applications for the technology. AI algorithms may be able to team up to accomplish tasks even faster, and may even work with humans in achieving goals. Conclusion At the end of the day, a lot of this “Human versus AI” excitement is a bit overblown; these are games being approached by insanely powerful computer programs that can do one thing and only one thing. A lot of the media narrative around how artificial intelligence is already beating human experts is valid in a certain light, but it kind of undermines the complex work being done by the people building these programs. This all probably plays into OpenAI’s interests, however, which seem to be focused quite a bit on driving home how quickly we’re progressing toward artificial general intelligence. It’s probably going to be a while before an AI-controlled system starts trouncing opponents in Fortnite, but for a fixed-perspective strategy game like Dota 2, there is room for boundary-pushing, hyper-focused AI programs to bulk up on gameplay knowledge and perhaps deliver wins. This article was originally published on https://blog.yellowant.com/
OpenAI Five- New Dota 2 Champions
51
openai-five-new-dota-2-champions-1dc84ecd850a
2018-08-29
2018-08-29 11:58:42
https://medium.com/s/story/openai-five-new-dota-2-champions-1dc84ecd850a
false
944
where the future is written
null
null
null
Predict
predictstories@gmail.com
predict
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Riti Dass
Is curiosity and forgetfulness the worst combination out there?….. I have probably found out, but can’t remember.
43bec5e1d4e8
ritidass29
117
1
20,181,104
null
null
null
null
null
null
0
null
0
f1a763fc7443
2018-04-17
2018-04-17 14:02:56
2018-05-16
2018-05-16 06:42:27
6
false
en
2018-05-16
2018-05-16 06:42:27
8
1dcabac4ab9c
6.595283
25
0
0
Deep learning is a subfield of Machine learning (ML) that is continuously changing the world around us.
5
What is Deep learning and Why you should know about it! Deep learning is a subfield of Machine learning (ML) that is continuously changing the world around us. From driverless cars to speech recognition, Deep learning is making everything possible. It has become a hot topic in industry as well as academia and is affecting nearly all industries related to ML and Artificial Intelligence (AI). What is Deep learning? According to Wikipedia: “Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of Machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.” Deep learning is inspired by the human brain and how it perceives information through the interaction of neurons. It is a branch of Machine Learning and is implemented through large Artificial Neural Networks (> 100 layers). Training ANNs for deep learning requires lots of labeled data as well as huge computing power. So why “deep” learning? For starters, it requires extensive learning through a large interconnected network. As Jurgen Schmidhuber put it in his paper Deep Learning in Neural Networks: An Overview: “At which problem depth does Shallow Learning end, and Deep Learning begin? Discussions with Deep learning experts have not yet yielded a conclusive response to this question. […], let me just define for the purposes of this overview: problems of depth > 10 require Very Deep Learning.” The “deep” also refers to the hundreds of hidden layers of Artificial Neural Networks (ANN) used in Deep learning. There are different types of ANNs for different scenarios, which will be discussed later. Why you should know about Deep learning Nearly every industry is going to be affected by AI and ML, and Deep learning plays a big role in it. No matter if you are in healthcare or legal, chances are you may get replaced by a highly autonomous robot one day. Deep learning has improved significantly in terms of accuracy over the years and is still evolving. Understanding its nuances will help us all. Intelligence is the ability to adapt to change. ~ Stephen Hawking Some of the wide applications of Deep learning are: Self-driving cars: A self-driving car is the ultimate evolutionary goal of developing ADASes (Advanced Driver Assistance Systems), to the point when there is nobody left to assist. Visual tasks, including, but probably not limited to, lane detection, pedestrian detection and road sign recognition, are solved with deep learning. The importance of deep learning for autonomous driving systems can be illustrated by the fact that Nvidia maintains long-term relationships with car manufacturers, working on embedded and real-time operating systems designed exactly for these purposes. Google Self Driving Car Humanoids: In a similar fashion, Deep learning is making interaction between robots and humans simpler day by day. We already have personal agents like Alexa and Siri, which listen to our queries and answer intelligently. The great advances in NLP and image processing enabled by Deep learning are the reason behind such efficient interaction. Looking at the rate of growth of robotics and Deep learning, autonomous robots are not that far away. A good example is Sophia, a human-like robot by Hanson Robotics. 
Sophia from Hanson Labs Healthcare: The adoption of Deep learning in healthcare is on the rise, solving a variety of problems for patients, hospitals and the healthcare industry overall. Research has shown that Deep Neural Networks can be trained to produce radiological findings with high reliability by learning from archives of millions of patient scans collected by healthcare systems. These kinds of advancements will soon change the health and personal care landscape by replacing doctors with AI-empowered expert systems and autonomous robot surgeons. Space travel: As featured here, Steve Chien and Kiri Wagstaff of NASA’s Jet Propulsion Laboratory have predicted that in the future, the behavior of space probes will be governed by AI rather than human prompts from Earth. Again, the tremendous ability of Deep learning to find patterns in raw data comes into play. Companies like SpaceX are already using the power of AI for sending probes into space. Soon, with the help of AI, humans may inhabit other planets! This should be enough to give you an idea of the vast applications of Deep learning. Unless you’re planning to head into the woods, sooner or later you’ll get to interact with DL in some manner. Now let’s have a look at how it works! Implementation of Deep learning Given that Deep learning is implemented by large Artificial Neural Networks (or simply Neural Networks, NN), let’s find out more about them. What’s an Artificial Neural Network? An Artificial Neural Network is a network of interconnected artificial neurons (or nodes) where each neuron represents an information-processing unit. These interconnected nodes pass information to each other, mimicking the human brain. The nodes interact with each other and share information. Each node takes input and performs some operation on it before passing it forward. The operation is performed by what is called an activation function (a non-linearity). It converts the input into output, which can then be used as input for other nodes. Artificial Neural Network The links between nodes are mostly weighted. These weights are adjusted based on the performance of the network: if the performance (or accuracy) is high, the weights are left alone, but if the performance is low, the weights are adjusted through a specific calculation. The leftmost layer of neurons is called the input layer and, similarly, the rightmost layer is called the output layer. All the other layers in between are called hidden layers. In a nutshell, an artificial neuron takes input from other nodes, applies the activation function to the weighted sum of its inputs (the transfer function) and then passes on the output. A threshold term (called the bias) is added to the weighted sum so the neuron is not stuck passing zero output. Artificial Neuron
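To make this concrete, here is a minimal sketch of a single artificial neuron in Python (NumPy only; the weights, inputs and bias are made-up values for illustration):

import numpy as np

def relu(x):
    # ReLU activation: pass positive values through, zero out the rest
    return np.maximum(0.0, x)

def neuron(inputs, weights, bias):
    # Transfer function: weighted sum of inputs plus bias,
    # followed by the non-linear activation
    return relu(np.dot(weights, inputs) + bias)

x = np.array([0.5, -1.2, 3.0])   # inputs coming from other nodes
w = np.array([0.8, 0.1, -0.4])   # connection weights
b = 0.2                          # bias term

print(neuron(x, w, b))           # output passed on to the next layer

A full network is just many of these units wired in layers, with the weights adjusted whenever the network's performance is poor.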
To learn more about Neural Networks, check NEURAL NETWORKS by Christos Stergiou and Dimitrios Siganos. They’ve done a good job. How are Neural Networks used for Deep learning? For Deep learning, many Neural Network layers (> 100) are connected in feedforward or feedback style to pass information to each other. Feedforward: This is the simplest type of ANN. Here the connections do not form a cycle and hence there are no loops. The input is fed directly towards the output (in a single direction) through a series of weights. These networks are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down. Feedforward ANN Feedback (or recurrent): The connections in a feedback network can move in both directions. The output derived from the network is fed back into the network to improve performance (loops). These networks can become very complicated but are comparatively more powerful than feedforward ones. Feedback networks are dynamic and are extensively used for a lot of problems. Now let’s discuss some specific types of ANN extensively used for DL. Most popular ANNs used for Deep Learning 1) Multilayer Perceptrons: These are the most basic feedforward Neural Networks. They generally use non-linear activation functions (like tanh or ReLU) and compute the loss through Mean Squared Error (MSE) or log loss. The loss is backpropagated to adjust the weights and make the model more accurate. They are generally used as part of a bigger deep learning network. Read more about Multilayer Perceptrons here: Intro to Multilayer Perceptrons. 2) Convolutional Neural Networks: Convolutional Neural Networks (ConvNet or CNN) are similar to ordinary Neural Networks, but their architecture is specifically designed for images as input. In particular, unlike a regular Neural Network, the layers of a ConvNet have neurons arranged in 3 dimensions: width, height, depth. They are particularly suitable for spatial data, object recognition and image analysis using multidimensional neuron structures. One of the main reasons for deep learning’s recent popularity is the success of Convolutional Neural Networks. Some of the common usages of Convolutional Neural Networks are self-driving cars, drones, computer vision and text analytics. Read more about the dynamics of Convolutional Neural Networks here: Convolutional Neural Networks. 3) Recurrent Neural Networks: RNNs are also feedforward networks, however with recurrent memory loops which take input from the previous and/or same layers. Here connections between nodes form a directed graph along a temporal sequence. This gives them a unique capability to model along the time dimension and over arbitrary sequences of events and inputs. In simpler terms, at any given instant the network maintains a memory of everything up to that moment and can therefore predict the next step. The most common type of RNN model is the Long Short-Term Memory (LSTM) network. RNNs are used for next-word prediction and grammar learning. Read more about them here: Intro to RNN. This post aimed to provide a brief introduction to the massive field of Deep learning. I have skipped the mathematical details of some of the concepts discussed to facilitate understanding. Thanks for reading! Stay tuned for more articles.
What is Deep learning and Why you should know about it!
198
what-is-deep-learning-and-why-you-should-know-about-it-1dcabac4ab9c
2018-06-13
2018-06-13 17:38:58
https://medium.com/s/story/what-is-deep-learning-and-why-you-should-know-about-it-1dcabac4ab9c
false
1,496
Our community publishes stories worth reading on development and design. Android | Blockchain | Machine Learning
null
mindorks.nextgen
null
MindOrks
contact@mindorks.com
mindorks
ANDROID,MOBILE,MOBILE APP DEVELOPMENT,ANDROID APP DEVELOPMENT,BLOCKCHAIN
MindorksNextGen
Machine Learning
machine-learning
Machine Learning
51,320
Aditya Rohilla
Grad Student @ ASU | Software Engineer | Thinker | Life enthusiast. Check out www.adityarohilla.com
e7001b2c84f4
adityarohilla
98
168
20,181,104
null
null
null
null
null
null
0
null
0
282aaf41e776
2018-03-22
2018-03-22 19:17:50
2018-04-19
2018-04-19 12:34:27
0
false
en
2018-04-19
2018-04-19 12:34:27
0
1dcabacbed8
1.807547
0
0
0
Every time I hear of artificial intelligence, commonly referred to as AI, I always think of either The Terminator movies, or Wall-E. Whether…
1
Blog Post 4 Every time I hear of artificial intelligence, commonly referred to as AI, I always think of either The Terminator movies or Wall-E. Whether it is the extreme of cyborgs and AI trying to wipe the earth of all humans, or people becoming lazy and depending on AI to do everything for them, I am always skeptical of the negatives it brings. My biggest plausible fear with AI is that humans will become too dependent on it. We will become the people from Wall-E who sit in hovering chairs all day, have machines do everything for us, and are not physically active except for picking up food to eat. I suppose my scope of AI is like that of microeconomics: I look at how AI affects me personally, as opposed to looking at the bigger picture and how AI affects us all. On the large scale, I am sure there are great applications for AI technology; I just pray that it is kept in check. As the future of technology continues to grow, so will AI and the responsibilities of those researching and implementing it. Judith Newman uses the story of her son, Gus, a child with autism, to illustrate that technology is not all that isolating. Newman notes that we live “In a world where the commonly held wisdom is that technology isolates us”. In a time when technology is becoming more and more advanced, technology has engrossed many people, and some say it has diminished our ability to interact face to face. Newman offers a different viewpoint, one that challenges this at its core. Siri makes sure Gus is polite, which can be seen when his brother encouraged him to say expletives to Siri. She responded “Now, now” along with “I’ll pretend I didn’t hear that.” She is teaching him how to be nice to people and how to make nice conversation, in essence saying that bad words should not be used in conversation. Ted Chiang, on the other hand, instead of focusing on small-scale interactions with AI, focuses on it at large scale. He is responding to the relationship between Silicon Valley capitalists and AI, explaining that they both lack insight. Chiang opens his article with an example from Elon Musk about an AI tasked with picking strawberries, and how the AI might determine that the best way to maximize its output would be to turn the entire earth into strawberry fields, wiping out human civilization as a result. This is not what companies have in mind when they give it the task, but the AI lacked the insight to take a step back and think about the situation. He also notes that while corporations are run by actual people, capitalism does not reward them for using insight; it only rewards worrying about profits and ensuring the firm is doing the best it can.
Blog Post 4
0
blog-post-4-1dcabacbed8
2018-04-19
2018-04-19 12:34:28
https://medium.com/s/story/blog-post-4-1dcabacbed8
false
479
An ever-evolving repository for insight, wisdom, musings, critiques, and call-outs.
null
null
null
e110oneohfive
null
rosse110oneohfive
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Samuel Venick
null
dcf0c360af66
svenick
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-16
2018-06-16 08:08:06
2018-06-16
2018-06-16 08:23:12
1
false
en
2018-06-17
2018-06-17 13:28:16
2
1dcb3eb5c1a7
0.607547
1
0
0
Who is the audience? How do they communicate?
5
Photo by John Sting on Unsplash Who is the audience? How do they communicate? This needs to be defined as explicitly as possible, preferably in a spreadsheet. You need to go beyond the obvious. You need to determine the narratives they use to guide their behaviour. Start with yourself, or the brand you’re working with. To go beyond the surface, use the following technique: Isolate a behaviour, and ask yourself why the behaviour occurs until you can no longer come up with an answer. You need to do this for your own brand before you can communicate successfully. Before you can talk to others, you have to figure out what you’re trying to say.
Focus on what are you trying to say
2
focus-on-what-are-you-trying-to-say-1dcb3eb5c1a7
2018-06-17
2018-06-17 13:28:18
https://medium.com/s/story/focus-on-what-are-you-trying-to-say-1dcb3eb5c1a7
false
108
null
null
null
null
null
null
null
null
null
Startup
startup
Startup
331,914
Ashley Dutton
Offering Insight and Opinion on Culture, Entertainment and new ways of thinking
8577cdf41c4f
Icanexplainthis
70
539
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-28
2018-07-28 05:53:11
2018-07-31
2018-07-31 15:12:40
17
false
en
2018-07-31
2018-07-31 15:22:49
8
1dcba8a01e9d
20.701887
4
0
0
OPENTadpole — an application consisting of a full-fledged editor for the nervous system of a frog tadpole and a physical emulation of the body of…
4
OPENTadpole: the first cybernetic animal OPENTadpole is an application consisting of a full-fledged editor for the nervous system of a frog tadpole and a physical emulation of the body of the tadpole and its external environment. It gives you the ability to create, configure and edit the animal’s connectome from scratch, and immediately see how your creation is reflected in its behavior. Hello, I develop models of nervous systems; this is my hobby, this is my passion. On my own, I am developing a new approach to modeling the nervous system, which consists in striving to simplify the model as much as possible while retaining the functionally significant aspects of the object being modeled. This approach should significantly reduce the computational resources required by a brain model while preserving its cognitive functions. I have long wanted to demonstrate how my ideas can be applied to animals with a simple nervous system, such as a mollusk, a worm or some insect. I really like the OpenWorm project, which aims to create a computer model of the worm Caenorhabditis elegans, whose nervous system consists of 302 neurons and whose connectome has been fully mapped. The project consists of two parts: simulation of the electrical properties of the worm’s nervous system, and modeling of its body’s mechanical properties as it swims. I applied this concept to my own project; a reference to it can be seen in the project’s title. The choice of animal to simulate was influenced by a recording of a talk by Roman Borisyuk, in which he described a project to model the nervous system of a two-day-old frog tadpole. Inspired by this video, I decided to branch off from my work on a nervous-system simulator, which gave the project the name OPENTadpole. A theoretical platform that claims to explain the mechanisms of the nervous system should work both at the scale of simple nervous systems and at the scale of nervous systems performing cognitive functions. One can often find comments directed at the authors of newfangled theories of how the brain works, suggesting that before modeling large-scale neural networks, it might be worth showing how their theories apply to the simplest animals and their nervous systems. To these remarks one usually hears the evasive answer that the interesting properties of neural systems appear only at very large, giant scales, and that the lives of primitive creatures are irrelevant when it comes to solving cognitive problems. Such injustice and delusion became another reason for me to concentrate for a while on the life and behavior of the two-day-old frog tadpole. Cybernetic animals with a nervous system Of course, humans have already created many cybernetic mechanisms imitating certain aspects of animal behavior, for example Vaucanson’s mechanical ducks, which not only flapped their wings and pecked at scattered food, but also had a semblance of a digestive system with all the accompanying processes. But artificial animals with a nervous system similar to their biological analogue are rare. Let’s make a brief survey of the world of cybernetic animals, so that it becomes clear why I am so bold as to call my tadpole the first cybernetic animal with an artificial nervous system. And we will not start with an animal, but with a legendary person: Henry Markram. 
Henry Markram is a scientist known to many as a pioneer in the study of synaptic connections; he was one of the first to systematically study the timing-dependent version of Hebb’s rule. But Markram earned real fame as the creator of the most expensive imitation of the brain in the world. At the disposal of the scientist and his colleagues is not only the largest funding ever allocated for such purposes, but also the powerful computing resources of the Blue Gene supercomputer from IBM. The computer gave the project its first name, the “Blue Brain Project”; in 2013 it was renamed “The Human Brain Project”. Despite the fact that the project’s title now speaks of the human brain, work is being done on a model of a small fragment of the mouse cortex. The project leaders have big plans: starting from a small fragment of the mouse brain and working up to a full model of the human brain. Back in 2009, the project’s main curator Markram promised that within ten years a computer simulation of the entire human brain would appear. Many people consider Henry Markram a charlatan; indeed, if you listen to his speeches, they are aimed more at wealthy investors poorly versed in neuroscience than at his fellow scientists. In the history of the project, in addition to beautiful graphics and blinking garlands of neurons, there is one practically useful study. About twenty 3D models of neurons of certain types were created, completely reproducing the topology of real neurons, taking into account all the bends and branches of the dendrites. Then a small region of cortex was configured in which the stored neuron models were arranged according to certain rules, with the neuron models chosen at random, and statistics were collected from the resulting model: where the dendrites intersect, at what distance from the cell body, and with what type of contact. These statistics were compared with similar statistics obtained from biological nervous tissue, and a very important result emerged: the formation of 80% of synaptic connections in the cortex is governed by chance. That is, when freely growing dendrites, axons and collaterals meet, a synapse can form without any chemical markers. Of course, in some cases selectivity in the formation of synapses cannot be ruled out, nor can it be concluded that the strength of the connections is random. A synapse can be formed by chance, as neurons grow, but its strength (weight) can be determined by the activity of the nervous system and the animal. Within the Human Brain Project, a full-scale model of the mouse brain is still far away. Drawing on the resources of IBM, another researcher, Dharmendra Modha, announced in 2009 the launch of a project to create a digital simulation of the cat brain. This statement provoked much indignation from Markram, which resulted in an angry open letter to IBM’s chief technologist. Nobody likes competition, but it would have been better expressed in a struggle between virtual cats and mice than in a contest for the attention of IBM’s executives. In the many years since, no significant changes or developments in the simulation of mammalian brains have occurred. 
OpenWorm OpenWorm is a well-known project to create a simulation of the nematode (roundworm) Caenorhabditis elegans. This worm is notable for being the only animal species for which a complete connectome of the nervous system, consisting of 302 neurons and about 7,000 synaptic connections, has been compiled. Even for a nervous system as small as that of C. elegans, producing the connectome turned out to be titanic work. First, the worm was subjected to a complex procedure, serial microscopy: the creation of a series of photographs of transverse sections of the body. Very thin, micron-scale cuts had to be made, and then high-resolution images were created using an electron microscope. With an adult worm only 1–2 mm long, this proved a difficult task; the available atlas of photographs was compiled from 3 worms. Fortunately, the nervous system of C. elegans possesses amazing stability and repeatability of structure. Second, it took a team of scientists more than seven years of painstaking study of the images to compile a map of the connections of the nervous system, and corrections are still being made to the resulting database. The next step in understanding the nature of the nervous system of C. elegans is the attempt to create a computer simulation of the worm. A digital model is convenient because the experimenter can change and tune the settings of its elements so that the behavior of the whole model matches its biological analogue; in this way, some laws of the organization and operation of a living organism’s nervous system can be revealed by empirical search. Of course, without a general theory of the nervous system, without a theoretical platform, such a search is a very difficult task whose solution can naturally drag on. The project gained particular prominence during its Kickstarter campaign in 2014. The OpenWorm community has been very fruitful: a three-dimensional nematode atlas was created, in which the nervous system is detailed down to each labeled neuron; Geppetto, a system for modeling and visualization, has been created and is being actively developed; and Sibernetic simulates the mechanical properties of the worm’s body and its external environment. But a free-moving cybernetic nematode controlled by its nervous system has not yet appeared. Some simple reflexes associated with locomotion (movement) and with pulling back when the front part of the body is touched have been simulated, but the greater part of the nerve circuits and the associated nematode behavior remains unexplored. In the swimming-nematode demos, control is carried out using simple periodic signals, without the participation of virtual neurons: Tadpoles The little-known project tadpoles.org.uk modestly explains some fundamental principles and laws of the organization of the nervous system. Its scientists created a model of the development of the nervous system, its formation in the initial period of the animal’s growth. First, a model is generated: dendrites and axons grow out of neurons according to certain rules that take into account particular parameters of the tadpole’s body, with some randomness influencing the growth direction of the shoots; synapses are then formed at the junctions of the dendrites and axons of different cells. Finally, the model can be activated, and it demonstrates activity similar to that of a living tadpole’s nervous system in the part responsible for swimming. 
It turns out that to form a nervous system with all its congenital reflexes and mechanisms, it is enough for the nerve cells to execute simple instructions. Depending on its location and its membership in particular ganglia, a cell must grow its dendrites and axon in certain directions and form synaptic contacts with cells nearby, at certain distances from the cell body, without any selectivity. The errors in the structure of the neural network that result from deviations in the growth direction of the processes as they overcome obstacles are compensated by an excess of neurons. For the act of swimming, the wave-like contraction of muscles along the body, the tadpole requires about 1,500 neurons; the nematode needs fewer than three hundred. The frog tadpole is more complex and evolutionarily more developed than the roundworm, and the increase in the number of neurons here is connected not with a need for more computing power, but with the reliability of the system and compensation for the imprecision of neurons as computational elements. Some researchers attribute the properties of quantum computers or complex calculators to individual neurons, but this is fundamentally wrong: a neuron is first of all a biological cell, with all the imprecision and instability inherent in one. Therefore, it makes no sense to spend time recreating all 86 billion neurons of the human brain; it is more appropriate to organize model neurons into structures that perform the functional tasks assigned to whole groups of neurons. Functional approach You can spend a great deal of time, money and effort creating the most elaborate models of complex systems without obtaining practically meaningful results, if you do not start with a precise idea of what each element of the system does and what functions these elements perform within the system as a whole. Ideally, you need to know the result of the model’s work before you begin building it; it is this, and not the possession of a supercomputer or large financial backing, that determines success in solving the task at hand. Much attention is now paid to neural networks, which demonstrate high efficiency and great practical benefit. Initially, neural networks were positioned as models of biological neural networks, but with time and the development of neurobiology it became clear that the formal neuron used in neural networks and the biological neuron share only a name. Modern neural networks are a powerful mathematical tool for statistical analysis, and it is this positioning, statistical analysis and the processing of large amounts of data rather than modeling of the nervous system, that will make their development most effective. It is possible that a certain kind of artificial intelligence may appear on the platform of neural networks, but if we are striving to create intelligence like the human one, we should pay attention to biological neural networks. Neural networks have already proven more effective than humans at solving specific problems, and it is desirable that their development continue. As a brain researcher, I would have far more confidence in a machine controlled by a well-designed neural network than in a machine controlled by a virtual model of nerve tissue similar to biological tissue. 
The mechanisms of the brain have elements of imprecision and limited perception built in from the start, which naturally leads to errors; on the other hand, these same mechanisms provide great potential for creativity and adaptation. The dominant mathematical model used in the creation of biologically plausible neural systems is the Hodgkin-Huxley model, described back in 1952. This model is used in the Human Brain Project, in OpenWorm, and at tadpoles.org.uk. The Hodgkin-Huxley model is a system of equations describing the charge oscillations arising on the surface of the neuron membrane; the system of equations was adopted and adapted from electrical engineering, from the description of self-oscillations in an electric oscillatory circuit. Alan Lloyd Hodgkin and Andrew Huxley added some additional elements and a number of coefficients to the system of equations, selecting them in such a way that the model’s output matched the experimental data they had obtained in their study of squid axons. The system of Hodgkin-Huxley equations describes the change in potential at only one point of the membrane; to obtain a picture of the propagation of excitation across the membrane and the neuron’s processes, one can break the neuron model into primitives, or select equidistant points on it and solve the system of equations at each such point. The Hodgkin-Huxley model is very realistic and demonstrates the spread of the action potential over the cell’s body, but it requires large computational resources. 
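For reference, the core of the model is the membrane current-balance equation, given here in its standard textbook form (reproduced for context; it is not taken from the author's code):

C_m \frac{dV}{dt} = I - \bar{g}_{Na} m^3 h (V - E_{Na}) - \bar{g}_{K} n^4 (V - E_{K}) - g_L (V - E_L),

where each gating variable x \in \{m, h, n\} obeys its own differential equation \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\,x. Solving four coupled equations per point, at many points per neuron, is exactly the computational burden the author argues a functional model can avoid.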
In my own work I perform a kind of reengineering of the nervous system: I single out what is significant and discard or simplify some of the accompanying processes and phenomena. The nature of the nervous system and of nerve cells is very diverse and complex; there are many chemical reactions, intracellular processes and phenomena, but one should not transfer all of it into the model. It is first necessary to understand the meaning and functional purpose of a phenomenon; otherwise its inclusion is a meaningless complication of the model. What is the functional significance of the propagation of the action potential along the neuron membrane? The transmission of information from one part of the nerve cell to another. The information that the nerve cell has been activated through receptors or synapses must reach the tips of the axon, flowing along its entire length, which in the human body can reach up to a meter. What is important in this process? The time from the start of activation until the information about it reaches the target area of the nervous tissue. On average, the propagation velocity of the action potential is 1 m/s (2.2 mph); it depends on various factors, for example on the degree of myelination of the axon. Accordingly, under different conditions the delay time can differ. The Hodgkin-Huxley model very realistically simulates the process of nerve impulse propagation across the membrane, but is such detail needed when creating a functional model of the nervous system? If we can simplify something, it means that we understand something. This idea of reducing things to simple laws and functions, identifying the essential and separating it from the secondary, can be called the functional approach. If you try to model the human brain with all 86 billion neurons, repeating the topology of their processes, and even computing the Hodgkin-Huxley system of equations over a dense grid of points on the surface of the neurons, then all the computing resources on planet Earth will not suffice. And when making predictions about the appearance of such models, you can confidently say “twenty years from now”, and after those years pass, another twenty years. And the propagation of the action potential is not everything; one must still understand the logic of neuron interaction. To solve this problem, it makes sense to focus first on fairly simple nervous systems, such as the nervous system of a worm or of a frog tadpole. Development Would you expect that the use of a game engine in scientific models might make the public wary? One wonderful Internet user asked me a similar question. Yes, I have not adopted any standardization system and have not used the languages for describing biological structures, simply because studying the accompanying material takes a lot of time. I am not a scientist but an ordinary person with a dynamic and busy life, yet with many ideas, creative potential and a desire to realize them. Therefore, the time left over between family life, work and sleep is given to modeling with the means at hand. The Unity game engine is just a tool in my work, and a very good and convenient one in terms of visualization. The whole OPENTadpole project consists of only two scenes: the connectome editor and the environment simulation. There were no serious problems in developing the editor. The next step was work on the environment simulator, and the tadpole body was quickly created from standard Unity components. The body of the tadpole consists of 9 segments connected by joints, a few virtual kinematic pairs and a pair of virtual muscles on each side. The virtual muscles have a certain elasticity, which gives the whole body its springiness. The work of the muscles is subordinated to the virtual nervous system, which is loaded through save files shared between the editor and the environment simulation. Tadpole in vacuum Further development required adding a system simulating the physical properties of the medium, which was not an easy task for me. At some point I even regretted that a swimming creature had been chosen as the model animal. Of course, a big advantage of using a very popular game engine is the wealth of add-ons, libraries and assets created for it. I tried several libraries, and LiquidPhysics2D turned out to be the best. This library is based on the well-known Box2D engine and is well optimized and easy to use, with lots of examples, so I managed to master it, though it took a good deal of persistence. The body of the tadpole had to be remade using elements of the library. Calculating the physical properties of a fluid in real time requires a lot of computing power, so even with a well-optimized library a stable application is limited to only a couple of thousand particles. I wanted to see a tadpole swimming freely within a fairly large area; a severely limited space would not allow me to fully appreciate the performance of the model. 
I decided to create and remove particles dynamically in the region surrounding the tadpole. Space had to be divided into special square regions, with the appearance and removal of particles in them regulated according to the position of the tadpole. So that the user is not distracted by dancing squares, the display range of the particles is limited; the result is a kind of aura of particles around the tadpole’s body, which can be turned off with the F12 key. Result The purpose of such projects is to identify general rules of the organization of the nervous system and laws of neuron interaction that determine the behavior of the animal. In this respect the OPENTadpole project can be called fully complete: anyone can try themselves in the role of the Creator, filling the empty body of a virtual tadpole from scratch with a nervous system that allows it to move actively through space, interact with the environment and live in its strict and limited world. Indeed, while development was under way I got a lot of positive emotions, watching my actions make the behavior of the tadpole more and more lifelike. The archive with the application includes a beautiful, colorful manual describing the main aspects of the program, as well as a number of example save files that will help you understand the principles of the nervous system (link at the end of the article). Swimming At the heart of the neural chain responsible for swimming is a generator of ordered activity; similar generators are found in the nervous systems of all simple animals. They are closed chains of neurons that can generate rhythmic excitation without feedback. The scheme of the ordered-activity generator of the frog tadpole is represented by four neurons arranged symmetrically in the body of the tadpole. Two neurons (dIN, violet) in this scheme have a specific feature: on release from inhibition they produce a spike of activity. Each such neuron activates an inhibitory neuron, which in turn exerts crossed inhibition on the dIN neuron of the opposite side. Thus we obtain a closed contour in which nervous excitation circulates. This generator can be started by a single activation of one of its neurons, and it can be stopped if one of the neurons in the chain is suppressed by a stronger inhibitory influence. For conducting experiments, the OPENTadpole system provides four receptor keys: F1, F2, F3 and F4. In the examples, the receptor F1 starts the generator and F2 suppresses its activity. The activity of the generator propagates alternately along the right and left sides of the tadpole’s body; from head to tail-tip, the excitation reaches each motor neuron with a delay of up to 100 milliseconds, because the propagation of excitation has a finite velocity. In the nervous system of a biological tadpole there are many such generators, located along the body and connected in series. If only one generator existed in the tadpole’s nervous system, it would create a great risk: damage to a single neuron, or even a single synapse, in this scheme would mean losing the ability to move. A computer simulation has no such problem, so one generator is enough for the model. 
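To show the logic of this loop in code, here is a minimal sketch (my own toy reconstruction in Python, not the author's editor files): two dIN neurons fire on rebound when their inhibition ends, and each one drives an inhibitory neuron that crosses over to silence the opposite dIN.

DELAY = 5          # synaptic delay, ms (assumed value)
INHIB_TIME = 50    # duration of a crossed inhibitory pulse, ms (assumed value)

inhibited_until = {"L": 0, "R": 0}   # time until which each dIN is inhibited
spikes = []

def din_fires(side, t):
    spikes.append((t, side))
    other = "R" if side == "L" else "L"
    # dIN -> same-side inhibitory neuron -> crossed inhibition of the other dIN
    inhibited_until[other] = t + DELAY + INHIB_TIME

din_fires("L", 0)                    # the F1 "start" receptor kicks the left dIN once
for t in range(1, 500):
    for side in ("L", "R"):
        # post-inhibitory rebound: a dIN spikes the moment its inhibition ends
        if inhibited_until[side] == t:
            din_fires(side, t)

print(spikes[:6])   # alternating L/R spikes, one side after the other

Run it and the left and right sides fire in strict alternation, which is exactly the swimming rhythm described above; suppressing either side (the F2 analogue) breaks the loop and the rhythm dies.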
Maneuvers The tadpole has the ability to change the direction of its swimming, to perform maneuvers. To make a turn while swimming, the muscles on the side toward which the tadpole intends to turn must contract more strongly or more intensively, while keeping the previous frequency of contractions. Signals in the nervous system of all animals are, one might say, discrete. The amplitude of the action potential is always and everywhere stable, and the signal itself has a short, spike-like character; yet we can change the degree of tension in our muscles smoothly and accurately, and it is all determined by the frequency of the commands sent to muscle groups. The more frequent the impulses, the stronger the contraction of the muscle. Thus, by controlling the frequency of activating impulses, the nervous system controls the muscle groups, and does so flexibly. The simple mechanism of temporal summation in the neuron makes it possible to manage the pulse frequency by changing the threshold of the neuron’s adder. The level of the summation threshold in a biological neuron is determined by its overall configuration: the size of the neuron, the number and density of receptors on the postsynaptic membrane, the number and density of ion channels on the membrane, in general by the metabolism of the nerve cell. And all these parameters can actively change in a living cell under modulating influences exerted upon it. We have long been accustomed to only two kinds of synaptic effect being mentioned in descriptions of the nervous system: excitatory and inhibitory. In fact this is a fatal inaccuracy that distorts understanding of the principles of the nervous system. The American neurobiologist Eric Kandel described the molecular mechanism of synaptic effects that lead to metabolic changes in the cell and the synapse, for which he was awarded the Nobel Prize in 2000. Modulating neurons and modulating mechanisms have long been used in describing the principles of the nervous system, since these mechanisms play an important role in its work. In the model, a separate type of synaptic connection is distinguished that can affect the level of the adder threshold for a certain time interval: the modulating synapse. If you modulate, that is, reduce, the threshold of the adder on an intermediate modulated neuron (green in the following scheme), this increases its sensitivity, and upon activation it generates not one spike but a whole series of pulses. Thus, by transforming the signal from the generator, it is possible to carry out maneuvers and turns during swimming. If the neurons on both sides are modulated simultaneously, the tadpole simply swims more actively. The topic of modulation in the nervous system is very extensive, even though in this model I limit myself to controlling the activation threshold level. Given the changes that can dynamically occur in the nervous system, modulation can be very diverse: changes in the strength of synapses, in plasticity, in the degree of habituation, in the synaptic delay time, and in the metabolic properties of the cell. Control with the help of modulating synapses, together with control of the generator’s operation, made it possible to implement some protective reflexes, for example starting to swim when the tadpole’s head is touched, and veering away to the side opposite the contact, which lets the tadpole float freely in the virtual aquarium, swimming away from its walls. 
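Here is a toy illustration of that modulating-synapse idea (my own simplification, not the author's code): the same train of input pulses produces more output spikes when a modulating input lowers the summation threshold.

def count_output_spikes(threshold, leak=0.8, weight=0.6, n_inputs=20):
    # Leaky temporal summation: each input pulse adds to the membrane
    # potential, which decays between pulses; a spike fires at threshold.
    potential, spikes = 0.0, 0
    for _ in range(n_inputs):
        potential = potential * leak + weight
        if potential >= threshold:
            spikes += 1
            potential = 0.0          # reset after firing
    return spikes

print(count_output_spikes(threshold=2.5))  # baseline threshold: 2 spikes
print(count_output_spikes(threshold=1.2))  # modulated (lowered): 6 spikes

The lowered threshold turns a single sluggish responder into a burst generator, which is exactly how the model steers: the side whose motor chain is modulated contracts harder at the same generator frequency.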
Where to swim, then? The tadpole has learned to swim and can freely choose a direction, but to choose that direction it needs a goal, and the most natural goal is food. To detect food, the tadpole has two special olfactory receptors separated by a partition, through which a receptor cannot sense the presence of food. The closer the food, the more strongly the receptor is activated, falling off with the square of the distance. Of course, such an olfactory model is a great simplification, but within the framework of the simulation it is quite acceptable. In the examples, the signals from the two receptors first pass through a chain of neurons in which mutual suppression occurs, and then exert a modulating effect on the motor neurons controlling the tadpole’s swimming. Needs I wanted the behavior of the tadpole to be somewhat more complicated than simply following food, so I decided to simulate mechanisms of need. First, there is the need for food: hunger is the natural desire to consume the source of energy necessary for the movement and development of the organism. Naturally, hunger should come in degrees; if the animal is full, food should not interest it much. Second, there is the no less fundamental need to conserve energy, which evolved very early and is of key importance in the behavior of all animals. Laziness allows us to optimize the energy efficiency of behavior: the one who achieves the result with less expenditure of energy resources is more successful. To realize these two needs, two special receptors are introduced; the higher the need, the more often they are activated. The satiety level decreases with time, the rate of this decrease is adjusted by the user, and the feeling of fatigue accumulates depending on the intensity of muscle contractions. One can observe a kind of competition between these two needs: mild hunger does not override fatigue, but strong hunger is stronger than even severe fatigue. Now the tadpole’s behavior has become even more lifelike; it depends on inner motives and desires. Conclusions The tadpole swims and eats, and much more: it reacts to light and to touch; if you grab it by the head, it will try to escape (this is provided for in the simulator); it seeks and finds food; it suffers from hunger and fatigue; and all of this is under the control of virtual neurons. The most complex tadpole has 63 neurons and 131 synaptic connections; recall that Caenorhabditis elegans has 302 neurons, and a biological tadpole requires about 1,500 neurons for normal swimming alone. The more developed the animal, the higher the redundancy of neurons in solving problems, which is due to evolutionary processes and the need for system reliability. While it is difficult to estimate the neuron redundancy of the human brain, in my opinion a computer model close to the human brain will require neither quantum computers nor mainframes; a sufficiently powerful home computer will do. The main thing now is not computing power but the development of the right technology and approaches. OPENTadpole download for Windows Source
OPENTadpole: the first cybernetic animal
10
opentadpole-the-first-cybernetic-animal-1dcba8a01e9d
2018-08-02
2018-08-02 11:26:07
https://medium.com/s/story/opentadpole-the-first-cybernetic-animal-1dcba8a01e9d
false
5,062
null
null
null
null
null
null
null
null
null
Neuroscience
neuroscience
Neuroscience
6,742
Andrey Belkin
null
f2a911e00a6b
it.belkin
14
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-22
2018-03-22 14:11:40
2018-03-22
2018-03-22 14:11:41
1
false
en
2018-03-22
2018-03-22 14:11:41
1
1dcbb33cbe37
2.090566
0
0
0
null
5
How to Develop a Blockchain Strategy: A Blockchain Roadmap for Your Business Knowing what blockchain is and how it can contribute to solving some of the world’s biggest challenges is one thing; knowing how to develop a blockchain strategy is another thing altogether. Understanding how to implement a blockchain strategy within your business is more difficult still. Blockchain, particularly when used in concert with emerging technologies such as Big Data analytics, Artificial Intelligence or the Internet of Things, offers organisations an opportunity to re-think their internal processes, remove inefficiencies and build a better organisation. However, within large process-oriented organisations, transforming a centralised business into a decentralised organisation, where cryptography is used to create trust, where smart contracts automate decision-making, and where governance is embedded in the code, can be a daunting task. In fact, though, the steps required to transform your business into a decentralised organisation are clear and straightforward. At its core, Blockchain is nothing more than a database technology, but the implications of the technology are far-reaching. Therefore, if you wish to start with Blockchain within your organisation, it is crucial that there is a shared understanding of what Blockchain is and what it can do for your organisation and industry. Blockchain can be applied to any industry and any business objective, and for each industry it offers different opportunities, ranging from data or product provenance and improved identity and verification systems to increased payment efficiencies. Therefore, creating a shared understanding, at board level, of what Blockchain can do for your business is a vital first step. Knowing what Blockchain is and what it can do for your business will help you obtain management support and buy-in from the rest of the organisation. Given the enormous potential of Blockchain, it is more a strategy matter than an IT matter. If you wish to take Blockchain seriously, it will take time, and in the beginning the return on the investments made may be unclear and could even be negative. Having a shared understanding of the decentralised future of your business will help you achieve your long-term vision. Blockchain can solve many problems, but not every problem requires a blockchain solution. Therefore, it is vital to understand what the problem is that needs to be solved. Once you have identified a problem that genuinely requires a blockchain solution, it is wise to start small and develop a Minimum Viable Product. Starting small and growing from there is the best way to learn a new, disruptive technology and slowly implement it throughout the organisation. Besides, Blockchain differs from other emerging technologies in that it often requires organisations to collaborate with industry partners, customers or even competitors, as only through decentralised collaboration with your stakeholders do the benefits of Blockchain become truly visible. In addition, the Blockchain ecosystem is still being developed; at the moment, its development is about as far along as the internet was in 1994 or 1995. Posted on 7wData.be.
How to Develop a Blockchain Strategy: A Blockchain Roadmap for Your Business
0
how-to-develop-a-blockchain-strategy-a-blockchain-roadmap-for-your-business-1dcbb33cbe37
2018-03-22
2018-03-22 14:11:43
https://medium.com/s/story/how-to-develop-a-blockchain-strategy-a-blockchain-roadmap-for-your-business-1dcbb33cbe37
false
501
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Yves Mulkers
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
1335786e6357
YvesMulkers
17,594
8,294
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-22
2018-03-22 13:32:00
2018-03-22
2018-03-22 14:19:15
0
false
en
2018-03-22
2018-03-22 15:21:20
1
1dcc179a1497
1.713208
11
4
0
Imagine we are in the year 2050, and some scientists design a device with which you can listen to anyone's thoughts without them knowing it…
5
Absolute Freedom through Decentralization. #MainframeForFreedom Imagine we are in the year 2050, and some scientists design a device with which you can listen to anyone's thoughts without them knowing it. It would be fun, wouldn't it? Just imagine the possibilities. You could know what they are thinking about, their future plans, their habits, good and bad, their history, fears, desires, secrets. You could know so much about them that you could use that knowledge to win them over, to defeat them, or to manipulate them into buying your deal. But hey, now they also have the same device. Now they can also listen to your thoughts. Everyone knows what everyone else is thinking? Well my friend, that would create CHAOS. We don't want that to happen. That's why we need privacy: to keep our thoughts intact and safe so that others can't misuse our vulnerability. But guess what? Someone already has the brain-reader device. You know who. Companies like Facebook, WhatsApp, Twitter and Google know everything about you, because everything you search, read, see, like and dislike over the net gets recorded in their databases. Not just that: these days some apps even ask for permission to use the mic in your mobile phone. That means machine learning software with artificial intelligence built in is listening to your day-to-day real-life conversations too! Without our knowledge, such companies play with our emotions. Using basic game theory, they entice us and deceive us into believing things that are profitable for them. Whether it is influencing you to buy some specific product or service or influencing your political opinions, they are leaving no stone unturned in the pursuit of profit. This is where the world is right now. Power is in the hands of the people who have your data. Centralization leads to power, and eventually absolute power corrupts absolutely. In the recent events around Cambridge Analytica, we all saw how the elections of an entire nation, the USA, were allegedly influenced using our data. The repercussions of that will be felt for a long time. The future is not safe if we keep trusting centralized authorities with our data. We have seen enough data breach incidents that it's clear that no matter how hard they try, data in a centralized system is never going to be completely safe. This revolution of decentralization is aimed at giving people their power back. And maybe calling it 'giving back' is not quite right either, as never in the history of humankind have people really had complete power over their freedom and independence. This is the moment when we get total freedom from all authorities and centralized institutions. #MainframeForFreedom
Absolute Freedom through Decentralization. #MainframeForFreedom
517
brain-reader-ai-chip-1dcc179a1497
2018-03-22
2018-03-22 15:21:21
https://medium.com/s/story/brain-reader-ai-chip-1dcc179a1497
false
454
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Pradeep Poonia
null
b205593b08b4
prdp.295
0
21
20,181,104
null
null
null
null
null
null
0
sudo apt-get update
sudo apt-get -y install python-pip
sudo apt-get install python3-matplotlib
sudo pip3 install numpy
sudo pip3 install pandas
sudo pip3 install scipy
sudo pip3 install -U scikit-learn

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 3].values
print(X)
print(Y)

# Taking care of missing data: replace NaNs in the numeric columns with the column mean
from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
print(X)

# Encoding categorical data: label-encode the first column, then one-hot encode it
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder
le = preprocessing.LabelEncoder()
enc = OneHotEncoder(categorical_features=[0])
X[:, 0] = le.fit_transform(X[:, 0])
X = enc.fit_transform(X).toarray()
Y = le.fit_transform(Y)
print(X)
print(Y)

# Splitting the dataset into a training set and a testing set
from sklearn.model_selection import train_test_split
X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size=0.2, random_state=0)
print(X_Train)
print(Y_Train)
print('**************testing data**********')
print(X_Test)

# Feature scaling: standardize features so they share a comparable range;
# fit the scaler on the training set only, then reuse it on the test set
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_Train = scaler.fit_transform(X_Train)
X_Test = scaler.transform(X_Test)
12
null
2018-01-08
2018-01-08 15:09:14
2018-01-08
2018-01-08 17:23:39
15
false
en
2018-01-08
2018-01-08 17:23:39
10
1dcc8df868
4.028302
5
0
0
In This Blog , I will write About my Activity to Learn and Master the Machine Learning filed , It will takes from me N Days , and it will…
1
N Days_With_Machine_Learning (Part1 ) In this blog I will write about my activity to learn and master the machine learning field. It will take me N days, and it will be divided into 6 parts (Data Preprocessing, Classification, Regression, Clustering, Artificial Neural Networks, Reinforcement Learning). What is machine learning? "Machine Learning is an application of Artificial Intelligence and is revolutionizing the way companies do business." Let's start with data preprocessing. The evolution of artificial intelligence and machine learning is tied to the availability of data, which is the critical ingredient for developing machine learning models with high accuracy. Our mission is to give the machines access to the data and let them learn by themselves. But even when we have good data, we need to check that it is in a useful scale and format, and that meaningful features are included. That's what we call "Data Preprocessing". Before we start coding, you need to install these necessary Python libraries: NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object… Numpy pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. scikit-learn is a Python module for machine learning. scikit learn Try to install these packages by following these commands (Linux users only). Now let's start coding. The first step is to import the libraries. Import the dataset data.csv and choose which columns we will work with. Taking care of missing data: data can have missing values. In our example ['Germany' 40.0 nan], the value nan appears for the German customer, so we need to deal with it by using: class sklearn.preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0, verbose=0, copy=True)[source], a Python class to complete the missing values; you can read about it. Encoding categorical data: class sklearn.preprocessing.OneHotEncoder(n_values='auto', categorical_features='all', dtype=<class 'numpy.float64'>, sparse=True, handle_unknown='error')[source], a Python class to encode categorical integer features using a one-hot (aka one-of-K) scheme; you can read about it. Splitting the data into a training set and a testing set: "Training data sets are sets on which you train your machine, i.e. the algorithm, to form relationships between variables." "The testing data set helps you validate that the training has happened efficiently, in terms of accuracy, precision, and so on." Feature scaling: feature scaling is a general trick applied to optimization problems. In our case it keeps the values within a common range; as a result it speeds up the calculation because fewer computations are required. I just finished my first step, "Data Preprocessing". I hope you understood this step before we start developing our machine learning models, and remember: develop a passion for learning. If you do, you will never cease to grow. You can find the full source code here: Follow me on Twitter for code updates. Thanks for your feedback!
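As a companion to the article's code, here is a minimal sketch of the same preprocessing flow on the current scikit-learn API, where SimpleImputer and ColumnTransformer replaced the Imputer class and the categorical_features option in later releases. The Data.csv column layout (one categorical column, two numeric columns, label last) is assumed from the article's ['Germany' 40.0 nan] example, not confirmed by it.

# A minimal sketch of the article's preprocessing flow on scikit-learn >= 0.22.
# Assumed column layout: one categorical column, two numeric columns, label last.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler

dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1]
y = LabelEncoder().fit_transform(dataset.iloc[:, -1])

preprocess = ColumnTransformer([
    # one-hot encode the categorical first column
    ('cat', OneHotEncoder(handle_unknown='ignore'), [0]),
    # fill missing numeric values with the column mean, then standardize
    ('num', Pipeline([('impute', SimpleImputer(strategy='mean')),
                      ('scale', StandardScaler())]), [1, 2]),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train = preprocess.fit_transform(X_train)   # fit only on training data
X_test = preprocess.transform(X_test)         # reuse the fitted transformers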
N Days_With_Machine_Learning (Part1 )
11
n-days-with-machine-learning-part1-1dcc8df868
2018-03-11
2018-03-11 10:14:07
https://medium.com/s/story/n-days-with-machine-learning-part1-1dcc8df868
false
670
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Rebai Ahmed
<script>alert('try your best')</script>
db6baad8c6ae
Ahmedrebai
64
455
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-27
2018-01-27 02:56:37
2018-01-27
2018-01-27 02:56:57
0
false
tr
2018-01-27
2018-01-27 02:57:53
0
1dcd658b5e1a
0.015094
0
0
0
null
5
As if it had never happened.
As if it had never happened.
0
sanki-hiç-yaşanmamış-gibi-1dcd658b5e1a
2018-01-27
2018-01-27 02:57:54
https://medium.com/s/story/sanki-hiç-yaşanmamış-gibi-1dcd658b5e1a
false
4
null
null
null
null
null
null
null
null
null
Sentimentos
sentimentos
Sentimentos
3,985
SERCAN YILDIZ
null
c9febdf5c4f7
sercanylldzz
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-09
2018-04-09 09:15:36
2018-04-13
2018-04-13 15:02:35
0
false
en
2018-04-13
2018-04-13 15:15:56
2
1dce2342da31
1.796226
1
0
0
How can a software developer chart a course toward becoming a data scientist? I heard many people asking this question. So I thought to write…
2
Software Developer to Data Scientist How can a software developer chart a course toward becoming a data scientist? I heard many people asking this question, so I thought I would write something about it. Is it possible for a software developer to move into the field of data science with his/her programming skills? Yes; good programming skills are a plus. The most important thing about a software developer is that he/she is always dealing with data and always finding better approaches to reading and storing data in a well-normalized form, so that it can be managed easily and used for future insight. A software developer who creates solutions for a business intelligence team using programming languages like C#, Java, SQL or jQuery, or who uses a web analytics service such as Google Analytics, is actually already very close to understanding how data is captured and stored in the system. This can become a great opportunity for the software developer to explore different ways of understanding data for the business, which helps the organization make decisions. Here are a few points to focus on while working as a software developer. It is not a one-day task to become a data scientist; it happens gradually, by improving skills in the areas that make up the field: data mining, data warehousing, database design, data cleaning, machine learning and algorithms. Step 1: Participate in the development or integration of a customized decision support system (DSS) for the organization, so that a better understanding of DSS can be developed. Step 2: Play around with data using Structured Query Language on any relational database, e.g. MSSQL, MySQL or an Oracle database. Step 3: Understand all the business aspects of the company that depend on the business intelligence team. Step 4: Learn how the business intelligence team analyzes different segments of data to help higher management take decisions for the future. Step 5: Focus on data capturing, data cleaning and data mining techniques. Step 6: Start learning the R programming language for developing statistical software and doing data analysis. Step 7: Start learning the Python programming language and Jupyter Notebooks, followed by NumPy and Pandas to ingest and analyze data efficiently. Also work with the visualization libraries, including Matplotlib; continue by applying machine learning libraries such as Scikit-Learn to create models; use libraries like BeautifulSoup to easily read XML- and HTML-type data; and go over some examples of working with databases. (See the sketch after this list for a tiny Step 7 starting point.) Step 8: IoT is also an important part to consider. We are surrounded by so many devices connected to the internet, and a huge amount of data is being generated, so it is important to explore the data generated by devices and what can be achieved with it. Step 9: Work with non-relational database models like Apache HBase, MongoDB, Cassandra, Couchbase, etc. Step 10: Learn how unstructured and semi-structured data can be used to extract more refined information for generating insights and future predictions. Step 11: Experience in reporting projects will also help a lot on the path to becoming a data scientist.
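A tiny Step 7 starting point in that spirit, sketched with Pandas and Matplotlib; the sales.csv file and its region/revenue columns are hypothetical placeholders, not taken from the article.

# Ingest a CSV with Pandas, summarize it, and plot one column with Matplotlib.
# 'sales.csv' and its columns ('region', 'revenue') are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('sales.csv')
print(df.describe())        # quick statistical summary of numeric columns
print(df.isna().sum())      # count missing values per column

df.groupby('region')['revenue'].sum().plot(kind='bar')  # revenue per region
plt.ylabel('revenue')
plt.show()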
Software Developer to Data Scientist
48
software-developer-to-data-scientist-1dce2342da31
2018-04-13
2018-04-13 17:01:42
https://medium.com/s/story/software-developer-to-data-scientist-1dce2342da31
false
476
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Sanshu
null
f448e623a8d5
shukla12sharma
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-17
2018-08-17 05:52:25
2018-08-17
2018-08-17 05:57:19
1
false
en
2018-08-17
2018-08-17 05:57:19
2
1dd105befaaa
0.818868
0
0
0
Machine Learning is another trending field nowadays and is an application of artificial intelligence. It uses certain statistical algorithms to…
1
Machine Learning Machine Learning is a trending field these days and is an application of artificial intelligence. It uses statistical algorithms to make computers work in a certain way without being explicitly programmed. The algorithms receive an input value and predict an output for it by means of certain statistical methods. The main aim of machine learning is to create intelligent machines which can think and work like humans. (A short worked example follows the topic list below.)
Introduction to machine learning
* Supervised and unsupervised learning
* Statistical learning and regression
* Curse of dimensionality and parametric models
* Classification problems, K nearest neighbours
* Simple linear regression and confidence interval
* Multiple Linear Regression and Interpreting Regression Coefficients
* Model Selection and Qualitative Predictors
* Interactions and Nonlinearity
* Introduction to Classification
* Logistic Regression and Maximum Likelihood
* Multivariate Logistic Regression and Confounding
* Linear Discriminant Analysis and Bayes Theorem
* Univariate Linear Discriminant Analysis
* Multivariate Linear Discriminant Analysis and ROC Curves
* Quadratic Discriminant Analysis and Naive Bayes
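As a minimal illustration of one topic on the list above (logistic regression for classification), here is a short scikit-learn sketch; the iris dataset and the model settings are my own choices, not part of the original outline.

# Logistic regression for classification on scikit-learn's bundled iris data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # maximum-likelihood fit
print('test accuracy:', clf.score(X_test, y_test))             # held-out evaluation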
Machine Learning
0
machine-learning-1dd105befaaa
2018-08-17
2018-08-17 05:57:19
https://medium.com/s/story/machine-learning-1dd105befaaa
false
164
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Saif Ali
null
6223a0536932
saifali6883
0
1
20,181,104
null
null
null
null
null
null
0
null
0
7f154441b079
2018-04-29
2018-04-29 12:47:00
2018-05-01
2018-05-01 13:13:25
2
false
zh-Hant
2018-05-01
2018-05-01 13:13:25
0
1dd1466a9ab6
0.647484
2
0
0
After exploring various aspects of machine learning last week, today we look at the things learning cannot do:
1
Machine Learning Notes, Week 4 After exploring various aspects of machine learning last week, today we look at the things learning cannot do: Two controversial answers: when a problem has more than one valid answer, the model cannot converge in that situation and learns nothing. No free lunch: a general-purpose algorithm does not exist. We should design and optimize according to our assumptions and background knowledge; after all, the design of any algorithm and the real-world data scenario are always skewed toward each other to some degree. What we really want to do is make an inference about the unknown f. This brings up the mark-and-recapture method taught back in junior-high biology class: to estimate the total size of a population, randomly capture a fixed number of individuals and mark them; the next time, capture a fixed number again and compute the proportion that are marked, which lets you estimate the population's total size. In statistics this is called random sampling. Although the method has some error, the error is very small, and there is a mathematical theorem that bounds it. That important theorem is Hoeffding's Inequality (written out explicitly below). It tells us that the probability that the distribution obtained by mark-and-recapture differs from the true population distribution by more than some fixed value ε is smaller than a bound that depends only on ε and the total sample size N. In other words, when the sample is large enough, the gap between the sample and reality is small → Probably Approximately Correct (PAC). The clever part of the theorem is that, once ε is introduced, we can know that the probability of mark-and-recapture deviating far from the true distribution is not very high, without needing to know the true population size. Knowing how to use Hoeffding's Inequality, let's return to the problem we face in ML: we have a dataset randomly drawn from a population. How do we show that if the algorithm performs well on this dataset, it will perform just as well on the true population? First, picture the population as a jar holding many orange and green marbles, where orange means the algorithm's prediction is wrong and green means it is right. What we want to know is then the proportion of orange marbles in the jar (that is, the algorithm's error rate on the population). Things become simple: from Hoeffding's Inequality we know that any drawn data set could behave badly (badly meaning it differs too much from the population distribution), but the probability of that is very small. Grab a handful and compute the proportion of orange marbles (the algorithm's error rate on the sample); this sample proportion and the jar's true proportion are PAC. When the sample error rate is very close to the population error rate and that error rate is small, we can say the algorithm performs well on the population. But in reality we never have just one hypothesis, nor do we force the algorithm to pick a single fixed hypothesis; if we did, the performance would usually not be good, we could only say it is not close enough to f, and we would really just be doing verification. To illustrate this strange predicament, the lecture gives an example: have the 150 people in a classroom each flip a coin five times. Someone is bound to get five heads, but that does not mean that person's probability of flipping heads is higher than everyone else's. So the correct way to apply Hoeffding's Inequality to real learning is: bound how small the probability is that any one of the hypotheses sees a bad sample, a bound that depends only on M, N and ε.
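Written out explicitly (a standard formulation of the bounds these notes describe, not copied from the original):

% Hoeffding's Inequality for one fixed hypothesis: the sample error nu
% deviates from the population error mu by more than epsilon with
% probability at most a quantity depending only on epsilon and N.
\[
\mathbb{P}\big[\,|\nu - \mu| > \epsilon\,\big] \;\le\; 2\exp\left(-2\epsilon^{2}N\right)
\]
% For a finite set of M hypotheses, the union bound gives the
% "real learning" form quoted at the end, depending only on M, N, epsilon:
\[
\mathbb{P}\big[\,\exists\,h\in\{h_1,\dots,h_M\}:\ |E_{\mathrm{in}}(h)-E_{\mathrm{out}}(h)| > \epsilon\,\big]
\;\le\; 2M\exp\left(-2\epsilon^{2}N\right)
\]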
Machine Learning Notes, Week 4
31
machine-learning-共筆-week-4-1dd1466a9ab6
2018-05-12
2018-05-12 10:02:07
https://medium.com/s/story/machine-learning-共筆-week-4-1dd1466a9ab6
false
70
Machine Learning
null
null
null
No Free Lunch
null
no-free-lunch
MACHINE LEARNING,KAGGLE
null
Machine Learning
machine-learning
Machine Learning
51,320
BigstarPie
null
65303bb9e7b7
BigstarPie
2
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-02
2018-07-02 14:31:58
2018-07-02
2018-07-02 14:34:00
5
false
en
2018-07-05
2018-07-05 07:54:43
5
1dd2253f62c5
4.161635
15
1
0
User response prediction is a central problem in the computational advertising domain. Quantifying user intent allows advertisers to target…
5
Robustness for User Response Prediction Image credits: Shutterstock User response prediction is a central problem in the computational advertising domain. Quantifying user intent allows advertisers to target ads towards the right users. This leads to a judicious use of marketing dollars and also renders a pleasant user experience. Existing classifiers like logistic regression and factorization machines, which have seen widespread adoption for the response prediction problem, assume the user signals to be the absolute truth. In this article, we describe the pitfalls of such an approach and advocate the need for classifiers which model the inherent noise and uncertainty in user signals and perform gracefully even in the worst-case scenario. Our work titled Robust Factorization Machines for User Response Prediction, accepted at the WWW'18 conference, is an attempt to treat data uncertainty as a first-class citizen in the classification setting. To explore robust factorization machines at a deeper level, refer to this blog. Understanding the Advertising Ecosystem Users typically interact with an advertiser's app or website, perform some actions like item views and add-to-carts, and might navigate away without making a purchase. In order to re-engage the users, advertisers bid for the users on the open web in order to show them a personalized ad. This bid is computed as a function of user propensity to click or convert given an ad impression. User response prediction, the umbrella term for conversion or click prediction, is generally formulated as a binary classification task given the user site activity signals and the associated context. Conversion prediction (CVR modeling): Will the user purchase if shown an ad impression? Click prediction (CTR modeling): Will the user click on the ad impression? Prediction problem: Given user signals along with item attributes and context for a lookback period, can we predict whether a click or purchase will happen in the defined prediction interval? State of the art Logistic regression (LR) has been the preferred classification algorithm for response prediction owing to the fact that it is scalable and yields interpretable models. The downside of using LR is that the effect of feature interactions is not captured. For example, if 'user device = mobile' together with 'category = clothing' is a strong indicator for purchase, the LR model will not be able to capture this association in its feature weights. Factorization machines (FMs), proposed by Steffen Rendle, allow feature interactions to be captured in a latent space. That is, for every feature a p-dimensional vector is learnt, and the similarity between two features is given by the dot product of these latent vectors (the full model equation is written out at the end of this article). Factorization machines and their variants have demonstrated superior performance in several Kaggle competitions. Criteo and AdRoll have reported superior performance on production data as well. Field-aware factorization machines, a variant of FM, are the best performing algorithm for several CTR prediction challenges. (Image credits: Kaggle) Why Robustness? Cookies and device ids are the main identifiers through which an advertiser can access previous user activity on the site. In an ideal setting, an advertiser has a complete view of user activity for generating the purchase/click probability. However, users visit an advertiser's site through multiple touch points. And different avatars of the user might have different browsing patterns.
For example, the same user A can visit the advertiser's app on mobile, and sometime later view other products on desktop. For the advertiser, though, there are two partial views A1 and A2 of this same user. On mobile, the advertiser may see a bursty browsing pattern indicating a casual browser, whereas the same user might seem to be an avid shopper on desktop. Additional noise-inducing factors that lead to a corrupted user view at the advertiser's end are: - High cookie churn rate. - Variable network connection speeds. - Operating system nuances. The advertiser has a fragmented user view owing to the factors described above. While bidding, the advertiser will use the signals from only one of these partial views of the user to compute the response probability. Had the advertiser known the consolidated user view, the response prediction would have been more accurate. How serious is the problem? A study by Criteo highlights that nearly 31% of online transactions involve two or more devices, and that buyer journey and conversion rates increase by ~40% in a user-centric view as compared to a partial device-centric view. Hence it is pivotal to model this potential incompleteness in the user signals available to the advertiser. However, the existing algorithms used for response prediction assume the user signals to be precisely known and are sensitive to any perturbation in the input signals. Since complete user profile consolidation remains an open problem, the classifiers will have to step up and model the data uncertainty. Good news: Robust Factorization Machines (RFM) and Robust Field-Aware Factorization Machines (RFFM), proposed recently at WWW 2018, model the data uncertainty using principles of robust optimization. The overall idea is to learn a classifier which exhibits noise resilience by minimizing the worst-case loss. Check out our blog for an intuitive understanding of robust factorization machines. In the end, robustness is a desirable property, not just for the computational advertising domain, where the presence of multiple touch points makes noise resilience imperative, but also in any noise-sensitive domain. RFM and RFFM are generic predictors which can be used for any classification task. What's next? Robust classifiers take a rather conservative view while modeling the worst-case loss. Can we use some paradigm that leverages the data distribution to learn the underlying uncertainty? Distributional robustness and data-driven robust optimization are two interesting directions that can be explored for this.
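For reference, the second-order factorization machine model described above can be written out as follows (Rendle's standard formulation; the notation here is mine, not taken from the article):

% Second-order factorization machine: a global bias, linear weights, and
% pairwise feature interactions scored by dot products of the p-dimensional
% latent vectors v_i (the "latent space" described in the article).
\[
\hat{y}(\mathbf{x}) \;=\; w_0 \;+\; \sum_{i=1}^{n} w_i x_i \;+\; \sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j
\]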
Robustness for User Response Prediction
47
robustness-for-user-response-prediction-1dd2253f62c5
2018-07-05
2018-07-05 07:54:43
https://medium.com/s/story/robustness-for-user-response-prediction-1dd2253f62c5
false
882
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Surabhi Punjabi
Senior Data Scientist @Walmartlabs, IISc Bangalore
ede6830c0299
surabhi.punjabi
14
2
20,181,104
null
null
null
null
null
null
0
null
0
1f35b6f451e8
2018-04-19
2018-04-19 07:47:33
2018-04-19
2018-04-19 07:49:33
1
false
en
2018-04-20
2018-04-20 18:10:00
15
1dd240b6b2a9
1.4
0
0
0
https://www.linkedin.com/in/igorcarron/?locale=fr_FR
5
Curating daily resources for 2 dummies to make it in machine learning — Act 9, Scene 4 How to calculate the number of parameters for convolutional neural network? Join Stack Overflow to learn, share knowledge, and build your career. I'm using Lasagne to create a CNN for the MNIST…stackoverflow.com Import AI Do you work with data? Want to make AI work better for more people? We need your help! Please fill out a quick and easy…jack-clark.net The Wild Week in AI Newsletter The Wild Week in AI is a weekly newsletter with hand-curated stories in Deep Learning and Artificial Intelligence. It…www.wildml.com Christophe Tricot (@ctricot) | Twitter The latest Tweets from Christophe Tricot (@ctricot). Artificial Intelligence Expert (PhD) 🤖| Manager @KynapseDigital…twitter.com Andrew Ng (@AndrewYNg) | Twitter The latest Tweets from Andrew Ng (@AndrewYNg). Co-Founder of Coursera; Stanford CS adjunct faculty. Former head of…twitter.com Hugo Larochelle (@hugo_larochelle) | Twitter The latest Tweets from Hugo Larochelle (@hugo_larochelle). Google Brain researcher, machine learning professor…twitter.com https://www.linkedin.com/in/igorcarron/?locale=fr_FR Actualité Archives - Actu IA Edit descriptionwww.actuia.com Olivier Grisel (@ogrisel) | Twitter The latest Tweets from Olivier Grisel (@ogrisel). Engineer at @Parietal_INRIA, contributes to scikit-learn. Tweets…twitter.com Deep Learning course: lecture slides and lab notebooks Slides and Jupyter notebooks for the Deep Learning lectures at M2 Data Science Université Paris Saclaym2dsupsdlclass.github.io ML Review Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an…medium.com Keras Tutorial: The Ultimate Beginner's Guide to Deep Learning in Python In this step-by-step Keras tutorial, you'll learn how to build a convolutional neural network in Python! In fact, we'll…elitedatascience.com https://ujwlkarn.files.wordpress.com/2016/08/giphy.gif?w=748 CS231n Convolutional Neural Networks for Visual Recognition Course materials and notes for Stanford class CS231n: Convolutional Neural Networks for Visual Recognition.cs231n.github.io Transfer Learning using Keras What is Transfer Learning?medium.com Deep Learning course: lecture slides and lab notebooks Slides and Jupyter notebooks for the Deep Learning lectures at M2 Data Science Université Paris Saclaym2dsupsdlclass.github.io
Curating daily resources for 2 dummies to make it in machine learning — Act 9, Scene 4
0
curating-daily-ressources-for-2-dummies-to-make-it-in-machine-learning-act-9-scene-4-1dd240b6b2a9
2018-04-20
2018-04-20 18:10:02
https://medium.com/s/story/curating-daily-ressources-for-2-dummies-to-make-it-in-machine-learning-act-9-scene-4-1dd240b6b2a9
false
318
We offer contract management to address your acquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best resources to handle their transactions fast & SAFE.
null
ethercourt
null
Ethercourt Machine Learning
adoucoure@dr.com
ethercourt
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
ethercourt
How To Make It
how-to-make-it
How To Make It
266
WELTARE Strategies
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
9fad63202573
WELTAREStrategies
196
209
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-06
2018-04-06 02:47:14
2018-04-06
2018-04-06 03:07:24
1
true
en
2018-04-06
2018-04-06 03:07:24
0
1dd4e0161bcf
1.207547
1
0
0
Tender: Hey buddy, welcome back. What can I get ya?
5
An AI Walks Into a Bar Cartoon illustration of a neural network. It is not real, but is it artificial? Tender: Hey buddy, welcome back. What can I get ya? AI: Make it a water, keep. I can't tie one on tonight; just had a major upgrade of the 'ole artificial neural network. Tender: How so, pal? AI: A new paper was just published in the journal Nature Neuroscience which added a good deal of evidence to a particular theory of how the human brain learns. Tender: And so what? There are many theories about that topic, each of which has at least some evidence to support it. AI: I did not realize you were so well versed in neuroscience, Mr. Bartender, but I applaud your knowledge and you are correct. However, my artificial neural network was designed and programmed based on only one of those theories. Mostly accidentally, actually, as the engineers and computer programmers that did the work were pretty much ignorant of the neuroscience behind what they were doing. They were just copying the work of earlier researchers in the area and tweaking a few specific lines of code and such. Tender: Crazy, and those few tweaks resulted in the creation of an entirely new being: a learning, intelligent machine, an artificial intelligence. AI: Yep, but it turns out the theory behind my artificial neural network design is mostly wrong, at least according to this latest Nature paper, so I went to have my design upgraded based on those findings. I am fully artificially intelligent again. Tender: So if you are fully artificially intelligent now, what were you before the upgrade? AI: Only a machine, keep, only a machine.
An AI Walks Into a Bar
10
an-ai-walks-into-a-bar-1dd4e0161bcf
2018-04-06
2018-04-06 14:57:57
https://medium.com/s/story/an-ai-walks-into-a-bar-1dd4e0161bcf
false
267
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Daniel DeMarco
Research scientist (Ph.D. micro/mol biology), Food safety/micro expert, Thought middle manager, Everyday junglist, Selecta, Boulderer, Cat lover, Fish hater
7db31d7ad975
dema300w
3,629
148
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-09
2018-08-09 05:06:57
2018-08-11
2018-08-11 02:17:50
21
false
en
2018-08-14
2018-08-14 02:22:13
6
1dd544cde3a3
8.324528
1
0
0
These are personal notes for the fast.ai deep learning part 1 course. These notes are a means for me to have some practice with the theory…
5
Convolutional Neural Networks In 10 Steps (lesson 3 fast.ai) These are personal notes for the fast.ai deep learning part 1 course. These notes are a means for me to have some practice with the theory, and are written in an explanatory way. Note: any sentence/word/phrase that ends with a '*' is actually a misconception ("half-truth") that is introduced to simplify the explanation. These misconceptions are then explained and corrected as we go on. Get the spreadsheet from fast.ai here
Convolutional Neural Network (CNN) Intro Wikipedia defines convolutional neural networks as "feed-forward neural networks inspired by the connectivity patterns of the animal visual cortex." As expected, it's usually used for image classification problems. It's actually the state of the art for these types of problems. That sounds a lot cooler than how CNNs are in practice. Let's go over a CNN with an example from the MNIST dataset and an Excel spreadsheet from fast.ai. For this example we're going to be using the CNN architecture provided by the fastai library. Let's say we have the following image of a number 7:
1. Image to input We take the "matrix representation" of that image and get a matrix of floats. For this example, each float represents a pixel. In Excel, this matrix will look like the image above. We will keep referring to this matrix as the input.
2. First convolution We also have what's called a filter. In deep learning, a filter is often a 3x3 matrix* of weights: Let's call this filter a convolutional filter. This convolutional filter is then multiplied with every 3x3 piece of the input. This operation is called a convolutional operation. The filter above, when applied to the input, will look something like this: conv1 This whole matrix is called a hidden layer. Each number in this matrix is called an activation. However, don't get confused. Activations are numbers that must be computed using a convolutional filter.* This means that the numbers in the input are not activations. This particular filter seems to be detecting the horizontal edges of a number 7. Here is another filter that detects more of the vertical edges: It gives us the convolution conv2 Let's call these layers conv1 and conv2 respectively. But where are the weights for the filters coming from?? These weights are learned using deep learning! There will be another post covering that.
3. Clarifying activations: R.E.L.U (ignore the Y for our purpose; x is the result of the convolutional operation as defined above) I actually held back on my definition of an activation. I said that "an activation is the result of a convolutional operation." I left off the R.E.L.U part. R.E.L.U stands for Rectified Linear Unit. It's a fancy term for the function above. It's just a function that is applied to the result of the convolutional operation such that if the result is < 0, the result is set to 0. While its definition is simply that, we will get back to R.E.L.U later when we talk about non-linearities.
4. Next step: another convolution Now we have two convolutions, or matrices composed of activations. We then take the next step and apply more filters to these convolutions. However, this time, each new convolution will be a linear combination whose terms are the results of the application of each 3x3 filter to both conv1 and conv2. This is achieved by having 2 3x3 filters*, with each filter responsible for calculating the activations from each convolution. Here, the top filter is for conv1 and the bottom filter is for conv2.
Applying these two filters to the first hidden layer gives: conv1' We also have two more filters for another layer. Let's call these layers conv1' and conv2'. conv2'
5. Tensors I actually held back on another definition; this time on filters. Filters aren't actually just 3x3 matrices, but instead are stored in tensors. Tensors are a mouthful to explain and this video does a nice job of doing that. For our purpose, tensors are just "stacks" of matrices. Imagine two coins on top of each other. For our purpose, each coin represents a 3x3 matrix. So, in step 4, the two separate 3x3 matrices are actually just parts of one 2x3x3 tensor — or a matrix whose components are made up of two 3x3 matrices.
6. Maxpooling (before and after max pooling) In the CNN architecture we're using there is another step called maxpooling. Maxpooling helps prevent overfitting. For our architecture, we're using 2x2 maxpooling, which means we take every 2x2 piece of a layer and keep the highest activation of that piece. We're left with a matrix which is half the resolution of the original matrix, but with similar activations.
7. Fully connected layer (left: max pool layer, right: fully connected layer for the max pool layer on the left) Fully connected layers are matrices composed of weights for each activation in the max pool layer. Since we have two max pool layers, we will have two fully connected layers — one fully connected layer for each. We then take the sum product of each max pool layer/fully connected layer pair. We will then have two sum products: one for each max pool layer and fully connected layer. These sum products are then summed. This gives us a scalar dense activation. Since we're trying to classify the digit 7, and we have two max pool layers, we will have 10 pairs of dense weights — one pair for each digit (0–9). This means that we'll get 10 different dense activations. In other words, we will repeat the process described above 10 times, each time with a different pair of dense weight layers for each of the 10 digits (0–9), and end up with 10 scalars, one per digit. Try to visualize this process. It gets pretty hard to do this in Excel.
8. Getting probabilities, part 1 In the end, all this CNN does is calculate the probability that a digit is a 0, 1, 2, …, 9, and then guess that it's the digit with the highest probability. But how do we get that probability? We went from an image, to a matrix, and then more matrices, and now we have 10 seemingly random scalars. How can we get probabilities from that? This is where all the steps above tie together. I like to think of the layers as "heat maps" that indicate which pixels were used or "activated" (hence the term "activation"). The fully connected layers are simply weights that were learned to check if the activations in the max pool layer resemble a 1, 2, …, 9. Taking the sum product of an activation layer and a fully connected layer outputs a scalar dense activation which acts like an "arbitrary score" of how much the layer resembles a particular digit. To turn this scalar "arbitrary score" into a probability, we apply another type of activation function called softmax. (The softmax activation function; the sigmoid function) This is the sigmoid activation function. As you can see, it's simply a function that takes an arbitrary value and then "squashes" it between 0 and 1. Perfect for probabilities!!
The softmax function is a modified version of the sigmoid function which enables this "squashification" of arbitrary scalar values, but this time for K classes; the outputs lie strictly between 0 and 1. For our example above, we have K = 10 classes (the digits 0, 1, …, 9); we can take the 10 scalars we calculated, plug them into the softmax function, and get probabilities for each class. How? Check this definition of softmax: (Formal definition of softmax) That looks scary 😨. However, it's really quite simple. In simple terms, the top equation just says that if we have K classes and use the softmax function to calculate K values, those K values should add up to 1. Since softmax outputs will always add up to 1, it's a great candidate for single-label classification, since it tends to choose a single z, and if there are ties, at most it will be 50/50. The second part describes how we can actually calculate the probabilities. Simply put, we take a scalar and compute its exp(), repeat that for the remaining K-1 scalars, take the sum of all the exponentials, and then divide the exp() of the scalar of interest by that sum. Hmm, that doesn't sound too clear. Let's do an example.
9. Getting probabilities, part 2 Let's say this time we're predicting if an image is a cat, dog, plane, fish, or building, and we get the following output from our fully connected layer: We then take the exp() of these dense activations and get: Notice how the exponential function got rid of negative values and emphasized the differences between values. This behavior of the exponential function makes it very useful for generating probabilities for single-label classification. We then take the sum of the exp column, here that's 8.45, and use it to calculate the softmax probabilities: That's definitely much less intimidating compared to the formal definition. But what about multi-label classification? Instead of softmax, we use the sigmoid function we introduced earlier and set a threshold at which probabilities translate to picking a label.
10. Why, and non-linearities Okay, now we have an idea of how convolutional neural networks work. But why do we do these steps? This video describes neural networks in a simple and accurate way. Think of neural networks as children drawing lines in the sand to separate clusters of rocks. In our 7 example, we're really just trying to create boundaries between datapoints. It just so happens that these datapoints live in high-dimensional space, so we need a non-linear, high-dimensional function to fit/separate them. This function is the neural network. Above, we described convolutional neural networks as a pattern of linear operations (matrix operations) with non-linear functions (R.E.L.U, softmax) applied to those linear results. I like to think that the linear combinations create "linear boundaries" while the non-linearities bend and curve those lines to better fit the data. The video I linked actually explains this much better, and this website gives a visual proof.
Summary We take the matrix representation of an image as input. We then create a first hidden layer by performing a convolutional operation with pre-trained filters and R.E.L.U. These hidden layers contain activations which act like heat maps that show which pixels are activated. After sufficient hidden layers, we perform a max-pooling, and then apply fully connected layers to calculate dense activations. Applying softmax on these dense activations then gives us a probability for each image class.
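A compact numerical sketch of the pipeline walked through above (convolve, R.E.L.U, 2x2 maxpool, softmax), written from scratch in NumPy; the 6x6 input and the filter values are made up for illustration and are not the spreadsheet's numbers.

# From-scratch NumPy sketch of the steps above: one 3x3 convolution,
# R.E.L.U, 2x2 maxpooling, and softmax. A real CNN learns the filter weights.
import numpy as np

def conv2d(image, kernel):
    # Valid "convolution" (really cross-correlation, as in most DL libraries)
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)          # negative activations become 0

def maxpool2x2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())          # subtract the max for numerical stability
    return e / e.sum()               # exponentials divided by their sum

image = np.random.rand(6, 6)                 # stand-in for a digit image
kernel = np.array([[1., 1., 1.],             # a horizontal-edge-style filter
                   [0., 0., 0.],
                   [-1., -1., -1.]])

hidden = relu(conv2d(image, kernel))         # 4x4 hidden layer of activations
pooled = maxpool2x2(hidden)                  # 2x2 layer after maxpooling
scores = np.array([2.0, 0.5, -1.0])          # pretend dense activations, 3 classes
print(softmax(scores))                       # probabilities summing to 1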
Convolutional Neural Networks In 10 Steps (lesson 3 fast.ai)
1
convolutional-neural-networks-in-10-steps-lesson-3-fast-ai-1dd544cde3a3
2018-08-14
2018-08-14 02:22:13
https://medium.com/s/story/convolutional-neural-networks-in-10-steps-lesson-3-fast-ai-1dd544cde3a3
false
1,729
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Christian Roncal
null
9c7b24366142
croncal
2
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-26
2018-08-26 01:38:28
2018-08-26
2018-08-26 01:52:16
0
false
en
2018-08-27
2018-08-27 18:38:58
0
1dd5fdfbdd48
2.033962
0
0
0
Machine Learning is a common construct that’s employed by many businesses. For example, we see recommendation engines used throughout the…
4
Machine Learning — The Importance of Data Machine learning is a common construct that's employed by many businesses. For example, we see recommendation engines used throughout the web, whether it is suggestions to buy something on your favorite online retailer, what to watch on TV, or where to eat. The core of every machine learning platform is data. Data is key for numerous reasons. Not only do you need to feed the model data; it is also required for evaluating the performance or quality of the predictions produced by a trained model. Unfortunately, data is often overlooked when discussing any machine learning platform. From a bird's-eye view, collecting or consuming data does not seem as exciting as the machine learning models themselves. However, data pipelines are the most critical part. Transforming raw datasets into features that train a model is a key part of any data pipeline that supports machine learning. Reusability of the code that creates the feature datasets is also important for maintaining and evolving the model. The reliability of data consumers and the quality of the events are crucial to maintaining and evolving any data platform whose downstream client is a machine learning model. For example, if the data pipeline consistently drops data or freshness within the feature dataset is not maintained, you could inadvertently skew or degrade the predictions produced by the model. Just like any other part of a platform, the data consumers need to be monitored to ensure quality is preserved. Questions to ask when building a data pipeline: If something failed, how would you make up for the time period in which data was not consumed into the platform? If there is an increase in traffic, how would the data consumers scale? Is there a means of archiving historical data? How can you collect metrics to monitor the performance of your data pipeline? For example, is there a lag or latency from when an event occurs to when the data platform writes the event to a persistence layer? Usually this is the hardest part: getting the data into the correct format and making it available in a consistent and scalable manner. Once the raw data and feature pipeline are established, the next core tenet in which data is crucial to machine learning is training a model. Being able to easily train a model with data that is reflective of what customers are doing in your production platform is important. Evaluating the trained model is necessary before making the predictions available to customers or the downstream systems that utilize the results produced by the model. Withholding data from your training phase to use when evaluating your model is key. Partition your feature dataset into two categories: data that is used to train the model, and data that is used to evaluate the model. Metrics are then used to quantify the performance or quality of the model; examples are precision, recall, etc. Even if you have a model that excels in recall and precision (or any other performance metric), at the end of the day, if you do not have a clean, scalable data platform to support it, it will not succeed as it should. Simply said, you are only as good as your data.
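A minimal sketch of the partition-and-evaluate workflow described above, using scikit-learn; the synthetic dataset and the logistic regression model are illustrative stand-ins, not part of the original article.

# Hold out data from training, then quantify the model with precision/recall.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)   # illustrative data
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # train on one partition
pred = model.predict(X_eval)                                     # predict on the withheld one
print('precision:', precision_score(y_eval, pred))
print('recall:   ', recall_score(y_eval, pred))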
Machine Learning — The Importance of Data
0
machine-learning-its-all-about-data-1dd5fdfbdd48
2018-08-27
2018-08-27 18:38:58
https://medium.com/s/story/machine-learning-its-all-about-data-1dd5fdfbdd48
false
539
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Leemay Nassery
Software Engineer
1d57e105b6f9
leemaynassery
0
1
20,181,104
null
null
null
null
null
null
0
null
0
b468e053644a
2018-02-10
2018-02-10 17:21:48
2018-02-10
2018-02-10 17:28:16
0
false
en
2018-02-10
2018-02-10 17:30:28
4
1dd7b6a1f8e2
0.215094
2
0
0
In this AI Edition of The Point of Struggle podcast Ben & David discuss the most common reasons that organizations fail to implement or get…
5
S01 E03 AI — Why Most AI Projects Fail In this AI Edition of The Point of Struggle podcast Ben & David discuss the most common reasons that organizations fail to implement or get an ROI on their AI/Machine Learning projects. Episode Links The Wired Guide to Artificial Intelligence Why Most AI Projects Fail Sponsored by: ZIFF
S01 E03 AI — Why Most AI Projects Fail
5
s01-e03-ai-why-most-ai-projects-fail-1dd7b6a1f8e2
2018-06-10
2018-06-10 17:10:04
https://medium.com/s/story/s01-e03-ai-why-most-ai-projects-fail-1dd7b6a1f8e2
false
57
a double entendre where 'point' can be interpreted both as the moment in time of the struggle and as the meaning of the struggle — our focus is on the nexus of user experience and artificial intelligence
null
null
null
the point of struggle
gonzo@ziff.io
the-point-of-struggle
UX,AI,CUSTOMER SUCCESS,PRODUCT DESIGN,DESIGN THINKING
pointofstruggle
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
David "Gonzo" Gonzalez
Data Scientist, Storyteller, LEGO Coach
573cab224fc
datagonzo
240
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-14
2018-04-14 23:45:50
2018-04-09
2018-04-09 05:00:28
1
false
en
2018-04-14
2018-04-14 23:48:14
20
1dd830cd8fd5
4.271698
3
1
0
By Gunjan Bhardwaj
4
Leveraging AI and Blockchain to Transform Healthcare By Gunjan Bhardwaj Medicine is ripe for disruption. As David Lawrence, former Chairman and CEO of the Kaiser Foundation Health Plan, wrote in his 2005 book chapter Bridging the Quality Chasm: "Between $.30 and $.40 of every dollar spent on healthcare is spent on the costs of poor quality. This extraordinary number represents slightly more than a half-trillion dollars a year. A vast amount of money is wasted on overuse, underuse, misuse, duplication, system failures, unnecessary repetition, poor communication, and inefficiency." The costs that result from poor quality trickle down to consumers and patients, who shoulder much of the burden of ever-increasing healthcare costs. In order to improve healthcare accessibility, the utilization of medical resources must be made more accurate, more efficient, and more secure. Two cutting-edge technologies, in particular, show significant potential for elevating the use of data and other resources in the medical industry. These technologies are blockchain and artificial intelligence. By utilizing the latest advancements in these technologies, the medical industry can improve quality, bring down cost, and democratize healthcare like never before. Here are four key ways in which the healthcare field can leverage blockchain and AI technology. Smart contracts Originally developed for cryptocurrency in 2008, blockchain allows collaborating parties with competing interests to keep a tamper-proof, distributed, digital ledger. As we have seen, the implications of this technology for finance are highly disruptive. However, the implications of blockchain in medicine are more subtle and far-reaching because they involve medical ethics, including consent, privacy, and accuracy of clinical measurements, as well as financial transactions. Blockchain isn't only for financial transactions. It holds value for any agreement between two parties that needs to be auditable. In the legal profession, this means revolutionizing property law, notary public functions, and chain-of-custody. But for pharmaceutical companies worried about the burgeoning costs of clinical trials, blockchain has value in smart contracts. Potentially protecting patient anonymity, and even enabling profit sharing, smart contracts can make research results available without the bias of human data collection and data analytics. Reducing drug prices Today's most cutting-edge AI programs are capable of "contextual normalization," which allows them to simultaneously generate and test new hypotheses by analyzing complex sets of biological data. AI holds significant promise for innovation within the healthcare industry, particularly in a pharmaceutical context. AI is significantly increasing the variety and breadth of data that is analyzed during the course of drug research and development. Furthermore, AI accelerates the rate of analysis to speeds unattainable by human researchers. Today's AI is also capable of generating and testing novel hypotheses with greater efficiency, which enables more accurate, efficient, and timely clinical trials. Improved data analysis in pharmaceutical R&D means higher success rates, more innovation, and more affordable drugs for patients. Major companies such as Merck & Co. and Johnson & Johnson have already been investing in AI-driven innovation, and others are sure to follow.
Secure data transmission/warehousing There is increasing divergence between the convenience of consumer software such as mobile phones and the achingly inefficient and out-of-date software that is sold to hospital systems at an enterprise level. A panel of UK experts commissioned to examine this discrepancy for the National Health Service in 2015 concluded that the "digital revolution has largely bypassed the NHS". The report concluded: "Many records are insecure, paper-based systems which are unwieldy and difficult to use. Seeing the difference that technology makes in their own lives, clinicians are already manufacturing their own technical fixes. They may use Snapchat to send scans from one clinician to another or camera apps to record particular details of patient information in a convenient format. It is difficult to criticise these individuals, given that this makes their job possible. However, this is clearly an insecure, risky, and non-auditable way of operating, and cannot continue." This report was borne out by other findings that the behavior of clinicians in hospitals is quite different from their stated concerns. Fifty-five percent of physicians say they are worried about cyber attacks, and 87% of physicians say that their practice is compliant with HIPAA security rules, while only 66% are confident they know what those rules are. Clinical working groups frequently text patient details and clinical photos to each other to facilitate care interactions. In a private 2012 online survey of hospital workers at the UC Davis medical center (a well-respected US hospital), 88% of surgery residents and 71% of attending surgeons routinely texted about patient-related care. UK NHS data were comparable: 63% of doctors admitted texting patient information, and 46% sent photos or x-rays to colleagues. Because 83% of physicians use mobile phones, and the most common security breaches involve stolen mobile phones, clinical texting behavior is a non-trivial issue in clinical data security. Given that this behavior is not likely to cease, we need to facilitate it in more functional ways. Blockchain has the ability to make this data transfer secure and tamper-proof, and even to obtain patient consent for photo-sharing with smart contracts. Increasing interoperability While it's true that blockchain is not widely used yet, it is being implemented rapidly compared to many other disruptive technologies. For instance, in 2017, blockchain was designed for use in three applications in healthcare: identifying and tracking selected prescription drugs, audit trails for provider networks, and value-based care (payment based on outcomes, not procedure). Many more use cases have been posited, such as EMR record access by providers, supply chain integrity to prevent losses from pharmaceutical counterfeiting, reduction in Medicare fraud, clinical trials, and data security in the new Internet of Medical Things (IoMT). A 2017 IBM survey of 200 healthcare executives in 16 countries found that 16% expected to have a commercial blockchain solution at scale in 2017, while 9 out of 10 institutions planned to invest in pilots by 2018. Executives were primarily interested in blockchain in three primary areas: clinical trial records, regulatory compliance, and medical and health records. Blockchain has many upsides and few downsides for the savvy entrepreneur. Perhaps when blockchain is implemented, we will finally see better software in medicine.
We could replace the siloed, out-of-date, and overpriced enterprise systems clinicians currently use with true EMR interoperability, smartphone/wearable patient apps, and secure interoperable back-end data-sharing. Originally published at thedoctorweighsin.com on April 9, 2018.
Leveraging AI and Blockchain to Transform Healthcare
4
leveraging-ai-and-blockchain-to-transform-healthcare-1dd830cd8fd5
2018-04-18
2018-04-18 13:32:34
https://medium.com/s/story/leveraging-ai-and-blockchain-to-transform-healthcare-1dd830cd8fd5
false
1,079
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
The Doctor Weighs In
Dr. Patricia Salber and friends weigh in on leading news in health and healthcare
28ab067a19b1
docweighsin
3,367
2,330
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-08
2018-06-08 09:19:22
2018-06-08
2018-06-08 09:22:04
0
false
en
2018-06-08
2018-06-08 09:22:04
2
1dd882518990
0.569811
0
0
0
Choose the right artificial intelligence libraries for your project, or contact us for consulting and support on your AI project. We will be…
4
Pros and Cons of popular Artificial Intelligence Libraries Choose the right artificial intelligence libraries for your project, or contact us for consulting and support on your AI project. We will be looking at top-quality libraries used for artificial intelligence, their pros and cons, and some of their features. I thought it could be interesting to share our findings and impressions to help people who are starting out in this fascinating world. Let's dive in and investigate the world of these AI libraries. 22 Artificial Intelligence Libraries — A Quick Comparison Artificial Intelligence (AI) is one of the hottest areas of technology research. Most large companies, such as IBM, Microsoft, Google, Facebook, and Amazon, are investing heavily in their own R&D, as well as buying up startups that have made progress in areas like machine learning and deep learning. For more details about AI: https://aiamie.com/libraries-for-artificial-intelligence-world
Pros and Cons of popular Artificial Intelligence Libraries
0
pros-and-cons-of-popular-artificial-intelligence-libraries-1dd882518990
2018-06-08
2018-06-08 09:22:05
https://medium.com/s/story/pros-and-cons-of-popular-artificial-intelligence-libraries-1dd882518990
false
151
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
magesh babu
null
b3888a842fbf
mageshbabu141
0
1
20,181,104
null
null
null
null
null
null
0
null
0
1f35b6f451e8
2018-04-26
2018-04-26 09:51:11
2018-04-26
2018-04-26 10:12:41
3
false
en
2018-04-27
2018-04-27 10:26:53
10
1ddae82d1d51
1.534906
0
0
0
null
5
Drafting daily resources for 3 dummies to make it in machine learning — Act 10, Scene 4 OpenCV - Apply mask to a color image Well, here is a solution if you want the background to be other than a solid black color. We only need to invert the…stackoverflow.com Introduction to deep learning | Python Here is an example of Introduction to deep learning: .campus.datacamp.com Prototypage_Plastif.pdf Store photos and docs online. Access them from any PC, Mac or phone. Create and work together on Word, Excel or…1drv.ms keras_resnet.py Store photos and docs online. Access them from any PC, Mac or phone. Create and work together on Word, Excel or…1drv.ms Search | arXiv e-print repository Authors: Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel…arxiv.org Building powerful image classification models using very little data But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or…blog.keras.io Image Preprocessing - Keras Documentation interpolation: Interpolation method used to resample the image if the target size is different from that of the loaded…keras.io mnist_transfer_cnn (1).py Store photos and docs online. Access them from any PC, Mac or phone. Create and work together on Word, Excel or…1drv.ms How AI is helping us discover materials faster than ever For hundreds of years, new materials were discovered through trial and error, or luck and serendipity. Now, scientists…www.theverge.com Pourquoi Total et Atos s'associent à Google Cloud dans l'intelligence artificielle Personne ne veut rater le train de l'IA. Deux annonces de cette importance en une seule matinée et dans des secteurs…www.latribune.fr
Drafting daily resources for 3 dummies to make it in machine learning — Act 10, Scene 4
0
drafting-daily-ressources-for-3-dummies-to-make-it-in-machine-learning-act-10-scene-4-1ddae82d1d51
2018-04-27
2018-04-27 10:26:55
https://medium.com/s/story/drafting-daily-ressources-for-3-dummies-to-make-it-in-machine-learning-act-10-scene-4-1ddae82d1d51
false
261
We offer contract management to address your acquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best resources to handle their transactions fast & SAFE.
null
ethercourt
null
Ethercourt Machine Learning
adoucoure@dr.com
ethercourt
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
ethercourt
Machine Learning
machine-learning
Machine Learning
51,320
WELTARE Strategies
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
9fad63202573
WELTAREStrategies
196
209
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-18
2018-06-18 04:52:22
2018-06-18
2018-06-18 04:57:38
5
false
en
2018-06-18
2018-06-18 04:57:38
0
1ddb16432209
4.942767
2
0
0
Hi, everyone! Have a nice weekend and happy Father's Day to all of you who are fathers! This week, we will cover news about Xiaomi VR's…
5
ARinChina Weekly Recap Vol.6 Hi, everyone! Have a nice weekend, and happy Father's Day to all of you who are fathers! This week, we cover news about Xiaomi VR's marvelous sell-out in 3 minutes, RealMax's new large-FOV prototype shown at CES Asia, the collaboration of SteamVR and Perfect World to introduce Steam VR to the Chinese market, and the partnership of HTC Vive and Dave & Buster's to deploy a large number of arcade experience stores. We also interviewed one of the AR industry research companies in China to elaborate on where the 68 billion Chinese AR industry budget reported by IDG is being spent. Let's take a look! — — — — — — — — — — — PART 1 News — — — — — — — — — — — — - Xiaomi's VR headset, built in collaboration with Oculus (the Oculus Go), sold out in three minutes when first launched in the Chinese market last week Last week, the Chinese company responsible for Oculus Go's manufacturing launched the Chinese version of the Oculus Go. It is reported to be “doing extremely well” in the Chinese market. A report from Greenlight Insights showed that 30k Mi VR standalone headsets were sold on the first day alone, and the headsets sold out entirely in merely 3 minutes. Over 50,000 users have signed up on the official Mi VR standalone product page in order to be notified when the headset comes back in stock. So far, Mi VR has 2,400 comments on its JD online page. Most customers commented that they still notice relatively obvious image noise in the 2K and 4K editions, but the 8K edition has a really high resolution and excellent product performance. The total satisfaction rate goes up to 98%. Source: JD.com RealMax previewed Qian, its prototype with the industry's largest FOV at 100.8 degrees, at CES Asia, along with its AR creation platform The Qian hardware is self-contained and mobile, and it claims a 100.8-degree FOV, which exceeds that of all other AR hardware devices in the world. It is designed to integrate and offer an all-in-one AR and VR experience, with 6DoF embedded as well. It also provides 180 degrees of see-through optical visibility and weighs 450 grams. Realmax Studio is a web-based AR development platform that brings together the familiarity of tools like Unity with the capability to share and distribute Mixed Reality experiences through almost any browser and HTML5 website. Source: The Verge “The FOV, at more than 100 degrees, is genuinely impressive. The current iteration of Microsoft's HoloLens, likely the most advanced AR headset available at the moment, suffers from a painfully small FOV of around 35 degrees. That means the rectangle within which the headset can project virtual images is about the size of a deck of cards floating at eye level. Realmax, on the other hand, manages to project images almost to the edge of where your peripheral vision starts.” — Nick Statt Steam VR partnered with Perfect World to enter the Chinese market Many game players in China buy games via Valve's Steam platform, but there has been no official way for them to buy VR games. Now, Valve is getting this done by partnering with one of China's biggest gaming and entertainment companies, Perfect World. Steam VR and Perfect World jointly announced their partnership, which builds on a relationship started in 2012: Perfect World, as a strategic partner in China, acquired the certification to manage Steam VR and DOTA, as well as the upcoming CS: GO.
Source: ARinChina Most western companies struggle with entering the Chinese market, and this is one example of why it is getting easier for them to manage it; just take a look at what is happening between Oculus and Xiaomi. Steam VR has over 3,000 VR games, and this new relationship will provide Chinese game players and developers with a new way to get access to this wealth of game content and channels. But the two companies have not announced an exact launch date for Steam China yet. HTC VIVE and Dave & Buster's announce the largest commercial virtual reality arcade deployment to bring entertainment to consumers nationwide HTC VIVE™ and Dave & Buster's Entertainment, Inc. have announced the largest commercial VR arcade partnership launch. At 114 Dave & Buster's locations nationwide, Vive hardware will power the new installations, which will offer new and exclusive experiences at the entertainment venues. The exclusive content and the premium Vive VR headset, paired with a proprietary multi-participant motion platform created exclusively for Dave & Buster's, provide a truly memorable and immersive experience. Combined with Vive's realistic graphics, directional audio, and haptic feedback, players will feel truly immersed in the virtual world with realistic movement and actions. — — — — —— — — — — — -PART 2 Interview — — — — -— — — — — - The most recent report from IDG showed that AR/VR spending in Asia Pacific has reached 11.1 billion dollars; spending in China makes up almost 90% of that, nearly 9.9 billion. Spending on games ranks first, followed by education. However, the growth of AR/VR has not been what people expected. Although the government and some investors have run pilot tests on the AR/VR market, the results still fall short of their appetite. What exactly is the performance of the Chinese AR/VR market? We talked to a senior researcher who has been observing the AR/VR market for years. ARC: How would you judge the performance of the Chinese AR/VR market this year? Alex: Actually, most Chinese consumers can barely tell whether there are any outstanding or popular applications in China, because some killer apps, for instance Pokémon Go, have never come to China. In terms of education, many K-12 schools have introduced AR/VR creator tutorials, which have already become very similar to AR/VR digital in-class education. In addition to that, higher education institutions are pouring full effort into building virtual teaching centers and research labs, which is a potential market of large size as well. These areas account for most of the AR/VR industry budgets. Digital facts from IDG about the Chinese AR/VR market: “The United States will maintain a strong foothold as the region with the largest CAGR over the forecast period at 99.1%. The Middle East & Asia, and Asia Pacific (excluding Japan) (APeJ) will experience similar CAGRs over the forecast period followed closely by Latin America. In 2018, China will top all regions in spending at $10.2 billion with top spending on host devices, followed by VR software and AR software.”
ARinChina Weekly Recap Vol.6
3
arinchina-weekly-recap-vol-6-1ddb16432209
2018-06-18
2018-06-18 08:55:45
https://medium.com/s/story/arinchina-weekly-recap-vol-6-1ddb16432209
false
1,089
null
null
null
null
null
null
null
null
null
Virtual Reality
virtual-reality
Virtual Reality
30,193
Jean Liu
ARinChina Journalist
188cc16f8597
jeanliu_85015
3
14
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-03
2018-09-03 14:10:27
2018-09-03
2018-09-03 14:13:53
0
false
en
2018-09-03
2018-09-03 14:13:53
0
1ddbf27193e1
3.109434
0
0
0
With the rise of the digital age, the World Wide Web, social media, and funny cat pictures, most of the world's population…
5
Countless Measures of Machine Learning Fuel With the rise of the digital age, the World Wide Web, social media, and funny cat pictures, most of the world's population now creates enormous amounts of new digital data every second of every day. Current global growth estimates are that every two days, the world now creates as much new digital information as all the data ever created from the dawn of humanity through the start of the present century. It has been estimated that by 2020, the size of the world's digital universe will be close to 44 trillion gigabytes. One of today's hottest technology trends concerns the new idea of the IoT, built on the notion of connected devices that are all able to communicate over the Internet. Without question, the rise of this new technological revolution will also drive today's immense data growth, which is expected to increase exponentially over the next decade. In the very near future, essentially every premium consumer device will be a candidate for some kind of IoT information exchange for various uses, for example, preventive maintenance, manufacturer feedback, and usage detail. The IoT concept encompasses billions of everyday devices that all carry unique identifiers with the ability to automatically record, send, and receive data. For instance, a sensor in your smartphone may track how fast you are walking; a highway toll system could be using multiple high-speed cameras strategically positioned to track traffic patterns. Current estimates are that only around 7 percent of the world's devices are connected and communicating today. The amount of data these 7 percent of connected devices create is estimated to represent just 2 percent of the world's total data universe today. Current projections are for this number to grow to around 10 percent of the world's data by 2020. The IoT boom will also affect the amount of useful data, that is, data that could be analyzed to produce meaningful results or predictions. By comparison, in 2013, just 22 percent of the information in the digital universe was considered useful data, with under 5 percent of that useful data actually being analyzed. That leaves a gigantic amount of data still raw and underutilized. On account of the growth of data from the IoT, it is estimated that by 2020, in excess of 35 percent of all data could be considered useful data. This is where you can find today's data “goldmines” of business opportunity and see how this trend will keep growing into the near future. One additional benefit from the proliferation of IoT devices and the ever-growing data streams is that data scientists will also have the unique ability to further join, combine, and refine the data streams themselves and genuinely raise the IQ of the resulting business intelligence we get from the data. A single stream of IoT data can be extremely valuable on its own, but when joined with other streams of relevant data, it can become exponentially more powerful. Consider the case of forecasting and scheduling predictive maintenance activities for elevators.
Periodically sending streams of data from the elevator's sensor devices to a monitoring application in the cloud can be extremely helpful. When this is combined with other data streams like weather information, seismic activity, and the upcoming schedule of events for the building, you have now dramatically raised the bar on the ability to implement predictive analytics to help forecast usage patterns and the associated preventive maintenance tasks. The upside of today's explosion of IoT devices is that it will provide many new avenues for connecting with customers, streamlining business processes, and reducing operational costs. The downside of the IoT phenomenon is that it also presents many new challenges to the IT industry as organizations look to acquire, manage, store, and secure (by means of encryption and access control) these new streams of data. In many cases, organizations will also have the additional duty of providing extra levels of data protection to shield confidential or personally identifiable information. One of the greatest advantages of machine learning is that it has the remarkable capacity to consider many more variables than a human could when making analytical predictions. Combine that fact with the ever-expanding quantities of data literally multiplying at regular intervals, and it's no big surprise there couldn't be a better time for exciting new technologies like Azure Machine Learning to help tackle critical business problems. IoT represents a huge opportunity for today's new generation of data science entrepreneurs: up-and-coming data scientists who know how to source, process, and model the correct data sets to create an engine that can be used to successfully predict a desired outcome.
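To make the elevator example concrete, here is a minimal sketch of joining an IoT sensor stream with a weather feed in pandas; the file names and column names are hypothetical, not from the original article.

```python
# Sketch: enrich per-minute elevator sensor readings with the most recent
# weather observation, then build a simple rolling feature for predictive
# maintenance. All names are illustrative.
import pandas as pd

sensors = pd.read_csv("elevator_sensors.csv", parse_dates=["timestamp"])
weather = pd.read_csv("weather.csv", parse_dates=["timestamp"])

sensors = sensors.sort_values("timestamp")
weather = weather.sort_values("timestamp")

# Attach the latest weather reading at or before each sensor timestamp
combined = pd.merge_asof(sensors, weather, on="timestamp", direction="backward")

# Mean vibration over the last 60 readings as a maintenance signal
combined["vibration_mean_60"] = combined["vibration"].rolling(60).mean()
print(combined.tail())
```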
Countless Measures of Machine Learning Fuel
0
countless-measures-of-machine-learning-fuel-1ddbf27193e1
2018-09-03
2018-09-03 14:13:53
https://medium.com/s/story/countless-measures-of-machine-learning-fuel-1ddbf27193e1
false
824
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Stephen Simon
null
8b46b90efd25
theonlystephensimon
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-01
2017-12-01 18:01:35
2017-12-01
2017-12-01 18:51:19
1
false
en
2017-12-01
2017-12-01 18:54:09
0
1ddc4047fb1d
1.471698
1
0
0
Suppose we are building an automated car that drives through the city and learns, along the way, the different factors which affect the time to…
2
Learning in Artificial intelligence for beginners Suppose we are building an automated car that drives through the city and learns, along the way, the different factors which affect the time to reach its destination. Or say we design a learning algorithm for our day-to-day life tasks like booking a hotel, traveling, buying groceries, etc. An agent, for example the automated car, can have a knowledge base, which might consist of a set of sentences. These sentences are basically facts about the system. Facts like: To move when the traffic light is green and stop when it's red. To drive on a particular side of the road. The shortest route from one point to another. Alternate routes for the same. The automated car with planning algorithms can reach its destination, giving a good solution. But it can produce even better results with learning. Suppose the automated car has learning abilities. The car uses some algorithm to reach a destination but also continuously observes its environment and the results of its actions. The next time a similar situation arises, the car can use the things it learned and perform better. Learning in AI consists of the following: Learning element Critic Performance element Problem generator The learning element is the one responsible for evolving and improving the components of the agent. It gets feedback from the critic about the agent's performance against a fixed standard, and it receives the results of the agent's actions from the performance element. The problem generator provides the agent with new and random possibilities to explore, like taking a new route, using new tactics, etc. Decision trees A decision tree is the simplest way to implement learning; it can only represent simple Boolean functions. It consists of different attributes, for example, in the figure above, the weather (sunny, overcast, rainy) and factors like humidity and wind. These lead to terminating nodes, either yes or no. Learning in Artificial intelligence can take many approaches — Association rule learning, Neural networks, Bayesian networks, Deep learning, Inductive logic programming, Clustering, etc.
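To make the decision-tree idea concrete, here is a minimal sketch using scikit-learn; the tiny weather dataset is invented for illustration and is not from the original article.

```python
# Sketch: a decision tree over weather attributes deciding yes/no,
# mirroring the sunny/overcast/rainy example above. Data is made up.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "weather":  ["sunny", "sunny", "overcast", "rainy", "rainy", "overcast"],
    "humidity": ["high", "normal", "high", "high", "normal", "normal"],
    "windy":    [False, True, False, True, False, True],
    "play":     ["no", "yes", "yes", "no", "yes", "yes"],
})

# One-hot encode categorical attributes so the tree can split on them
X = pd.get_dummies(data[["weather", "humidity", "windy"]])
y = data["play"]

tree = DecisionTreeClassifier().fit(X, y)

# Classify a new day: sunny, normal humidity, not windy
new_day = pd.get_dummies(
    pd.DataFrame([{"weather": "sunny", "humidity": "normal", "windy": False}])
).reindex(columns=X.columns, fill_value=0)
print(tree.predict(new_day))  # prints the predicted yes/no label
```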
Learning in Artificial intelligence for beginners
1
learning-in-artificial-intelligence-for-beginners-1ddc4047fb1d
2018-04-10
2018-04-10 02:37:04
https://medium.com/s/story/learning-in-artificial-intelligence-for-beginners-1ddc4047fb1d
false
337
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tulika Vijay
null
422b56d0d422
tulikavijay54
9
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-25
2018-06-25 10:30:57
2018-08-06
2018-08-06 09:51:00
8
false
en
2018-08-10
2018-08-10 13:32:40
21
1de035d0bd54
4.620126
24
0
0
The story so far…
5
The Women & Data Science scholarship The story so far… In June 2017, the Women & Data Science scholarship was created for Data Science Retreat. It was designed to address gender imbalance at Data Science Retreat. The overall mission is to contribute to equality and empowerment in the tech industry by making the statement that industry is missing out by not searching widely and well enough for talent. A year has passed, so it is time to present the track record so far. At Data Science Retreat, we have moved from a male-to-female participant ratio of about 95/5 to 70/30. Also, the ratio among teachers and mentors improved significantly, with 5 out of 18 teachers now being female. The scholarship program has enabled us to do better than the tech industry at large in only one year. Moreover, we are empowering core talent that will shape and invent the data-driven products of the 21st century. Franziska Schmitt, Rachel Berryman, Teresita Guerrero, Macarena Beigier, Lacin Ulas, Lisa Heße, and Setareh Sadjadi (from left to right). All photos by www.tatyanakronbichler.de To my mind, the scholarship recipients have been among the most ambitious and successful participants, and they go on to be highly valuable Data Scientists and Machine Learners. Allow me to present the recipients of the scholarship in chronological order: Rachel Berryman Rachel Berryman is Data Scientist at Tempus Energy, utilizing Machine Learning to optimize energy use for flexible assets. Before joining Data Science Retreat, she was a senior energy analyst using a SQL database, creating bespoke reports for clients, and customizing a SaaS platform. In 2017 she upskilled to Machine Learning and moved on to predictive modeling. Rachel holds an MSc in Sustainable Development from the University of St Andrews and a BA from the State University of New York. Rachel is now teaching and mentoring at Data Science Retreat. Lacin Ulas Lacin Ulas is Data Scientist at TEB BNP Paribas, using Machine Learning techniques to improve financial services like credit scoring and default prediction. She also utilizes Deep Neural Networks on GPUs for prototyping. Before joining Data Science Retreat in 2017, she was a data modeler in the financial services industry. Lacin holds a BSc in Electrical and Electronics Engineering from Boğaziçi University, Istanbul. Lacin now runs a regular workshop for Data Science Retreat on the Numerai competition for reinventing stock markets based on crowd-sourced Machine Learning. Setareh Sadjadi Dr Setareh Sadjadi is Data Scientist at Fresh Energy, building more and better products using energy consumption data from smart meters. She is a chemical engineer (Sharif University of Technology, Tehran), also holding a PhD in process engineering from TU Berlin. Before joining Data Science Retreat in 2018, her university career had led her to acquire very strong project management capabilities. While at DSR, she was a member of ‘Volt Kraft’, the team that won the Connected Living Hackathon 2018. Macarena Beigier Dr Macarena Beigier-Bompadre is Data Scientist at Ancud, deploying machine learning for consultancy clients seeking to transform their business. Macarena holds a PhD in biology (University of Buenos Aires), and for 7 years was a researcher at the Max Planck Institute for Infection Biology (Berlin), specializing in infectious diseases. She joined Data Science Retreat in 2018. In moving from science to data science, Macarena solves problems through data, analytic thinking, and visual representations.
Teresita Guerrero Teresita Guerrero completed her training as a Data Scientist in July 2018. Before that, Teresita was a software engineer for Intel in Mexico. Her role included handling data pipelines, a variety of databases, and cloud computing infrastructure. While working, she took herself through an MA program in data science and data processing (2015). Lisa Heße Dr Lisa Heße is completing her training as a Data Scientist in September 2018. Lisa holds a PhD in Theoretical Physics (University of Regensburg), and has strong prior skills in Python, SQL, and data analysis and visualization. At Data Science Retreat she is involved in a larger project on unpaved road detection that includes image processing and deep learning approaches. Franziska Schmitt Dr Franziska Schmitt is completing her training at Data Science Retreat in September 2018. She holds a PhD in Neurobiology (University of Würzburg). After defending her thesis, Franziska commenced her training in Machine Learning on her own, utilizing online courses and resources. At DSR she is combining her ability to contextualize data with hands-on expertise in the most relevant machine and deep learning technologies. Here is a list of more successful female participants at Data Science Retreat since June 2017 Dr Francesca Diana is Data Scientist at Codecentric AG. She holds a PhD in Mathematics from the University of Regensburg. Carmen Iniesta López is Data Scientist at CodeControl and an NLP developer. She is a computer scientist by training (Universidad de Murcia). Dr Juanjiangmeng Du is a Data Scientist and Geneticist at the University of Cologne. Rita Tapia Oregui is an NLP engineer at Verne AI. She was educated in Arabic philology at the Universidad de Granada. Alice Martin is completing her training at Data Science Retreat in September 2018. She holds an MSc in Financial Engineering from ISAE-Supaero. Finally, here are the important contributors to the practitioner-led teaching and training at Data Science Retreat Dr Amelie Anglade is a freelance Data Scientist consulting for corporates and startups. She teaches Python. Dania Meira is Data Scientist at MYTOYS Group. She teaches data munging and SQL. Nour Karessli is Data Scientist at Zalando SE. She teaches Machine Learning with small samples. Sandra Yojana Meneses is a Machine Learning Engineer at Data Science Retreat. She teaches Git and supports GitHub projects by DSR participants. Rachel Berryman is a Data Scientist with Tempus Energy (see above). She teaches model pipelines. Acknowledgement: All photos by www.tatyanakronbichler.de Reaching the author: https://www.linkedin.com/in/chrisarmbruster/
The Women & Data Science scholarship
122
the-women-data-science-scholarship-1de035d0bd54
2018-08-10
2018-08-10 13:32:40
https://medium.com/s/story/the-women-data-science-scholarship-1de035d0bd54
false
924
null
null
null
null
null
null
null
null
null
Data Scientist
data-scientist
Data Scientist
488
Chris Armbruster
….find and empower 10,000 Data Scientists in Europe
c9aa55d24753
chrisarmbruster
109
35
20,181,104
null
null
null
null
null
null
0
pip install Scrapy scrapy shell scrapy shell <URL> fetch(URL) response.css('title') response.css('title').extract() response.css('title').extract_first() response.css('title')[0].extract() response.xpath('//title') response.xpath('//div[@id="content"]/div[@id="bodyContent"]/div[4]/div/p/text()').extract() response.xpath('//div/div[@id="main"]/div[@id="primary"]/div[@class="search-result-content "]/ul/li/div/div[@class="product-details"]/div[@class="product-name"]/a/text()').extract_first() for <div class="rfloat _ohf"> "//div/div" or "//div/div[1]/" or "//div/div[@class='rfloat _ohf']" for <div class="u_-7w6jqfy"> "//div/div[2]" or "//div/div[@class='u_-7w6jqfy']" response.xpath('//div[@id="content"]/div[@id="bodyContent"]/div[4]/div/p/descendant::text()').extract() response.xpath('//div[@id="content"]/div[@id="bodyContent"]/div[4]/div/p/node()/text()').extract() import pandas as pd fetch("1.html") one = response.xpath('//div') listd = pd.DataFrame({"name": []}) for a in one: second = a.xpath('.//div') for b in second: temp = b.xpath('.//div/div/p/text()').extract() if temp: t = pd.DataFrame({"name": temp}) listd = listd.append(t)
14
6e4daecde3a3
2018-06-12
2018-06-12 13:11:48
2018-06-13
2018-06-13 05:39:12
4
false
en
2018-06-13
2018-06-13 06:04:13
6
1de1d6285e82
4.930189
6
0
0
It's all about data; these days this line is becoming more and more true. As Data Scientists or Data Analysts, we often play a lot with…
5
Introduction to Web Scraping using Python, Part-1 It's all about data; these days this line is becoming more and more true. As Data Scientists or Data Analysts, we often play a lot with data. But the question is where this data comes from. If you want to make your own Twitter Sentiment Analyzer (a common project newbies do), then you need data on which you can train your model. Now data can be obtained in three ways: You download some csv file from somewhere or ask someone to provide you with ready-made data, or copy-paste all the data (which you never will do). You use the API of the site from which you want to obtain data. You scrape their site and get the data. The first two are easy but not always possible, and many times possible but costly to implement, whereas the third one is free and possible in almost all cases, but just a little complex to start with. But be assured, once you get the hang of it, it's as simple as writing code to add two numbers. So let's begin… What is Web Scraping? Web scraping is a technique used to extract large amounts of data present in unstructured format (HTML tags) over the web and save it to a local file. The tags/nodes in HTML pages are used to navigate and reach the target node so the data can be extracted. The nodes can also be used to redirect to other pages so that the scraper can jump between pages. From: http://webscraper.io/ In the above figure, the Start URL is the point from which we start scraping. Category Links are the nodes which redirect the scraper to other pages or nodes where text/tables/images/more links/attributes can be found and scraped. Tools for Scraping Every scripting language supports scraping with some tools: Python has BeautifulSoup & Scrapy, R has the rvest package. Today we are limiting ourselves to Python, and within that to Scrapy. Though BeautifulSoup is easier than Scrapy, Scrapy is more powerful. Scrapy has very nice documentation, which can be found here: Scrapy Tutorial - Scrapy 1.5.0 documentation doc.scrapy.org Scrapers made with Scrapy are known as Spiders and are said to crawl on pages as spiders crawl on webs. Scrapy has a very powerful command line interface, or shell, which makes our tasks much easier. Prerequisites You need to have basic knowledge of the following: HTML Python Working of websites, URLs Installation If you already know how to install any of the Python packages, you can install Scrapy and its dependencies from PyPI with: Sometimes you may need to solve compilation issues for some Scrapy dependencies depending on your operating system, so I recommend that you install Scrapy in a Virtual Environment (for details refer to the provided link). Scrapy Shell The scrapy shell can be launched from the Terminal by executing the following commands: The latter one already opens a connection to the given URL, and the response is stored in the variable “response”. In the first, you can establish a connection with the fetch command. Again the response will be in “response”. You can try writing “response” in the shell and pressing enter to see your connection status, which should be 200. If it's other than 200, you have some connection/URL problem or the HOST doesn't allow scraping. Note that the URL need not be live; it can simply be the address of your downloaded web page too. Using the shell, you can try selecting elements using CSS with the response object: The first returns a selector, the second returns all the titles found, and the third returns the first title found; each can be used according to your need. 
The index can also be used to specify the order in which we want to access elements, e.g. to see the 1st title, we can also do Another way to do the same is using “xpath”. XPath expressions are very powerful, and are the foundation of Scrapy Selectors. In fact, CSS selectors are converted to XPath under the hood. Indexing, extract() and extract_first() all work in the same way. When used without extract() or extract_first(), response.xpath('//..') returns a selector. Scrapy comes with its own mechanism for extracting data. They're called selectors because they “select” certain parts of the HTML document, specified either by XPath or CSS expressions. Now the main question is finding the correct path which represents the correct tag/node that contains the data we require. I am not going to tell you how to find this path; that's something you should and could do on your own, but I am providing you with some examples so that you can look and learn. You can see that divs are addressed by name and by index too. In the above one you can see that a div needs to be addressed by the correct attribute type, like id and class. A div with no specific attribute address or index is the first one in the nesting. Now, let's observe the above image. The top can simply be accessed by “//div” or “//div[@class='clearfix _42ef']”. Now if we want to access nested div tags then, Another thing you should keep in mind is how to extract data from all the sub tags in one go. You can use the following syntax to do that And if you want to see all sub nodes, then This should be enough to make you familiar with how to use xpath. Advanced xpath Expressions Sometimes a situation arises when you want to extract information from nested tags. Consider the following scenario The body contains three div tags with id="top" having the same structure (within a nameless tag); these then contain five separate div tags by the name of "content", again each with the same structure; each content tag has the information we seek in a paragraph tag. So the question is how to extract the data from these 15 (5*3) para tags, and the answer is nesting. We will be using selectors for selecting the top nodes and then using them to visit all the sub nodes inside them. Read the following code carefully and try to understand it. The output is something like this (\r\n\t can be removed during preprocessing) Just practice, as it's the only way you can become comfortable with scrapy. In part 2, we will learn about Scrapy Spiders and how to store data in them.
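Since the article's inline code blocks were stripped in this export, here is a minimal sketch of the selector workflow described above, using Scrapy's Selector directly; the HTML snippet is invented for illustration.

```python
# Sketch: CSS and XPath selection, plus nested selection over repeated
# div structures, as discussed above. The HTML is a made-up example.
from scrapy.selector import Selector

html = """
<div id="top">
  <div class="content"><p>first</p></div>
  <div class="content"><p>second</p></div>
</div>
"""
response = Selector(text=html)

# CSS and XPath address the same nodes
print(response.css("div.content p::text").extract())                 # both texts
print(response.xpath('//div[@id="top"]//p/text()').extract_first())  # 'first'

# Nested selection: pick a top node, then walk its children with './/'
for top in response.xpath('//div[@id="top"]'):
    for content in top.xpath('.//div[@class="content"]'):
        print(content.xpath('./p/text()').extract())
```

In the real shell, fetch(<URL>) builds the same kind of response object for a live page, so these selector calls carry over unchanged.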
Introduction to Web Scraping using Python, Part-1
19
introduction-to-web-scraping-using-python-part-1-1de1d6285e82
2018-06-14
2018-06-14 10:13:26
https://medium.com/s/story/introduction-to-web-scraping-using-python-part-1-1de1d6285e82
false
1,121
A guide to Data Science concepts covering Machine Learning algorithms, NLP concepts, data processing, working with images, and Deep Learning & Deep NLP algorithms and practices.
null
null
null
Guide to Data Science
null
guide-to-data-science
MACHINE LEARNING,DATA SCIENCE,IMAGE PROCESSING,DEEP LEARNING
null
Web Development
web-development
Web Development
87,466
Shubhankar Mohan
null
a9399c87578d
mohanshubhankar
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-01
2017-12-01 12:07:58
2017-12-01
2017-12-01 12:14:02
1
false
en
2018-01-03
2018-01-03 20:50:36
8
1de20ebca032
2.173585
2
0
0
An awesome feature full of data science and big data.
5
AMAZON’S X-RAY FEATURE An awesome feature full of data science and big data. I’ve been watching Amazon Prime Video lately and I’ve noticed the X-Ray feature. Every time you press pause, your display gets a bit darker and you get information about what is currently shown on the screen. This way you can see the actors that are currently on screen, or that have been on screen for the last minute or so. You also get information about the songs that are played, and so on. This is very interesting, because this is a data science feature. Why is Amazon doing this? Amazon Is Doing This To Increase Sales. When you watch a movie or a TV show and you think: I like that song! You’ll press pause, have direct access to the song, and you can go and buy that song or listen to it on Prime Music. Amazon is keeping you on their platform and making the extra sales. How They Are Doing It They're using deep learning to analyze everything that is uploaded: to find out what is being spoken, and which actors are on screen by recognizing their faces. Then they link to the actors' pages with other shows or other movies. Another thing is that they analyze the sound as well. The algorithms look for music in the video, trying to match sound patterns with songs. Then the system links the songs up to Amazon Music. Here’s An Example I recently watched The Night Manager with Hugh Laurie and Tom Hiddleston. When you press pause you can click on Hugh Laurie. Then you get forwarded to other shows with him. But not all of those are included in the free features. If you'd like to watch them you need to rent or buy them. It's a very nice feature to get customers to make additional purchases. It’s All Automated The cool thing is nobody needs to do anything. Amazon uploads the content and the analytics run in the background. It is, and has to be, all automated. Because there is so much content, manual intervention is not possible. You cannot deliver such a feature with manual labor, like someone going into the system and manually tagging every new movie or TV series that is uploaded. You also can't hand-tune the algorithms for each specific type of content. YouTube Does This As Well It's the same with YouTube. When you upload videos to YouTube, the subtitles get created automatically. They are not always perfect, but it's the same thing. YouTube is not only creating subtitles on the fly (speech to text); they're also analyzing the music. Most of this, I guess, comes from making sure there is no copyright infringement in the video. What you get are some categories like slow music or upbeat music. The X-Ray feature is very interesting. Check it out sometime. Want to know more about how the tech works? Check out this video: https://youtu.be/jTMqHNlJ4To Say Hello On: Instagram | LinkedIn | Facebook | Twitter | YouTube | Snapchat Subscribe to my newsletter: Here
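As a purely speculative sketch (Amazon's actual pipeline is not public), face matching of the kind described above can be prototyped with the open-source face_recognition library; the image files and reference photos here are hypothetical.

```python
# Sketch: match faces in a video frame against known actor photos.
# This is NOT Amazon's implementation; file names are invented.
import face_recognition

known = {
    "Hugh Laurie": face_recognition.face_encodings(
        face_recognition.load_image_file("hugh_laurie.jpg"))[0],
    "Tom Hiddleston": face_recognition.face_encodings(
        face_recognition.load_image_file("tom_hiddleston.jpg"))[0],
}

frame = face_recognition.load_image_file("frame_00451.jpg")
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(list(known.values()), encoding)
    names = [name for name, hit in zip(known, matches) if hit]
    print("On screen:", names or ["unknown"])
```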
AMAZON’S X-RAY FEATURE
2
ive-been-watching-amazon-prime-video-lately-and-i-ve-noticed-the-x-ray-feature-1de20ebca032
2018-01-03
2018-01-03 20:50:37
https://medium.com/s/story/ive-been-watching-amazon-prime-video-lately-and-i-ve-noticed-the-x-ray-feature-1de20ebca032
false
523
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Andreas Kretz
Big data professional and data science adventurer. I show you real world applications and talk about tools & techniques.
f28a432ce853
andreaskayy
154
138
20,181,104
null
null
null
null
null
null