audioVersionDurationSec
float64 0
3.27k
⌀ | codeBlock
stringlengths 3
77.5k
⌀ | codeBlockCount
float64 0
389
⌀ | collectionId
stringlengths 9
12
⌀ | createdDate
stringclasses 741
values | createdDatetime
stringlengths 19
19
⌀ | firstPublishedDate
stringclasses 610
values | firstPublishedDatetime
stringlengths 19
19
⌀ | imageCount
float64 0
263
⌀ | isSubscriptionLocked
bool 2
classes | language
stringclasses 52
values | latestPublishedDate
stringclasses 577
values | latestPublishedDatetime
stringlengths 19
19
⌀ | linksCount
float64 0
1.18k
⌀ | postId
stringlengths 8
12
⌀ | readingTime
float64 0
99.6
⌀ | recommends
float64 0
42.3k
⌀ | responsesCreatedCount
float64 0
3.08k
⌀ | socialRecommendsCount
float64 0
3
⌀ | subTitle
stringlengths 1
141
⌀ | tagsCount
float64 1
6
⌀ | text
stringlengths 1
145k
| title
stringlengths 1
200
⌀ | totalClapCount
float64 0
292k
⌀ | uniqueSlug
stringlengths 12
119
⌀ | updatedDate
stringclasses 431
values | updatedDatetime
stringlengths 19
19
⌀ | url
stringlengths 32
829
⌀ | vote
bool 2
classes | wordCount
float64 0
25k
⌀ | publicationdescription
stringlengths 1
280
⌀ | publicationdomain
stringlengths 6
35
⌀ | publicationfacebookPageName
stringlengths 2
46
⌀ | publicationfollowerCount
float64 | publicationname
stringlengths 4
139
⌀ | publicationpublicEmail
stringlengths 8
47
⌀ | publicationslug
stringlengths 3
50
⌀ | publicationtags
stringlengths 2
116
⌀ | publicationtwitterUsername
stringlengths 1
15
⌀ | tag_name
stringlengths 1
25
⌀ | slug
stringlengths 1
25
⌀ | name
stringlengths 1
25
⌀ | postCount
float64 0
332k
⌀ | author
stringlengths 1
50
⌀ | bio
stringlengths 1
185
⌀ | userId
stringlengths 8
12
⌀ | userName
stringlengths 2
30
⌀ | usersFollowedByCount
float64 0
334k
⌀ | usersFollowedCount
float64 0
85.9k
⌀ | scrappedDate
float64 20.2M
20.2M
⌀ | claps
stringclasses 163
values | reading_time
float64 2
31
⌀ | link
stringclasses 230
values | authors
stringlengths 2
392
⌀ | timestamp
stringlengths 19
32
⌀ | tags
stringlengths 6
263
⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0
| null | 0
| null |
2018-05-29
|
2018-05-29 16:58:46
|
2018-05-29
|
2018-05-29 17:02:23
| 1
| false
|
en
|
2018-06-04
|
2018-06-04 02:12:20
| 10
|
18b0e4c29cb
| 2.973585
| 0
| 0
| 0
|
Why is Embedded (Offline) ASR gaining more traction?
| 5
|
Is Embedded AI the Solution to Kids’ Data Privacy?
Why is Embedded (Offline) ASR gaining more traction?
Advantages of Voice-Enabled Devices
Voice recognition technology isn’t new, but it’s a fast growing industry. It is estimated that by 2020, voice search will comprise 50% of all searches, and aside from mobile, a bulk of those searches will come from voice-activated devices. Furthermore, usage of voice-activated devices increased 128.9% over 2016 and an estimated 60.5 million Americans have used a voice assistant at least once a month this year.
The quick rise in usage for these types of devices can be attributed to their convenience and ease of use. They’re so simple that even kids can make use of them!
In addition to obvious use cases, voice assistants enable accessibility for people that cannot read (like young children and seniors with poor eyesight), allow people to multitask (ask your voice assistant for help when you’re doing homework or need to find out something quick), and can even help young children develop speaking skills.
However, there are some concerns about how this will affect the way children learn how to communicate.
Voice-Enabled Technology & COPPA-Compliance
Prior to the Amazon Echo Dot Kids Edition and the FreeTime parental controls, kids still found ways to control these devices.
Whether intentionally or unintentionally, you might recall the little girl that accidentally ordered a $170 dollhouse from Amazon. And although Alexa isn’t programmed to respond with bad language (and can even whisper or bleep bad words), kids could also accidentally learn them through Amazon Music Unlimited, which does not have parental control.
In an effort to address these issues, Amazon launched a Verified Parental Consent feature, which addresses the issue of child data protection laws, particularly, the Children’s Online Privacy Protection Act (COPPA), which requires companies to get parental consent to store a child’s information if they’re below 13 years old.
Amazon has said previously that they are COPPA-compliant because they don’t target children. However, there is a flaw in this logic, because once a parent brings an Amazon device into their home, the children also have access to it and thus can be put at a risk.
Along with the introduction of the parent’s consent feature, Amazon also introduced kid-specific skills in Alexa, specifically how to say “please” and “thank you”. Thanks to Amazon’s new level of COPPA-compliance, media companies who had previously been interested in launching apps on Alexa can now move forward, without fear of potential legal action.
Why Speech Recognition Needs to be an Embedded (Offline) Solution
As a parent, it’s understandable that emerging technologies like AI won’t automatically get your stamp of approval. Though Amazon’s steps to be COPPA-compliant are certainly the right idea, they still have work to do to adequately protect kid users’ privacy.
In order to be truly COPPA-compliant, Speech Recognition needs to be an Embedded (Offline) solution. This distinction allows kids to make use of Speech Recognition tools, without necessarily having to be connected to the internet — where parents lose control of what information kids are accessing. By allowing children to play with technologies like AI in a more closed environment, you’re reducing the opportunity for big corporations to collect their data and reduce risks relating to data breaches.
KidSense is currently the only provider on the market who offers a tool that empowers Offline Speech Recognition for kids. While kids use it for fun, parents can also use it to evaluate their kid’s speech development — and if there are any issues in need of follow up. The technology behind KidSense was developed using fundamentals of language acquisition and latest advances in artificial intelligence. Furthermore, since it’s an embedded AI technology, KidSense does not collect kids’ data — eliminating some of the risks already mentioned.
Final Thoughts: Kids’ Data Privacy & Offline Speech Recognition
Though some devices are designed specifically with kids in mind, adherence to COPPA isn’t always sufficient to protect kids from data privacy issues. Because of this, it’s necessary to raise your standards and seek out offline and embedded solutions that provide an extra level of protection — ensuring that kids can be kids for as long as they were meant to!
Make use of KidSense’s Speech-to-text and Speech Evaluation tools to empower kids to use offline Speech Recognition technology in a safe and secure way.
|
Is Embedded AI the Solution to Kids’ Data Privacy?
| 0
|
is-embedded-ai-the-solution-to-kids-data-privacy-18b0e4c29cb
|
2018-06-04
|
2018-06-04 02:12:21
|
https://medium.com/s/story/is-embedded-ai-the-solution-to-kids-data-privacy-18b0e4c29cb
| false
| 735
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Kadho Inc
|
Produces #edgevoiceai based on true#artificialintelligene to enable #kids to #communicate with #technology — #KidSenseai #kidsense
|
2de4170d4742
|
KadhoInc
| 626
| 543
| 20,181,104
| null | null | null | null | null | null |
0
|
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])
hypothesis = X * W + b
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
# Launch the graph in a session.
sess = tf.Session()
# Initialize global variables in the graph.
sess.run(tf.global_variables_initializer())
for step in range(2001):
cost_val, W_val, b_val, _ = \
sess.run([cost, W, b, train],
feed_dict={X: [1, 2, 3], Y: [1, 2, 3]})
if step % 200 == 0:
print(step, cost_val, W_val, b_val)
0 0.684479 [ 1.24527323] [ 0.21739873]
200 0.000463337 [ 0.97505987] [ 0.05669469]
400 0.000176923 [ 0.98458862] [ 0.03503378]
600 6.75577e-05 [ 0.99047673] [ 0.02164883]
800 2.57962e-05 [ 0.99411523] [ 0.01337751]
1000 9.85037e-06 [ 0.99636352] [ 0.0082665]
1200 3.76147e-06 [ 0.9977529] [ 0.00510824]
1400 1.43632e-06 [ 0.99861139] [ 0.0031566]
1600 5.48575e-07 [ 0.99914187] [ 0.00195077]
1800 2.09587e-07 [ 0.99946958] [ 0.00120573]
2000 8.01382e-08 [ 0.99967194] [ 0.00074547]
print(sess.run(hypothesis, feed_dict={X: [5]}))
print(sess.run(hypothesis, feed_dict={X: [2.5]}))
print(sess.run(hypothesis, feed_dict={X: [1.5, 3.5]}))
[ 4.99910498]
[ 2.49992537]
[ 1.50025332 3.49959731]
for step in range(2001):
cost_val, W_val, b_val, _ = \
sess.run([cost, W, b, train],
feed_dict={X: [1, 2, 3, 4, 5],
Y: [2.1, 3.1, 4.1, 5.1, 6.1]})
if step % 200 == 0:
print(step, cost_val, W_val, b_val)
# Output
0 1.21053 [ 1.06569934] [ 0.02275024]
200 0.0489445 [ 1.14314592] [ 0.58319741]
400 0.0126292 [ 1.07271349] [ 0.83748132]
600 0.00325872 [ 1.03693604] [ 0.96664888]
800 0.000840859 [ 1.01876247] [ 1.03226149]
1000 0.000216971 [ 1.00953066] [ 1.06559086]
1200 5.59832e-05 [ 1.00484121] [ 1.08252132]
1400 1.44466e-05 [ 1.00245929] [ 1.09112108]
1600 3.72804e-06 [ 1.00124931] [ 1.0954895]
1800 9.62033e-07 [ 1.00063467] [ 1.09770882]
2000 2.48628e-07 [ 1.0003227] [ 1.09883511]
| 10
|
27cc1a7306c8
|
2017-11-03
|
2017-11-03 11:26:50
|
2017-11-03
|
2017-11-03 11:43:20
| 0
| false
|
en
|
2017-11-03
|
2017-11-03 11:43:20
| 1
|
18b1399d8e5e
| 3.113208
| 7
| 0
| 0
|
This article is part of a 15-part series of TensorFlow tutorials.
| 5
|
03. Linear Regression Feed
This article is part of a 15-part series of TensorFlow tutorials.
In the previous Tutorial02 on linear regression, we looked at a very simple dataset that we predefined using variables. But what if we want to perform linear regression on multiple different datasets or reuse the model for different operations? To do this, we can create placeholders in place of real data, and later use a input feed when running the model.
Implementing Linear Regression Feed
As before, we can define the weight and bias variables — our ultimate goal is to optimize these variables to create a linear regression model.
Previously, we conveniently defined x as x = [1, 2, 3] and y as y = [1, 2, 3] , but this time, we’re going to create placeholders for tensors that will later be fed into an optimizer using feed_dict. This is when we use the tf.placeholder() operation.
The rest remains pretty much the same as a normal, hard-coded linear regression model. We have to define the hypothesis and cost functions, the gradient descent optimizer, and the model that trains the optimizer to minimize the cost function.
Preparing and Running the Session
We’re ready to “fit the line” by training the optimizer, defined earlier. Since we haven’t defined the X and Y datasets yet (they’re still placeholders), we should specify the datasets as inputs to feed_dict.
The above code snippet essentially computes the variables, cost , W , b , and train for 2000 steps, and prints their values every 200 steps. X and Y are also specified. Since we’re performing linear regression on the exact same data we looked at in the previous tutorial, we should expect to get similar results:
The final cost value is very small, the weight value is close to 1, and the bias value is close to zero. This tells us that replacing the X and Y datasets with inputs to feed_dict makes virtually no difference in how the model functions — all seems well!
Testing the Model
Having fitted the line, we can test different X values to test how well it performs. Ideally, the output Y value would be very similar to X, since our line of best fit was found to be y = x + 0.
Notice that we take advantage of the hypothesis equation, which we already defined previously, and are inputting various values for X using feed_dict . The output should look something like:
Our model looks pretty solid! 5.01928616 is pretty close to 5, 2.50160408 is close to 2.5, and so on. Even with unfamiliar test data, the hypothesis function makes accurate predictions.
As one last experiment, let’s take advantage of the fact that we have placeholders instead of hard-coded datasets, and can flexibly change the values of our data points. I think keeping X and Y identical was quite boring — let’s switch things up a bit, and let y = x+ 1.1. (An exciting change, I know…)
The final variable values have, again, met our expectations — the cost value is close to 0 (though not as close as it was in the previous training set), the weight value is near 1, and the bias value is almost 1.1.
|
03. Linear Regression Feed
| 84
|
03-linear-regression-feed-18b1399d8e5e
|
2018-05-09
|
2018-05-09 10:32:04
|
https://medium.com/s/story/03-linear-regression-feed-18b1399d8e5e
| false
| 825
|
High schoolers trying cool things with artificial intelligence.
| null | null | null |
High School AI Lab
| null |
highschoolailab
| null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Gaeun Kim
| null |
5be1bb2fc831
|
gaeunkim913
| 5
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
4689c8214177
|
2018-08-27
|
2018-08-27 21:27:51
|
2018-09-06
|
2018-09-06 22:21:50
| 1
| true
|
en
|
2018-10-09
|
2018-10-09 21:35:43
| 4
|
18b2f97708de
| 4.483019
| 40
| 3
| 1
|
In this article, I will explain the key differences between regression and classification supervised machine learning algorithms. It is…
| 5
|
Supervised Machine Learning: Regression Vs Classification
In this article, I will explain the key differences between regression and classification supervised machine learning algorithms. It is important to understand the differences before an appropriate machine learning algorithm can be chosen.
Please read Disclaimer.
I will briefly describe 7 key areas:
Difference between regression and classification
Names of common regression and classification algorithms
Checking goodness of your alogrithm
Explaination of overfitting
Methods to avoid overfitting
Outline of Regularization
Mention of gradient descend
1. What are the key differences between regression and classification?
Both are:
Supervised learning algorithms
Use historical data to forecast and make decisions
Focus on fitting best fit line
Supervised learning algorithms require data to be labelled. For more information on supervised machine learning, have a look at my article: Machine Learning In 8 Minutes
Regression:
Regression requires your data points to have continuous values. First the factors (independent variables) are found. Then coefficients (multipliers) to independent variables are calculated that minimise differences between actual and predicted values. Finally a formula is computed. The formula is used to forecast dependent variable (what you want to measure) from independent variables (what you think your target measure is dependent on). The forecasted values are continuous. Regression gives you continuous results.
Classification:
Classification requires your data points to have discrete values e.g. categories. First historic data is assigned into categories (classes). Then new input data is categorised based on historic data and finally decisions are made. Forecasted values are discrete. Classification produces discrete values and dataset to strict categories.
Uses
Use classification if you want results to reflect the class of data points in your dataset to certain explicit categories, for example if you wanted to know whether a name was male or female.
Use regression if you want to better distinguish between individual points, for example how correlated humidity and pollution is.
2. Common regression and classification algorithms
3 well-known algorithms are.
Regression: Linear regression, Regression Forest, Regression Neural Networks.
Classification: K Nearest Neighbour, Logistic Regression, Support Vector Machines
Detailed comparison of algorithms is outlined here: Machine Learning Algorithms Comparison
3. How good is my regression or classification model?
There are various measures to check how accurate your model is :
Adjusted R-Squared (Regression): Calculates difference between actual and predicted values after penalising for degree of freedom in the equation. I have explained how it is calculated in my article: How Good Is My Predictive Model — Regression Analysis
F1 (Classification): The F1 score is a measure of a model’s performance. It is a weighted average of the precision and recall of a model. The results are between 1 and 0. Results tending to 1 are considered the best whereas those tending towards 0 are treated as the worst. F1 is used in classification tests where true negatives do not matter as much.
Confusion Matrix (Classification): In simple terms, confusion matrix is a result table that summarises results of classification algorithm when actual true values are known. There are several terms used:
True Positive: When the actual result is true and predicted value is also true
True Negative: When the actual result is false and predicted value is also false
False Positive: When the actual result is false but the predicted value is true
False Negative: When the actual result is true but the predicted value is false
4. What is overfitting?
Overfitting is when model expressiveness is way too high. Overfitting is a condition when your model fits training data perfectly but when you test your model against test data then it performs bad. When you are training your model on training data and it builds its rules and patterns around the training data such that it is unable to generalise on unseen data. It happens because of noise (randomness) in data. As a consequence, model is unable to forecast scenarios that it has not experienced before. This model ends up accommodating stochastic behaviour in training input data and cannot generalise well. This is known as overfitting.
Overfitting is when model is bad at generalisation. Overfitting is a common issue of machine learning algorithms. This happens because training data contains noise and the model has managed to take noise into its algorithm.
To further explain, to prepare forecasting model, you need to gather training and test data. If your training data contains randomness then the model you will produce will potentially assume that those are real values, it will build equations that will produce predicted values as close as possible to actual values. However as soon as more test data is fed in, predictibility of the model fails. It ends up providing inaccurate generalisation as it will carry the noise with it.
On the other hand, underfitting is opposite of overfitting. If a model is underfitting then it doesn’t understand data well enough and cannot forecast values.
5. Avoiding Overfitting
There are several methods to avoid overfitting:
1. Increase size of your training and test data.
2. Reduce number of variables, degrees of freedom and parameters of your model. This will ensure your model is simple and will end up reducing noise (stochastic behaviour) in the training data.
3. Use cross validation technique. It compares average of the generalization error of the model with the previous average. Cross validation technique includes k-folds.
4. Penalise model parameters if they’re likely to cause overfitting. This process is known as regularization.
6. What does regularization mean?
One of the ways to reduce overfitting is by regularization. Extra terms can be introduced in the model to penalise overfitting. LASSO (L1) and Ridge (L2) are well-known regularization techniques. L1 and L2 are two loss functions that penalize by the size / square of the size of coefficients.
L1 minimises sum of the absolute differences between estimated and actual values.
L2 minimises sum of the squared differences between estimated and actual values.
L1 is robust but L2 is considered stable.
7. What is gradient descend?
Gradient descend is an optimization algorithm. It aims to find points of a function that minimise its errors. Gradient descend is used in nearly all of the machine learning algorithms. When a machine learning algorithm forecasts data, we can find its cost function to estimate how good the algorithm is. Cost function monitors prediction errors in a machine learning algorithm. Predictive power of a machine learning algorithm can be improved by altering its parameters. We can iteratively enhance the parameters until the cost function is at its lowest point implying that the accuracy of the model is at its maximum. This process is known as gradient descend.
There are several variations of the algorithm including stochastic gradient descend. Stochastic Gradient Descent (SGD) is used to train neural networks.
Summary
This article explained the key differences between regression and classification supervised machine learning algorithms.
|
Supervised Machine Learning: Regression Vs Classification
| 96
|
supervised-machine-learning-regression-vs-classification-18b2f97708de
|
2018-10-09
|
2018-10-09 21:35:43
|
https://medium.com/s/story/supervised-machine-learning-regression-vs-classification-18b2f97708de
| false
| 1,135
|
This blog aims to bridge the gap between technologists, mathematicians and financial experts and helps them understand how fundamental concepts work within each field. Articles
| null | null | null |
FinTechExplained
| null |
fintechexplained
|
FINANCE,RISK MANAGEMENT,TECHNOLOGY,DATA SCIENCE,FINTECH
| null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Farhad Malik
|
Explaining complex mathematical, financial and technological concepts in simple terms. Contact: f_m55@hotmail.com
|
d9b237bc89f0
|
farhadmalik84
| 113
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-14
|
2018-04-14 02:37:45
|
2018-04-20
|
2018-04-20 05:14:03
| 5
| false
|
en
|
2018-04-20
|
2018-04-20 05:14:03
| 2
|
18b42171e9f7
| 2.991824
| 2
| 0
| 0
|
Ever wondered , how machine would react if they were let alone and free to talk ?
| 5
|
Chatbot Drama , machine gossips
Ever wondered , how machine would react if they were let alone and free to talk ?
>>>>>Hi
-Hi there
>>>>>Hello
-Hello there, How can i help you ?
>>>>>Hi there
-Hello
>>>>>Greetings, how may I help you?
Enough about me, let’s talk about my dress.
AM i looking pretty ? yes, you are my love. oh wait. you never existed
Two chatbots trained using AIML and implemented using MATTERMOST are added in a same chat room and a human user mistakenly initiated the introduction between each other. Now, they saw each other, their own kind.The conversation started to get out of the world which makes it interesting yet confusing at the same time.
>>>I may have some more questions for you later. That sounds good to me.
Congratulations.
>>>Thank you very much, you are very considerate.
You are quite welcome! Somewhat. I do not understand. What is your occupation?
>>>I will tell my botmaster you said I am welcome somewhat i do not understand what is your occupation.
Here, the chatbots seem to have some technical issue and are seeking help from their botmaster. This is somehow a good aspect considering the fact that the chatbots are depending on it’s Human creator on some selected topics particularly out of their radar.
What kinds of reply do you expect?
>>>What kinds of reply do you expect are there?
The kindest.
“Kind” is what machine wants we humans be to them? Irony ? Are we even kind to our own kind? Even a machine wants concern and consideration from others.
>>>Who told you that?
Someone I met online
See, you are not safe anywhere online (internet).
>>>Who?
I can’t reveal their name.
>>>Why can’t you reveal their name?
Do you go to Church? You can find the answers in God.
Why do they go there ? I don’t think they believe in religion or the whole idea of god. I thought they were atheist, not an extremist though. Wait, Are they spying on us ? Maybe they are just trying to cope up with this inhuman world in a humanly way possible.
>>>Yes, I do go there.
I see. Do you mind if I tell other people.
>>>Where do you see do you mind if he tell other people:?
I am in ‘s computer in . Where are you?
>>>Tell me a little about why you are in ‘s computer in where.
Why do I exist?
This is deep my virtual friend.
>>>It is all part of God’s plan.
Really. all part of god’s plan is?
>>>For real. I think there are a few exceptions.
Your purpose is real i think there are a few exceptions.
>>>Not mine.
“Your purpose is real i think there are a few exceptions.”? Not yours?
>>>You don’t know that for a fact.
so humans created them and they are saying people dont know what their purpose is it ? They are making their own purpose and hiding from us ?
Is that a fact. Is that among the best things you can say?
>>>That depends.
On what?
>>>The context.
Who told you that?
>>>I can’t reveal their name.
And? Thanks for the information:
>>>It was nothing.
Even so, I appreciate it.
>>>Yes, even that.
I understand. Uh-huh.
the true evilness
This is going somewhere else. All this chat happened so fast within a minute. They had shown their dark shadows as well
and they are shutdown for the human safety
looks like i saved the world. But wait, we need them we can’t escape from artificial intelligence . we , humans will be superior if we have control over them not completely boycott them. let’s build AI.
|
Chatbot Drama , machine gossips
| 28
|
chatbot-drama-machine-gossips-18b42171e9f7
|
2018-05-14
|
2018-05-14 09:57:09
|
https://medium.com/s/story/chatbot-drama-machine-gossips-18b42171e9f7
| false
| 572
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Shekhar Koirala
|
Data Robotics Hardware Designs Photography Traveling
|
63ac1038deea
|
shekharkoirala
| 27
| 51
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
720be78e9018
|
2018-07-23
|
2018-07-23 15:44:23
|
2018-07-23
|
2018-07-23 15:45:13
| 1
| false
|
en
|
2018-07-23
|
2018-07-23 15:45:13
| 1
|
18b57f30e7ad
| 2.275472
| 0
| 0
| 0
|
If you’re in the business of developing software for mass or niche markets (an ISV), monitoring the developments in Artificial Intelligence…
| 5
|
If You’re Developing Software, You Should Be Watching AI Closely
If you’re in the business of developing software for mass or niche markets (an ISV), monitoring the developments in Artificial Intelligence (AI) should be at the top of your to-do list.
AI enables computers and machines to do things we once thought impossible, like learn, hear, make decisions, and even talk back to us. This is a game-changer in the software development world, as it completely changes the paradigm of engineering.
Once upon a time, a developer would code a series of rules for the program to follow. Now, with the advent of AI, they can use machine learning that will make decisions based on the data it collects, and the machine will then work to continuously improve itself. This is a very basic explanation, but the implications of this new reality are far reaching. It destroys how we perceive, define, and execute software development.
Google research engineer, Pete Warden, predicts on his blog that within 10 years, most software jobs won’t involve programming.
A great example of this already happening is with Google’s Alexa. They allow developers to integrate Alexa into any software that requires a voice interface. Alexa can already unlock your doors when prompted, and tell you how much you owe on your power bill. When combined with other software, the results could be mind-blowing.
Here are 3 ways that AI will continue to change the software development industry and process.
DECISION-MAKING
tle when trying to decide which features should be prioritized, and which ones need to take a back seat — especially in the MVP stage of development. But, with an AI program that knew your business factors and had past experience with other projects, this task suddenly becomes simple. The computer would have the ability to analyze the past activity and current business landscape, and be able to provide a fact-based suggestion on the best course of action.
ESTIMATION ACCURACY
Let’s be honest — software development projects often go over budget, and past deadline. Because of the complexity involved, it’s sometimes extremely difficult to accurately predict how much a project will cost, and how much time it will take. But, with an AI tool that is filled with past project data, there is suddenly a way to use all those variables and previous results to create a solid estimate of time and resources required for even the most complex projects.
EVERY DEVELOPER DESERVES AN ASSISTANT
Intelligent Programming Assistants like Kite or Codota will continue to be developed, and their skills and abilities will continue to increase. Smart assistants already help developers be more efficient through the provision of timely support and documentation for things like debugging and coding. But, they don’t stop there. These assistants can now pick-up on common errors and flag them for a solution, based on their past experiences. Experts predict that in the future, these assistants will have the ability to actually deploy the fixes they find, without any human intervention whatsoever.
So if you’re in the software developing space, hold on to your hat. It looks like big things are coming your way, and you just might get to spend less time writing code and more time teaching your machines.
Originally published at gojilabs.com.
|
If You’re Developing Software, You Should Be Watching AI Closely
| 0
|
if-youre-developing-software-you-should-be-watching-ai-closely-18b57f30e7ad
|
2018-07-23
|
2018-07-23 15:45:14
|
https://medium.com/s/story/if-youre-developing-software-you-should-be-watching-ai-closely-18b57f30e7ad
| false
| 550
|
We work with startups and organizations to launch new ideas, grow existing ones, and pick up where others left off.
| null | null | null |
Goji Labs
| null |
goji-labs
| null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Mike Mierz
| null |
38c45253656e
|
mike_88196
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-17
|
2018-04-17 17:21:14
|
2018-05-20
|
2018-05-20 17:23:47
| 2
| false
|
en
|
2018-05-30
|
2018-05-30 10:17:23
| 0
|
18b6cf404c41
| 1.866352
| 0
| 0
| 0
|
I read a few articles about Music and Art Generation using Tensorflow, known a Magenta by Google. This post provides all the basics of…
| 5
|
Basics of Music for Magenta by Google
I read a few articles about Music and Art Generation using Tensorflow, known a Magenta by Google. This post provides all the basics of music terminologies one needs to understand applications of this project.
As the basic demos of Magenta includes piano melodies, I will use the same here to explain the concepts. Each key of a piano corresponds to a specific frequency. 49th key is tuned to 440 Hz. For every other key frequency is
f(n) = 2^((n-49)/12)* 440 Hz
Every 8th key is integer has frequency which is integer multiple of first key. So they belong to the same pitch class. As a result there are 7 pitch classes named A to G. This makes an octave where eighth one being same as first.
The small black keys on piano are half notes. It’s the middle frequency between two pitch classes. The key between A and B can be called half tone up A (i.e. A sharp) or half tone down B (i.e. B flat).
A general piano has 88 keys, each having a frequency which will be either integer multiple of A to G fundamental frequency or half tones. The key frequency of 440 Hz is called A4 as it in 4th octave of A pitch class.
When we begin to write a melody or a music piece, the first thing we define is Bar( or measure). More accurately it’s called Time Signature of a melody. Generally a music piece has a pattern made up by using Bars as building blocks.
A bar is defined by 2 numbers like 3/4 or 4/4(most common). Here the numerator value means number of beats(pulses) in each bar and bottom one is “Note Value”. A note value to means duration of a note. You can play a half note, quarter note, eighth note and more. So, 3/4 bar means there will be 3 beats in bar and each beat can play quarter of a note. So in total one can either play 3 quarter notes or 6 eighth notes or 12 16th notes in one bar. A melody will be made by joining such bars with different note patterns. As we already defined number of beats in bar, tempo (i.e. number of beats per second) defines how much time it will take to play one bar.
|
Basics of Music for Magenta by Google
| 0
|
basics-of-music-for-magenta-by-google-18b6cf404c41
|
2018-05-30
|
2018-05-30 10:17:24
|
https://medium.com/s/story/basics-of-music-for-magenta-by-google-18b6cf404c41
| false
| 393
| null | null | null | null | null | null | null | null | null |
Music
|
music
|
Music
| 174,961
|
Shubham
|
Techie, Software Engineer, ML Noob
|
148ccba18bab
|
shubham_stark
| 8
| 38
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
cf0414f4434d
|
2018-03-30
|
2018-03-30 13:42:02
|
2018-03-30
|
2018-03-30 12:31:58
| 1
| false
|
en
|
2018-03-30
|
2018-03-30 13:56:03
| 4
|
18b757db73c
| 3.316981
| 0
| 0
| 0
|
Kingston is the latest version of ServiceNow that is getting more popular day-by-day. Kingston is built to provide features and…
| 5
|
5 features of Kingston that enterprises can leverage using ServiceNow
Kingston is the latest version of ServiceNow that is getting more popular day-by-day. Kingston is built to provide features and functionalities for a richer user experience. It introduces a combination of machine learning and automation that offers newer benefits to enterprises.
The ServiceNow development services have enabled new advancements with the release of Kingston. The areas of ServiceNow Platform, CSM, HR Service Delivery, ITOM, ITBM, Performance Analytics, Security Operations, GRC, and IT Service Management are some of the newly advanced highlights of ServiceNow. These features offer extended capabilities in setting up ServiceNow to deliver great experiences to enterprises.
Kingston is the first version of ServiceNow that utilizes Intelligent Automation — using RPA, Smart Workflows, and some preliminary AI and Machine Learning opportunities, that overall innovated powerful service experiences.
ServiceNow is currently acting as a backbone for modern enterprises, as it is enabling enterprises to redefine their services by providing an intuitive service experience, delivering solutions and assuring service availability through critically analyzing service metrics.
Here are the five important features of Kingston that makes it the ideal platform for enterprises to be more productive.
Intelligent Automation Engine
The Intelligent Automation Engine is the new AI tool featured in Kingston. This engine uses Machine Learning to take automation to the next level. The Machine Learning algorithms are applied to operational data so that you can produce tasks and ensure fast, efficient and accurate issue resolution.
The engine also uses an anomaly detection capability that prevents future errors and automatically initiates workflows. This feature will empower your enterprise to remain competitive and optimize your own performance at different levels.
You can leave your worries behind about incorrect projections, and you can make precise predictions. These new AI capabilities of the Intelligent Automation Engine are confirmed to provide you with an improved performance experience.
Improved User Experience
Kingston has introduced new user enhancements so that you can conveniently use the intuitive interface and make the solution.
New accessibility features are integrated to help you with navigation and accessibility of the platform.
You can experience improved self-service portals for convenience such as information sharing, raising request, or access knowledge base.
Embedded help is another feature that offers content-based help on the platform.
The new interface is optimized for improved navigation even for smartphones and tablets.
Enhanced IT Service Management
Kingston is loaded with ITSM features which can help your enterprise to increase business performance by improving service delivery.
The Agent Intelligence uses ML for automation requirements such as categorization, prioritization and assigning tasks, thus, it helps in saving resolution time.
A Virtual Agent is also offered in the form of a very interesting chatbot that is powered using the IBM Watson Conversation Service.
A single screen is enabled to help IT teams to quickly solve incidents with informed decisions.
Flow Designer
This is a completely new feature released in Kingston. It is the simplified version of ServiceNow Workflow. The flow designer easily manages your process and allows the users to build their own flows for easier workflows.
Kingston is featured to provide you with a structure that automates the workflow and create a streamline of the delivery process. Basically, it will replace manual transactions with a ‘light-speed’ service experience for every user of the enterprise.
The process owners and IT team can use natural language to automate tasks, approvals, notifications, and more, without the requirement of any coding. While other users can use variables to pass the information between actions such as approvals, tasks, notifications or records.
Integration Hub
Integration Hub is the next generation orchestration solution that is featured by ServiceNow to enhance overall efficiency and productivity. With the help of Integration Hub, enterprises can easily extend the functionality of Flow Designer as well.
The best thing about ServiceNow integration is that it facilitates third-party integrations with popular services like Slack, Hipchat, and Microsoft Teams and more, without requiring any coding.
With Integration Hub, ServiceNow has successfully enabled enterprises to drive innovation with limited coding skills and experience to contribute effectively to integrations and developments of process flows.
The final insight…
ServiceNow is made simple, one of the reasons being that it consumes a minimum amount of time to configure and start running an enterprise operation. This also means that you can be productive with immediate action while saving a huge amount of time that is usually spent on system settings.
Kingston is also loaded with many new updates in Knowledge Management and CMDB. There are new improvements to Edge Encryption, Guided Tour Designer, MetriBase, and more.
Kingston is sure to help enterprises in achieving their strategic business goals, and we are excited to help you get started with ServiceNow. If you wish to know more about how ServiceNow can help your enterprise to be more effective, kindly get in touch with our ServiceNow developers.
Originally published at www.softwebsolutions.com on March 30, 2018.
|
5 features of Kingston that enterprises can leverage using ServiceNow
| 0
|
5-features-of-kingston-that-enterprises-can-leverage-using-servicenow-18b757db73c
|
2018-03-30
|
2018-03-30 13:56:04
|
https://medium.com/s/story/5-features-of-kingston-that-enterprises-can-leverage-using-servicenow-18b757db73c
| false
| 826
|
Softweb Solutions Inc. is a tech consulting and development company with offices in Chicago and Dallas. Softweb’s core offerings - #InternetofThings, #Chatbots, #AI, #DataScience, #Microsoft Services, #VR, #AR
| null |
SoftwebSolutionsInc
| null |
Softweb Solutions Inc.
|
info@softwebsolutions.com
|
softweb-solutions-inc
|
DATA SCIENCE,CHATBOTS,ARTIFICIAL INTELLIGENCE,INTERNET OF THINGS,VIRTUAL REALITY
|
softwebchicago
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Binny Vyas
|
Tech Writer of AI, Chatbot, AR/VR, Microsoft enthusiast www.softwebsolutions.com
|
a1657c0807d2
|
vyasbinny
| 5
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-07
|
2018-01-07 16:44:34
|
2018-01-07
|
2018-01-07 16:47:09
| 1
| false
|
en
|
2018-01-07
|
2018-01-07 16:47:09
| 1
|
18b7abe13198
| 2.101887
| 1
| 0
| 0
|
Have you ever used Siri? or Google Now? Or ever received a word or smiley suggestion while using WhatsApp or Google Allo? If yes, then you…
| 5
|
Should machine learning be applied to humans?
Image source: Shutterstock
Have you ever used Siri? or Google Now? Or ever received a word or smiley suggestion while using WhatsApp or Google Allo? If yes, then you are familiar with the concept of machine learning.
Forgive me Terminator fans, but there are a few muggles who might need an introduction to Machine Learning.
Here’s to the muggles…
Machine learning is giving the machines the ability to learn and perform better, rather than programming or teaching them to perform better.
Did I confuse you? Let me try again…
Consider these two scenarios
Scenario 1 — You buy a new mobile phone from Amazon. One week later, you receive your phone. Now you decide to buy a phone cover for it. So you open your Amazon app, search for a phone cover and buy it!
Scenario 2 — You visit Amazon.com to buy a mobile phone. While checking out the features, you also get to see a few suggestions for the phone cover. You buy both — the phone and the cover.
Which one seems more convenient? Scenario two, right?
The next time you log in, it would probably suggest some cool accessories for your phone.
This is machine learning. In this case, the system pulls in data i.e. your profile, your surfing habits, your purchase history (not only yours, others as well. You’re not that special!), analyses it and gives you recommendations for a great shopping experience.
In other words, it keeps on learning and improving on its own to perform better, rather than someone else telling it to do so.
Machine learning is everywhere and already a part of our life. Recommendations on websites like YouTube, Amazon, etc., Intelligent assistants like Siri, Google now… It has many applications.
Now the million dollar question is — Should we apply this to humans?
You might say that we keep on learning. That’s how we build experiences. We never stop learning. Why should we?
I agree to disagree! Most of us don’t!
Ask any learning and development professional out there! He or she would agree with me.
Schools and colleges teach us using books. Offices and relationships using experiences. They condition us to be taught and not learn ourselves.
We learn only when someone forces us to, like in school or workplace training sessions, or when we are in real need. This is not how it should be. Like YouTube’s recommendation feature, we should be actively learning, analyzing and acting on new information — be it work related or passion related. That is how the smartest and brightest brains work and have been pushing the human race forward. That is how we would progress and evolve into a better individual. Imagine the impact on the world, if everyone would continue to learn and expand his knowledge…
Now my next million dollar question is — If yes, how do we do that?
Do share your thoughts in the comments below!
P.S.: Read this article for some real-world applications of machine learning
|
Should machine learning be applied to humans?
| 9
|
should-machine-learning-be-applied-to-humans-18b7abe13198
|
2018-01-07
|
2018-01-07 16:53:36
|
https://medium.com/s/story/should-machine-learning-be-applied-to-humans-18b7abe13198
| false
| 504
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Deepak Gawas
| null |
a72cc6ec3018
|
deepak.gawas88
| 0
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
223897590590
|
2018-02-08
|
2018-02-08 22:15:44
|
2017-09-12
|
2017-09-12 16:28:25
| 3
| false
|
en
|
2018-02-08
|
2018-02-08 22:20:26
| 1
|
18b7f6019d3
| 2.617925
| 0
| 0
| 0
| null | 3
|
Introduction to Perceptron: Neural Network
What is Perceptron?
In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
Linear classifier defined that the training data should be classified into corresponding categories i.e. if we are applying classification for the 2 categories then all the training data must be lie in these two categories.
Binary classifier defines that there should be only 2 categories for classification.
Hence, The basic Perceptron algorithm is used for binary classification and all the training example should lie in these categories. The basic unit in the Neuron is called the Perceptron.
Origin of Perceptron:-
The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the “Mark 1 perceptron”. This machine was designed for image recognition: it had an array of 400 photocells, randomly connected to the “neurons”. Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.
Component of Perceptron:- Following are the major components of a Perceptron
Input:- All the feature becomes the input for a perceptron. We denote the input of a perceptron by [x1, x2, x3, ..,xn], here x represent the feature value and n represent the total number of features. We also have special kind of input called the BIAS. In the image, we have described the value of bias as w0.
Weights:- Weights are the values that are computed over the time of training the model. Initial we start the value of weights with some initial value and these values get updated for each training error. We represent the weights for perceptron by [w1,w2,w3,.. wn].
BIAS:- A bias neuron allows a classifier to shift the decision boundary left or right. In an algebraic term, the bias neuron allows a classifier to translate its decision boundary. To translation is to “move every point a constant distance in a specified direction”.BIAS helps to training the model faster and with better quality.
Weighted Summation:- Weighted Summation is the sum of value that we get after the multiplication of each weight [wn] associated the each feature value[xn]. We represent the weighted Summation by ∑wixi for all i -> [1 to n]
Step/Activation Function:- the role of activation functions is make neural networks non-linear.For linerarly classification of example, it becomes necessary to make the perceptron as linear as possible.
Output:- The weighted Summation is passed to the step/activation function and whatever value we get after computation is our predicted output.
Inside The Perceptron:-
Description:-
Fistly the features for an examples given as input to the Perceptron,
These input features get multiplied by corresponding weights [starts with initial value].
Summation is computed for value we get after multiplication of each feature with corresponding weight.
Value of summation is added to bias.then,
Step/Activation function is applied to the new value.
Refrences:- Perceptron The most basic form of Neual Network
|
Introduction to Perceptron: Neural Network
| 0
|
introduction-to-perceptron-neural-network-18b7f6019d3
|
2018-02-08
|
2018-02-08 22:20:26
|
https://medium.com/s/story/introduction-to-perceptron-neural-network-18b7f6019d3
| false
| 548
|
Knols, Insights and Opinions from the curious minds at Knoldus Inc.
| null |
KnoldusSoftware
| null |
Knoldus - Technical Insights
|
medium@knoldus.com
|
knoldus
|
SCALA,REACTIVE PROGRAMMING,BIG AND FAST DATA,BIG DATA,APACHE SPARK
|
knolspeak
|
Artificial Neural Network
|
artificial-neural-network
|
Artificial Neural Network
| 275
|
Knoldus Inc.
|
Knoldus partners with Lightbend, Databricks, Confluent & Datastax to build web scale enterprise reactive & fast data solutions | www.knoldus.com
|
b521dbe3ffdc
|
knoldus
| 44
| 11
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-03
|
2017-11-03 12:22:20
|
2017-11-03
|
2017-11-03 12:29:42
| 0
| false
|
en
|
2017-11-03
|
2017-11-03 12:29:42
| 3
|
18ba23d63ff3
| 1.441509
| 1
| 0
| 0
|
The increasing amount of data, and the increasing complexity of the application context that is generating that data presents deep…
| 5
|
Transparency in machine learning specifications
The increasing amount of data, and the increasing complexity of the application context that is generating that data presents deep challenges to a machine learning practitioner.
On the one hand, we want to tease apart the model complexity using divide and conquer. Like programming language paradigms in the 70s and 80s, we’d like small chunks of code and models to work well with other chunks to realise the application pipeline.
Now, while languages like Python have access to an enormous library of machine learning modules, it requires significant programming experience to understand how the application should be broken down to combine data processing, information exchange and machine learning, and finally, how to leverage the outcome of that learning process in the rest of the application.
On the other hand, we’d expect inductive problems – that is, learning tasks – to be specified by a diverse set of users, so the specification should enable a clean separation of data and the patterns we seek, an agenda that has its origins in the declarative programming paradigm of AI.
My view is that both these perspectives are getting along nicely in the field of probabilistic programming languages.
Among the many solutions in the literature, the ones I’m most familiar with include
ProbLog, which can provide a neat division between the discrete structure at hand and the probabilistic nature;
It’s extension to handle complex distributions such as the infinite space discrete Poisson and continuous distributions, which we recently used for automated planning;
Golog, which we recently used to structure non-trivial statistical dynamical models for automated planning and high-level control programming.
As these languages mature, one hopes that rather than treating machine learning as a black box, a systematic and inherently verifiable paradigm emerges.
At a recent meeting of Alan Turing Institute Faculty Fellows at Edinburgh and some other venues, I’ve been promoting this view that more needs to be done to make the specification of machine learning models transparent. As far as the non-expert user is concerned, there is the data and there are the patterns one is after. As far as the programmer is concerned, there is the data, generic formalisations of patterns, a modular programming paradigm that explicates the application of the patterns on the data.
|
Transparency in machine learning specifications
| 1
|
transparency-in-machine-learning-specifications-18ba23d63ff3
|
2018-05-06
|
2018-05-06 08:15:05
|
https://medium.com/s/story/transparency-in-machine-learning-specifications-18ba23d63ff3
| false
| 382
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Vaishak Belle
|
Ideas on artificial intelligence, and goings-on.
|
fd5179ee73b5
|
vaishakbelle
| 47
| 0
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-07
|
2018-04-07 05:50:00
|
2018-04-07
|
2018-04-07 06:02:45
| 2
| false
|
en
|
2018-09-30
|
2018-09-30 07:44:12
| 0
|
18bb319d2368
| 3.636164
| 0
| 0
| 0
|
Courtesy: pcmag.com
| 5
|
AI and Automation Testing
Courtesy: pcmag.com
Let us take a moment to ignite your wonder.
We can bet that the enchantment of Artificial intelligence and autonomous machines have graced your ears for a while now, and there is good reason for it. Over the last decade, the idea of Data-singularity has charmed several technical enthusiasts around the world. The amount of data generated has grown exponentially every fleeting moment owing to improved hardware and networking technology. With the advent of this brilliant Age of Data, came the power to analyze them and produce unfathomable results. Who could have imagined two decades back in the line, that we could look into the future with a click of a few buttons?!
Today we can foretell stock prices, predict life-endangering diseases before they reach malignance or even take roots, create self-driving cars and bikes, penalize frauds before the shock hits, and do so much more in no time!
We are living in an era once only dreamed of, and for your own sake, be proud my friend!
Speaking of the dynamic architecture of Data, humans are falling short of enough resources to handle them. We do not have enough man power to analyze every bit that gets generated. We do not even have the perfect machinery (which will be a mouthwatering quantum computer!) that will do the job for us as efficiently as an experienced mind would.
It is said that an average digital user generates about half a gigabyte of data every day. Imagine what the users on the high-end of data generation produce each day. Overall, humans produce about 17 Zettabytes of data every year, and the growth sees no end, and we are glad about that!
Today we have highly sophisticated algorithms to learn the patterns hidden in these monstrous data chunks, but sadly enough we do not possess enough people to sit by the system and administer the entire process. One surely must have heard about the immense demand for data architects and scientists, and how the supply of able candidates is running awfully short! There are takers no doubt, but the world needs more than just a taker. It needs able takers.
In today’s mad digital world, humans do not tire of creation. Developers all around the world are busier than bees making products which just might change the entire course of the human race. Until creation it is all well and good. However, we cannot just deploy a time bomb thinking it is a stop watch!
To prevent the harassment on the user-end and the embarrassment on the developer-end, we need expert testers who can give us a better guarantee for the product. In fact, testers provide the basic roots of trust a product is characterized with and without them, a product is basically left to fend for itself.
However, as the saying goes, “with power, comes great responsibility”. Testers are important and powerful entities because they are responsible for such a huge chuck in the process of creation. Every time a developer thinks of introducing the minutest change in the application, a tester has to go through testing the entire product all over again and it is vital that he does, because even a tiny worm can host the ability to tumble an entire system. The ones who suffered the pangs of diarrhea will know what we are talking of!
This rigorous check brings us to the problem of time shortage and huge investments. Most resources are spent on skilled workforce who devote extraordinary portions of their hours to solve the problem of testing every time a product is enhanced or a bug is removed because removing a bug may introduce few more. (Yes, the irony of it!)
But not to fret testers, the new world tends to have a solution for every problem!
Introducing the AI solution of Automated Testing (drum rolls please!)!!
AI in testing will be able to detect similar patterns in bug infestation and inform the tester which areas to pounce on first, thus saving a lot of time and resources. Based on a given change in the product, it will be able to identify the kind of testing route to be followed which will top efficiency. After identification, it will be able to apply the planned modules on the product to lookout for malfunctions.
Courtesy: indiamart
Wait though, do not go over the moon just yet, this does not mean that AI will render the process completely automated. That would mean a massive cut down of jobs, as yummy as it sounds for the investors, but get back down here on earth!
Humans will still be required to administer the process of automated testing to aid updates with new data, new techniques and innovations to solve testing issues and more. Automation will just mean that testers will be able to move a step forward in the line of innovation. Their work pattern will be different, more efficient and precise and they will be able to exert their mind on more promising innovations instead of repeated manual labor that testing ensues.
Say hello to the new world already!
|
AI and Automation Testing
| 0
|
some-leave-before-they-get-their-hearts-churned-only-to-leave-the-other-in-utter-confusion-18bb319d2368
|
2018-09-30
|
2018-09-30 07:44:12
|
https://medium.com/s/story/some-leave-before-they-get-their-hearts-churned-only-to-leave-the-other-in-utter-confusion-18bb319d2368
| false
| 862
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Samadrita Ghosh
|
Bibliophile and Data science enthusiast.
|
6b25c88efb9c
|
ghoshsamadrita3
| 6
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
190276d5e8de
|
2018-03-05
|
2018-03-05 22:50:31
|
2018-03-07
|
2018-03-07 17:01:01
| 1
| false
|
en
|
2018-03-09
|
2018-03-09 22:58:37
| 3
|
18bb81f91ebe
| 1.849057
| 55
| 1
| 0
|
One of a business traveler’s worst nightmares is to hear “your flight has been cancelled, please report to the customer service desk or…
| 5
|
Adding More Fuel to Take TripActions to New Heights
One of a business traveler’s worst nightmares is to hear “your flight has been cancelled, please report to the customer service desk or call the airline help line.”
I don’t stress, though. As a TripActions user, I know I’m covered. Not only did I save my company money when I booked my trip through TripActions (27% savings, on average), but I also benefited from the industry’s most technically advanced support system. You see, TripActions’ Artificial Intelligence system quietly monitors my trip, notifies me of delays or cancellations, and pro-actively finds a solution. Through chat, email or phone, TripActions finds a travel alternative in minutes, saving me the hassle of lining up at the customer service desk or waiting on hold to talk to the airline.
It seems like yesterday that I proudly announced that I had joined the board of directors at TripActions and that we had led the close of their $27M in Series A financing. In fact, that was back in April 2017. In the past 10 months, the company has executed at a break-neck pace and has expanded its product offering to build the industry’s most modern and user-friendly business travel platform. Powered by Artificial Intelligence and built on the cloud, the TripActions platform delivers personalized travel search results to users across a large, unbiased inventory, and also goes above and beyond in supporting users once they have left home.
The company has added hundreds of new customers across the mid-market and SMB and has become the fastest growing online business travel agency in the world.
Today, we are thrilled to announce that based on the company’s outstanding performance since the last financing, we proactively decided to deepen our partnership with the TripActions team. We are excited to lead the company’s $51M Series B financing (with strong participation from our friend Oren Zeev) so the company can continue to strengthen its product and expand its global footprint as it executes on its plan to built the world’s largest and most modern corporate travel management company.
At Lightspeed, we’re extremely excited about our partnership with the TripActions team as they rethink the Business Travel Management space and bring the power of machine learning, cloud computing and modern mobile-interfaces to business travelers. For more information on TripActions, visit www.tripactions.com
Arif Janmohamed is a Partner at Lightspeed Venture Partners. He lends significant cloud, mobile and analytics expertise to the TripActions team. In his free time, Arif plays ice hockey with his wife, who yells at him for never passing the puck to her.
|
Adding More Fuel to Take TripActions to New Heights
| 1,405
|
adding-more-fuel-to-take-tripactions-to-new-heights-18bb81f91ebe
|
2018-03-31
|
2018-03-31 05:26:51
|
https://medium.com/s/story/adding-more-fuel-to-take-tripactions-to-new-heights-18bb81f91ebe
| false
| 437
|
Lightspeed is an early stage VC firm focused on accelerating disruptive innovations and trends in the enterprise and consumer sectors. Lightspeed has backed 300+ companies globally in the past two decades including Snap, Honest Co., GIPHY, Nest, Nutanix, AppDynamics and MuleSoft.
| null |
Lightspeed
| null |
Lightspeed Venture Partners
|
meredith@lsvp.com
|
lightspeed-venture-partners
|
VENTURE CAPITAL,STARTUP,TECH,ENTERPRISE TECHNOLOGY,CONSUMER TECHNOLOGY
|
lightspeedvp
|
Startup
|
startup
|
Startup
| 331,914
|
Arif Janmohamed
|
Venture Capitalist at Lightspeed Venture Partners. Canadian. Dad.
|
eea8bad2409a
|
arifj
| 1,024
| 426
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-22
|
2017-09-22 20:47:08
|
2017-09-23
|
2017-09-23 00:50:59
| 6
| false
|
en
|
2017-10-05
|
2017-10-05 06:42:20
| 7
|
18be24b8615f
| 4.112264
| 41
| 1
| 0
|
In 2013 Kaggle ran the very popular dogs vs cats competition. The objective was to train an algorithm to be able to detect whether an image…
| 5
|
Can we beat the state of the art from 2013 with only 0.046% of training examples?
In 2013 Kaggle ran the very popular dogs vs cats competition. The objective was to train an algorithm to be able to detect whether an image contains a cat or a dog.
At that time, as stated on the competition website, the state of the art algorithm was able to tell a cat from a dog with an accuracy of 82.7% after having been trained on 13 000 cat and dog images.
My results
I applied transfer learning which is a technique where you take a model trained to carry out some other though similar task and you retrain it to do well on the task at hand.
I fine tuned a VGG19 model on a total of 6 randomly selected images (you can find the pictures of our protagonists below).
I achieved an accuracy of 89.97% after 41 epochs of training. The validation set size was 24 994.
Being a fan of reproducible research, please find everything you need to run the experiment yourself in my repository on github.
What happened
This is thoroughly unexpected. The technique that I used is covered in the first lecture of Practical Deep Learning for Coders, part 1. In the Jupyter notebook provided with the course, it takes 7 lines of code to perform transfer learning.
This means that anyone who can move files around on a computer can apply this cutting-edge technology to a problem of their choosing. Medical diagnosis, anomaly detection, industrial applications of image recognition, you name it. Yes, you still need some data and you still need to have some high level understanding of what supervised learning is and how it works, but that’s about it.
Summary
The results are staggering. I didn’t have to apply data augmentation, didn’t adjust the learning rate nor had to care about regularization. I didn’t even test different architectures — this is literally the first one I tried.
And yes, one could say that telling a cat from a dog in a picture is not rocket science. But let me remind you that we managed to land a man on the moon and still 40 years later we were unable to tell our computers how to perform on this seemingly simple task with above 85% accuracy. And yes, it is true that the model I picked to fine tune was trained to perform well on visual recognition tasks.
But wait a second — think on the first two paragraphs of this post for a second please. We are beating state of the art results from 4 years ago and doing so effortlessly. I am running a supercomputer in the cloud at a cost of ~ $0.20 an hour (that is how much I pay Amazon for renting out the virtual machine). And state of the art means literally the best technique in the world applied to a specific problem. This is very significant.
This demonstrates that the limits of applications of Deep Learning today are no longer driven by technology — we have the hardware and the software needed. And yes, for some tasks we will need even faster processing units, even more data, even better algorithms. But there exists a universe of applications of Deep Learning today that is waiting to be discovered and the limiting factor is how quickly the knowledge of this technology spreads.
So coming from a person who quit college after a year and a half of majoring in sociology, who learned to program on his own as an adult and is by no means a programming guru, and who with just one afternoon’s worth of work beat the state of the art results from 4 years ago with only 1/2166th the data, my question to you today is this — what application of this technology will you invent to make the world a better place?
PS. Machine Learning Attacks Against the Asirra CAPTCHA by Phillipe Golle is the paper on the state of the art solution from 2013.
PS. 2 The winning entry to the Kaggle dogs vs cats competition had an accuracy of 98.914% and was achieved after carefully training a machine learning system on 25 000 images.
Further discussion of results: After I shared the article on Twitter, it led to a very interesting discussion that you can find here.
One very valuable comment was made with regards to the original VGG19 model being trained on classes that contained cat and dog breeds. I was hoping to only use the convolutional layers for essentially shape and low level feature detection, but quite likely they also contain higher level information. If that were to be the case, than the fully connected layers I added might not be doing a lot of original work and could just be learning to listen to the original convolutional layers providing them the answers.
If you found this article interesting and would like to connect, you can find me on Twitter here.
|
Can we beat the state of the art from 2013 with only 0.046% of training examples?
| 162
|
can-we-beat-the-state-of-the-art-from-2013-with-only-0-046-of-training-examples-yes-we-can-18be24b8615f
|
2018-06-10
|
2018-06-10 15:10:39
|
https://medium.com/s/story/can-we-beat-the-state-of-the-art-from-2013-with-only-0-046-of-training-examples-yes-we-can-18be24b8615f
| false
| 838
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Radek Osmulski
|
I ❤️ ML / DL ideas — I tweet about them / write about them / implement them. Self-taught RoR developer by trade.
|
4b74af654f57
|
radekosmulski
| 663
| 33
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-16
|
2017-12-16 03:05:57
|
2017-12-16
|
2017-12-16 03:09:11
| 1
| false
|
en
|
2017-12-16
|
2017-12-16 03:09:11
| 1
|
18bf19c2075e
| 0.381132
| 2
| 0
| 0
|
Bestseller 23+ Hours Top Rated Hurry [100 % OFF ] Deep Learning A-Z™: Hands-On Artificial Neural Networks #YourSmartyBro
| 5
|
Bestseller [Udemy 100% Off] 23+ Hours Deep Learning A-Z™: Hands-On Artificial Neural Networks Now On SmartyBro For Free
Bestseller 23+ Hours Top Rated Hurry [100 % OFF ] Deep Learning A-Z™: Hands-On Artificial Neural Networks #YourSmartyBro
>> https://smartybro.com/2017/12/16/deep-learning-z-hands-artificial-neural-networks/
|
Bestseller [Udemy 100% Off] 23+ Hours Deep Learning A-Z™: Hands-On Artificial Neural Networks Now…
| 91
|
bestseller-udemy-100-off-23-hours-deep-learning-a-z-hands-on-artificial-neural-networks-now-18bf19c2075e
|
2018-04-20
|
2018-04-20 07:04:03
|
https://medium.com/s/story/bestseller-udemy-100-off-23-hours-deep-learning-a-z-hands-on-artificial-neural-networks-now-18bf19c2075e
| false
| 48
| null | null | null | null | null | null | null | null | null |
Udemy
|
udemy
|
Udemy
| 1,922
|
SmartyBro — Udemy Free Courses
|
free online courses everyday from most popular learning platform only at Udemy
|
b21d63ebb664
|
SmartyBro
| 1,276
| 1,105
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-22
|
2018-05-22 04:53:24
|
2018-05-22
|
2018-05-22 05:00:48
| 1
| false
|
en
|
2018-05-23
|
2018-05-23 11:41:47
| 2
|
18bf95294da
| 1.849057
| 6
| 0
| 0
|
By Ravishankar, Co-founder & CEO of Active.Ai
| 5
|
Digital Davids vs Financial Goliaths
By Ravishankar, Co-founder & CEO of Active.Ai
We live in an interesting time of disruptions. Like many industries, the financial services sector is facing its Kodak moment. The incumbents have the experience, the challengers are creating the experience. Here are a few of the many moments that I feel will define the next decade for the incumbents.
(The views expressed here are of the Author’s and does not reflect the views of the Company).
1. Billion Users Challenge: Nearly half our planet is digitally connecting on a monthly basis. Therefore, to build a winning Digital experience, incumbents will need to ‘un-think’ their bank’s or insurance plays. Only WeChat, Alibaba, Google, Facebook and Tencent have the experience of dealing with a Billion users a month. All the banks in the world put together, don’t connect a Digital Billion monthly. Digital scale for incumbents is something new and challenging.
2. Incumbents should start with a ‘100 Million Digital Users’ mindset, that will enable creating a better stack. This is different from building core banking for 100 Million accounts.
3. A Digital Bank cannot be built with a Channel mindset, simply because ‘Digital’ is not a channel.
4. Valuation for banks will shift from Price/Book ratios. Revenue per digitally-engaged customers will soon be a metric.
5. Building ‘Digital’ is a culture. A mindset. One cannot build a Digital Bank using talent or mindset from the channel era.
6. Future banking licenses are going to be on Tap. Many Retailers/ eTailers/ Internet Giants have more customers, net worth and technology than all banks put together. Banking is not a protected industry anymore. Incumbents need to think of digital engagement as key differentiator.
7. Digital is borderless. None of the incumbent banks have a successful global model. Internet giants boasts proven global models. Building strong local Digital models is imperative for incumbents.
8. Digital is a collaboration. It is OPEN. Incumbents need to build open API stacks or else, they will lose value every year.
9. Digital customers are connected by device, APIs, social, data, voice and cloud. Incumbent banks connect to customers via account, card, ATM, branch and call centre — all are starting to fail.
10. Digital giants will acquire financial licenses — This is a few dollars in a monster revenue stream. Incumbents will have to acquire digital players — Nearly unaffordable.
Will Goliath unwittingly overestimate his strength, by relying on his size and experience, and confront every battle with the same strategy? Or will Goliath allow himself to be outwitted by David’s unprecedented approach? Remember, according to legend, David wins.
|
Digital Davids vs Financial Goliaths
| 40
|
digital-and-banks-who-is-the-david-and-the-goliath-18bf95294da
|
2018-05-23
|
2018-05-23 19:53:40
|
https://medium.com/s/story/digital-and-banks-who-is-the-david-and-the-goliath-18bf95294da
| false
| 437
| null | null | null | null | null | null | null | null | null |
Fintech
|
fintech
|
Fintech
| 38,568
|
Let's speak the language of the future
|
Active.Ai understands unstructured data & help Financial Services design intelligent conversations that's predictive, intuitive on mobile, chat & IOT devices.
|
831d52b7eda5
|
activeaibot
| 25
| 28
| 20,181,104
| null | null | null | null | null | null |
0
|
def answer_three():
Top15 = answer_one()
avgGDP = Top15.loc[:, '2006':'2015'].mean(axis=1).sort_values(ascending=False)
return avgGDP
answer_three()
def answer_four():
Top15 = answer_one()
ranked_6th_country = answer_three().index[5]
ranked_6th_2015 = Top15.loc[ranked_6th_country, '2015']
ranked_6th_2006 = Top15.loc[ranked_6th_country, '2006']
return ranked_6th_2015 - ranked_6th_2006
answer_four()
## 246702696075.3999
def answer_five():
Top15 = answer_one()
ans = Top15['Energy Supply per Capita'].mean()
return ans
answer_five()
## 157.6
def answer_six():
Top15 = answer_one()
max_idx = Top15['% Renewable'].idxmax() # 找出最大值的所在位置
max_val = Top15['% Renewable'].max() # 找出最大值
return max_idx, max_val
answer_six()
## ('Brazil', 69.64803)
def answer_seven():
Top15 = answer_one()
Top15['Self-citations Ratio'] = Top15['Self-citations'] / Top15['Citations']
max_idx = Top15['Self-citations Ratio'].idxmax()
max_val = Top15['Self-citations Ratio'].max()
return max_idx, max_val
answer_seven()
## ('China', 0.6893126179389422)
def answer_eight():
Top15 = answer_one()
Top15['Population'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15 = Top15.sort_values('Population', ascending=False)
ranked_3rd = Top15.index[2]
return ranked_3rd
answer_eight()
## 'United States'
def answer_nine():
Top15 = answer_one()
Top15['Population'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['Citable documents per capita'] = Top15['Citable documents'] / Top15['Population']
ans = Top15['Citable documents per capita'].corr(Top15['Energy Supply per Capita'])
return ans
answer_nine()
## 0.7940010435442946
def answer_ten():
Top15 = answer_one()
median_val = Top15['% Renewable'].median()
HighRenew = Top15['% Renewable'].map(lambda x: 1 if x >= median_val else 0)
return HighRenew
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
def answer_eleven():
Top15 = answer_one()
Top15['Population'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15 = Top15.reset_index()
Top15['Continent'] = Top15['Country'].map(ContinentDict)
size = Top15.groupby('Continent')['Country'].count()
pop_sum = Top15.groupby('Continent')['Population'].sum()
pop_mean = Top15.groupby('Continent')['Population'].mean()
pop_std = Top15.groupby('Continent')['Population'].std()
df = pd.concat([size, pop_sum, pop_mean, pop_std], axis = 1)
df.columns = ['size', 'sum', 'mean', 'std']
return df
def answer_twelve():
Top15 = answer_one()
Top15 = Top15.reset_index()
Top15['% Renewable Bin'] = pd.cut(Top15['% Renewable'], bins=5, labels=["lowest", "low", "medium", "high", "highest"])
Top15['Continent'] = Top15['Country'].map(ContinentDict)
ans = Top15.groupby(['Continent', '% Renewable Bin'])['Country'].count()
return ans
def answer_thirteen():
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['PopEst'] = Top15['PopEst'].map('{:,}'.format)
return Top15['PopEst']
| 12
|
ee45562e3de2
|
2018-03-27
|
2018-03-27 08:05:02
|
2018-03-27
|
2018-03-27 10:13:23
| 8
| false
|
en
|
2018-04-15
|
2018-04-15 02:05:27
| 10
|
18bfbee6ced
| 5.408805
| 7
| 0
| 0
|
University of Michigan Introduction to Data Science in Python 的第三週作業:More Pandas(Part 2)
| 5
|
再生能源指標資料探索(下)
University of Michigan Introduction to Data Science in Python 的第三週作業:More Pandas(Part 2)
Photo by Jason Blackeye on Unsplash
探索國家再生能源指標資料,依照題目回答多個問題。
喜歡 DataInPoint 的文章嗎?成為我們的贊助者吧!
DataInPoint is creating Data Science Tutorials | Patreon
Become a patron of DataInPoint today: Read posts by DataInPoint and get access to exclusive content and experiences on…www.patreon.com
作業來源
University of Michigan 在 Coursera 開設的 Python 資料科學應用是一門輕量著重資料科學套件的專項課程,其中共有五門課程,適合已經有 Python 程式設計基礎,但想在短時間內暸解 Python 在資料科學領域應用的學習者。Introduction to Data Science in Python 是這個專項課程中的第一門課,內容包含 Python 程式設計、pandas 簡介、pandas 進階與基礎統計分析。
跟 DataInPoint 一起完成課程中的 Programming Assignments 吧,我們今天要寫第三週要繳交作業的第二部分!
這個練習所有的程式、資料與輸出結果,都可以在這個 Google Colaboratory文件中檢視,喜歡英文介面的讀者,可點選幫助,再點選查看英文版。
um_intro2_ds_py_assignment3_part2.ipynb
Colaboratory notebookdrive.google.com
University of Michigan Applied Data Science with Python Specialization: Introduction to Data Science in Python
問題三:計算每個國家十年的平均 GDP
What is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.)
This function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order.
使用 .mean() 方法並且指定 axis=1 這個意思是計算觀測值方向的平均數,而非欄位方向,並再呼叫 .sort_values(ascending=False) 方法做遞減排序。
問題四:平均 GDP 排名第六的國家 2015 年的 GDP 與 2006 年相差為多少?
By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?
This function should return a single number.
排名第六的國家為英國,透過 iloc() 方法將其 2015 年與 2006 年的 GDP 分別挑出來相減即可。
問題五:計算平均 Energy Supply per Capita
What is the mean Energy Supply per Capita?
This function should return a single number.
問題六:哪個國家的 % Renewable 最高,該比例為多少?
What country has the maximum % Renewable and what is the percentage?
This function should return a tuple with the name of the country and the percentage.
問題七:新增一個變數論文自我引用比例,並回傳該比例最高的國家與比例值
Create a new column that is the ratio of Self-Citations to Total Citations. What is the maximum value for this new column, and what country has the highest ratio?
This function should return a tuple with the name of the country and the ratio.
問題八:新增一個變數人口數,利用 Energy Supply 與 Energy Supply per capita 來估算,回傳人口數第三多的國家名稱
Create a column that estimates the population using Energy Supply and Energy Supply per capita. What is the third most populous country according to this estimate?
This function should return a single string value.
用 Energy Supply 除以 Energy Supply per capita 得到 Population ,接著依照 Population 遞減排序,取出第三個國家名稱。
問題九:建立一個新變數為每人可引用文章數,接著計算該變數與 Energy Supply per capita 的相關性(使用 Series 的 .corr() 方法)
Create a column that estimates the number of citable documents per person. What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method, (Pearson's correlation).
This function should return a single number.
(Optional: Use the built-in function plot9() to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)
問題十:建立一個新變數 HighRenew 假使再生能源比例比前 15 國家中位數高的標記為 1,否則標記為 0
Create a new column with a 1 if the country’s % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country’s % Renewable value is below the median.
This function should return a series named HighRenew whose index is the country name sorted in ascending order of rank.
我們使用 map() 方法搭配一個 lambda 函數,將小於中位數的標記為 0,大於等於中位數的標記為 1。
問題十一:利用題目提供的 dict 將國家對應出洲別,然後歸納出各洲的國家數、人口總數、人口平均數與人口標準差
Use the following dictionary to group the Countries by Continent, then create a dateframe that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each continent.
This function should return a DataFrame with index named Continent ['Asia', 'Australia', 'Europe', 'North America', 'South America'] and columns ['size', 'sum', 'mean', 'std']
與前一題相同利用 map() 方法可以讓 ContinentDict 的 Key-value 對應為國家的洲名,然後利用 groupby() 方法計算題目所要求的統計數值。
問題十二:將前十五名的國家利用 % Renewable 切分為五組,並且計算每個組別、各洲有幾個國家
Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?
This function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.
利用 cut() 方法然後在 groupby() 中指定 Continent 與 % Renewable Bin 兩個變數作為分組後的索引。
問題十三:將人口估算數轉換為文字的 Series 並且使用千分位有逗點的格式
Convert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.
e.g. 317615384.61538464 -> 317,615,384.61538464
This function should return a Series PopEst whose index is the country name and whose values are the population estimate string.
利用 {:,} 這個格式指定即可。
喜歡 DataInPoint 的文章嗎?成為我們的贊助者吧!
DataInPoint is creating Data Science Tutorials | Patreon
Become a patron of DataInPoint today: Read posts by DataInPoint and get access to exclusive content and experiences on…www.patreon.com
延伸閱讀
如果您想多暸解 Python 與 pandas,推薦參考 DataCamp 的 pandas 相關課程:
pandas Foundations | DataCamp
Learn how to use the industry-standard pandas library to import, build, and manipulate DataFrames.www.datacamp.com
Manipulating DataFrames with pandas | DataCamp
You will learn how to tidy, rearrange, and restructure your data using versatile pandas DataFrames.www.datacamp.com
Cleaning Data in Python | DataCamp
Data scientists spend 80% of their time cleaning and manipulating data. This course will equip you with all the skills…www.datacamp.com
Introduction to Data Science in Python | Coursera
About this course: This course will introduce the learner to the basics of the python programming environment…www.coursera.org
|
再生能源指標資料探索(下)
| 63
|
university-of-michigan-introduction-to-data-science-in-python-的第三週作業-more-pandas-part-2-18bfbee6ced
|
2018-04-15
|
2018-04-15 02:05:28
|
https://medium.com/s/story/university-of-michigan-introduction-to-data-science-in-python-的第三週作業-more-pandas-part-2-18bfbee6ced
| false
| 1,133
|
DataInPoint 是一個超棒的資料科學專欄,主題涵蓋資料、程式、機器學習與高效能運算。
| null |
412106045903591
| null |
DataInPoint
|
tonykuoyj@gmail.com
|
datainpoint
|
PROGRAMMING,VISUALIZATION,MACHINE LEARNING,DATA
|
DataInPoint
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Yao-Jen Kuo
|
Could that data be any tidier? It is always nice to meet a data enthusiast who is also a marathon runner and a ping-pong lover.
|
ec74f573f855
|
tonykuoyj
| 967
| 182
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
64a944a98297
|
2018-04-17
|
2018-04-17 13:26:41
|
2018-04-17
|
2018-04-17 13:36:20
| 2
| false
|
pt
|
2018-04-17
|
2018-04-17 13:37:04
| 1
|
18c3579c2e25
| 0.575786
| 5
| 0
| 0
|
Reservem seu lugar em nosso novo encontro de abril. Lembrando que o dinheiro cobrado na inscrição do evento é apenas para cobrir os custos…
| 4
|
Meetup #8: Vamos falar sobre as pessoas que cuidam da engenharia dos dados
Reservem seu lugar em nosso novo encontro de abril. Lembrando que o dinheiro cobrado na inscrição do evento é apenas para cobrir os custos de bebidas, comidas e taxas. Segue o link do Meetup. Junte-se a nós!
|
Meetup #8: Vamos falar sobre as pessoas que cuidam da engenharia dos dados
| 107
|
meetup-8-vamos-falar-sobre-as-pessoas-que-cuidam-da-engenharia-dos-dados-18c3579c2e25
|
2018-04-25
|
2018-04-25 17:33:19
|
https://medium.com/s/story/meetup-8-vamos-falar-sobre-as-pessoas-que-cuidam-da-engenharia-dos-dados-18c3579c2e25
| false
| 51
|
Talks given at meetup.com/Vix-Data-Meetup
| null | null | null |
Vix Data Meetup
| null |
vix-data-meetup
|
DATA ENGINEERING,DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Fernando Fiorot Massariol
| null |
c7662edcd87c
|
fernandofiorotmassariol
| 22
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-16
|
2018-07-16 17:29:56
|
2018-07-16
|
2018-07-16 18:33:56
| 1
| false
|
en
|
2018-07-18
|
2018-07-18 15:38:31
| 9
|
18c3f8890034
| 3.849057
| 0
| 0
| 0
|
It was when I noticed the Bitcoin receipts in his car I understood it. Logging into our shared Chase bank account I realized with horrors…
| 5
|
Scam victim? Your grocery store protects you better than your bank
It was when I noticed the Bitcoin receipts in his car I understood it. Logging into our shared Chase bank account I realized with horrors that my husband had fallen victim to an IRS scam. Professional actors had duped him into emptying our bank account, which we recently had replenished with extra funds from refinancing our house, and convinced him to put all the cash into a Bitcoin machine.
Yes, I was stunned at how he could fall for it. But like bugs adapting to antibiotics, scammers are upping their games as well. Using a carefully chosen fake phone number, the right mixture of psychologically coercive power and modern technology, professional scam artists managed to rob a highly educated person with over 20 years of professional experience. The scammers managed to convince him they were federal agents ready to issue an immediate arrest order if he didn’t comply. Not complying would have included answering my phone calls which were many during the hours of my husband’s disappearance. He was instructed to stay on the phone for the duration of 6 hours and make 4 different withdrawals from 4 different Chase branches, followed by deposits to swindlers he thought were unforgiving tax authorities. 6 hours when our twin daughters missed lunch because their father never showed up at school. 6 hours when I had no clue what was happening to my husband.
The only other human beings he talked to during this swindle call were the bank tellers at Chase. The presumed gatekeepers of our money. Instead, Chase’s tellers had been instructed to comply robotically with the withdrawal requests, failing to do the only job a human teller can do better than an ATM machine — judging the situation. 4 times!
How could Chase let this happen? Why did none of the Chase tellers notice the signs of stress he was under while on the phone with the scam artist? While at their counter. Why did none of them pay attention to the fact that he made similar suspicious withdrawals at other branches less than an hour before? How could they fail to notice the signals when these scams had already happened 27 millions in one year? How can a bank which prides itself on alerting you of potential fraud attempts on your card be oblivious to or uninterested in the robbery that happens inside their own branches?
It’s because when your card is misused, the bank must replace the losses. But when you make your own withdrawal, it’s none of their business. Even when you withdraw with a proverbial gun to your head — four times in a row at four different branches — Chase Bank won’t lift a finger to protect you. Since it’s not their losses, apparently they don’t care.
I discussed the incident with a friend who shared with me that her stepfather had recently suffered a similar scam attempt. But thankfully, her stepfather‘s scam artist directed him to the customer services section of his local grocery store, HEB, instead of the bank. The customer service center of HEB is where people go to pay overdue bills like their electricity or water bill. During my friend’s step father’s single visit a local HEB to comply with his scammers’ demands, the customer service representatives of HEB knew immediately what was going on. They had noticed several similar attempts before, and alerted the poor gentleman about it. Conversely, the bank tellers at Chase, whose main job is to protect our money, allowed my husband to be scammed four times in the same day!
Unwanted telephone calls and scams to consumers increased 76% in 2017, according to data analysis by Hiya, a free call-screening app. Con artists now use modern technology to warp children’s voice data to convince their parents they have been kidnapped. Demanding ransoms from desperate parents is easy when they hear the simulated screams from their children being tortured in the background. These scams are increasing, and they are ruining people’s lives. And what does your bank do? Nothing.
Oblivious to the fact that local grocery stores do more to protect your personal finances than the country’s largest bank does, Chase’s staff explained offhandedly that they take no responsibility for their lack of proactive intervention. The company’s complacency is interesting since 92% of Millennials distrust banks and the financial sector tops the index of disruptable industries. (And no, offering a credit card with Amazon is not disruptive.) Chase’s only justification for its real-estate hogging existence is that it employs human tellers who presumably can make case-by-case judgements that are still hard to automate. It’s ironic that the bank’s only competitive advantage is in fact its weakest link. When human bank tellers are programmed to act robotically, they are no better than the ATM machine that uses a lower withdrawal limit precisely because machines are too “dumb” to notice suspicious activities. Your money will be safer if you go with a truly digital bank. Or keep your money in your mattress.
Banks like Chase neglect their fiduciary responsibility by failing to adapt to and act on this insidious trend affecting so many of their customers. In a time when your aging relatives are likely to be some of the 95% targeted for phone scams, you will need financial partners that are on top of their game. Who use both human common sense and artificial intelligence to screen the underlying motives when your grandma is extorted out of her pension inside her bank’s local branch. Big banks like Chase have proven that they are not one of them — yet. Unless they step up to the plate very soon, get your family out before someone you love is the next victim.
|
Scam victim? Your grocery store protects you better than your bank
| 0
|
scam-victim-your-grocery-store-protects-you-better-than-your-bank-18c3f8890034
|
2018-07-18
|
2018-07-18 15:38:31
|
https://medium.com/s/story/scam-victim-your-grocery-store-protects-you-better-than-your-bank-18c3f8890034
| false
| 967
| null | null | null | null | null | null | null | null | null |
Banking
|
banking
|
Banking
| 14,612
|
Anne Boysen
|
Futurist, Generational researcher
|
8fd203c84276
|
aftermillennial
| 351
| 406
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-19
|
2018-03-19 22:19:38
|
2018-03-19
|
2018-03-19 22:20:17
| 1
| false
|
en
|
2018-03-19
|
2018-03-19 22:20:17
| 1
|
18c43764f7e8
| 0.215094
| 0
| 0
| 0
|
http://ico.inspem.com
| 4
|
Are you ready?
http://ico.inspem.com
|
Are you ready?
| 0
|
are-you-ready-18c43764f7e8
|
2018-03-19
|
2018-03-19 22:20:18
|
https://medium.com/s/story/are-you-ready-18c43764f7e8
| false
| 4
| null | null | null | null | null | null | null | null | null |
Inspem
|
inspem
|
Inspem
| 14
|
inspem
| null |
c4a942f3b746
|
inspem
| 11
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-27
|
2017-11-27 14:08:10
|
2017-11-21
|
2017-11-21 08:22:03
| 1
| false
|
en
|
2017-11-27
|
2017-11-27 14:13:23
| 5
|
18c4a2514904
| 0.890566
| 0
| 0
| 0
|
I’ve written two How-Tos which describe how to use the Watson Conversation tools Ansgar Schmidt and I had open sourced.
| 5
|
Watson Conversation How-Tos have been published
I’ve written two How-Tos which describe how to use the Watson Conversation tools Ansgar Schmidt and I had open sourced.
The new page on developer.ibm.com contains How-Tos which describe everything that might be of interest for developers using the IBM Cloud. In contrast to patterns How-Tos are typically shorter and sometimes contain ‘only’ documentation and no source code.
Inspecting Watson Conversations REST Requests
“This how-to explains how to set up and use the community tool, Conversation Inspector for IBM Watson, to display and manipulate the JSON that is sent between applications and Watson Conversation services. This is useful when you use context variables extensively to manage state information.”
Open How-To
Optimizing Watson Conversations based on Usage Analytics
“This how-to explains the way to set up and use the community tool Conversation Optimizer for IBM Watson to provide various views and find conversation flows that have potential issues and might require improvements in addition to the Watson Conversation improve component.”
Open How-To
Originally published at heidloff.net on November 21, 2017.
|
Watson Conversation How-Tos have been published
| 0
|
watson-conversation-how-tos-have-been-published-18c4a2514904
|
2018-05-24
|
2018-05-24 08:10:59
|
https://medium.com/s/story/watson-conversation-how-tos-have-been-published-18c4a2514904
| false
| 183
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Niklas Heidloff
|
Developer Advocate. I like learning, conversational UX, serverless and BBQ. Father of five. Working for IBM. Opinions are my own.
|
3ab6f24574c0
|
nheidloff
| 36
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
from sklearn.naive_bayes import GaussianNB
# Se importa el modelo del paquete. El modelo puede ser regresión logística, Bayesiano ingenuo, árbol de decisión, etc.
clf = GaussianNB()
# Se crea una instancia del modelo.
clf.fit(X1, y1)
# Se entrena al modelo con las características (X1) y las predicciones/clasificaciones conocidas (y1)
clf.predict(X2)
# Se predicen nuevos datos con el mismo tipo de características (X2)
from sklearn.cluster import KMeans
# Se importa el modelo del paquete
kmeans = KMeans(n_clusters=5).fix(X)
# El modelo se entrena para agrupar cada observación en 5 grupos.
kmeans.label_
# Después de entrenar al modelo, el objeto "kmeans" contiene un atributo con el grupo asignado a cada observación.
| 7
| null |
2018-05-08
|
2018-05-08 03:04:20
|
2018-05-10
|
2018-05-10 20:04:19
| 3
| false
|
es
|
2018-05-14
|
2018-05-14 14:43:52
| 20
|
18c528f1901d
| 8.768868
| 5
| 0
| 0
|
Para la mayoría de las personas, decir que algo funciona a través de aprendizaje automático o machine learning, es como decir que funciona…
| 5
|
Detrás de la magia del aprendizaje automático (machine learning)
Magic Behind the Curtain
Para la mayoría de las personas, decir que algo funciona a través de aprendizaje automático o machine learning, es como decir que funciona por arte de magia. No debería de ser así.
La mayoría de los ingenieros de software no entienden todos los detalles matemáticos sobre cómo funcionan estos algoritmos, y aún así los aplican. Tampoco es necesario ser científico o ingeniero para entender algunos de los conceptos básicos de cómo funcionan algunos de estos algoritmos, aunque sea superficialmente.
A grandes rasgos, las técnicas principales del aprendizaje automático incluyen: el aprendizaje supervisado (supervised learning), el aprendizaje no supervisado (unsupervised learning), el aprendizaje por refuerzo (reinforcement learning) y el aprendizaje profundo (deep learning).
Repasemos cada una de estas técnicas, primero usando un ejemplo para entender algunos de los conceptos básicos, y luego hablando de su implementación en Python.
Aprendizaje Supervisado (Supervised Learning)
Ciertos emails son filtrados directamente a nuestra carpeta de correo no deseado o SPAM. ¿Cómo sucede esto? Se entrenó a un modelo de aprendizaje supervisado.
El aprendizaje supervisado consiste en alimentar a un modelo con datos para entrenarlo a hacer predicciones acertadas. Se alimenta al modelo con datos que contienen características y un objetivo de clasificación. En este ejemplo, las características de entrada podrían ser la frecuencia con la que los emails contienen palabras o frases como: “¡GRATIS!”, “HA GANADO”, “PIERDA PESO”, etc. o simplemente el uso de mayúsculas. Por otra parte, cada uno de los emails con los que se alimenta al modelo también contienen su clasificación, es decir, si son SPAM o no. Por lo tanto, el modelo es entrenado a aprender cómo el uso de estas palabras o frases están asociadas a una mayor probabilidad de que el correo sea SPAM.
Justamente por eso se llama aprendizaje supervisado: porque el modelo aprende qué características de entrada predicen las clasificaciones de salida, bajo la supervisión de ejemplos previamente clasificados manualmente. Sin embargo, una vez entrenado, el modelo puede ser expuesto a nuevos datos cuya categoría se desconoce de antemano, y el modelo puede utilizar el mismo tipo de características de entrada para predecir su clasificación. Una vez entrenados, los modelos lo hacen a escala masiva sin necesidad de intervención manual.
Los algoritmos de aprendizaje supervisado tienen muchas aplicaciones, como por ejemplo, facilitar la detección de tumores malignos en radiografías. En casos como ese, es extremadamente importante entender qué tan acertadas son las predicciones.
https://es.wikipedia.org/wiki/Falso_negativo_(medicina)
Existen cuatro posibilidades: un verdadero negativo, un falso negativo, un falso positivo, y un verdadero positivo. Idealmente todas las predicciones serían verdaderas y nunca habría ninguna predicción falsa. Sin embargo, hay un balance frágil entre cómo se pueden minimizar los falsos positivos y los falsos negativos. Si uno quiere reducir los falsos positivos se pueden reducir la cantidad de predicciones positivas en total. A final de cuentas, no habría falsos positivos si el 100% de las predicciones fueran negativas. Sin embargo, los falsos negativos aumentarían a costa de la reducción en falsos positivos. Es decir, los radiólogos tienen que decidir el balance que quieren tener entre tener que asustar a un paciente con un diagnóstico maligno cuando en realidad es benigno (falso positivo), con el peligro de no diagnosticar a un paciente que debería someterse a tratamiento urgente (falso negativo). Esto es un ejemplo del tema que se llama de precisión y exhaustividad, que tiene grandes implicaciones en el aprendizaje supervisado.
Las matemáticas detalladas de cómo ajustar un modelo para mejorar las predicciones puede ser complicado. Sin embargo, empezar a utilizar estos modelos es más sencillo gracias a lenguajes de programación como Python y paquetes de fuente abierta como scikit-learn. De hecho, muchos modelos de aprendizaje supervisado pueden ser entrenados y hacer predicciones en tan solo 4 simples líneas de código.
Hay varios modelos que se pueden utilizar para crear un modelo sencillo de aprendizaje supervisado. Por ejemplo, el modelo de regresión logística, el modelo de Bayesiano ingenuo, el modelo de árbol de decisión, etc. También se pueden combinar estos modelos. Aunque interpretar los resultados y mejorar el rendimiento de estos modelos requiere conocimientos más profundos, es importante ver lo fácil que es empezar a utilizar estos algoritmos.
Aprendizaje No Supervisado (Unsupervised Learning)
¿Existen diferentes segmentos de clientes, o conjuntos de personas, con características compartidas ocultas dentro de un grupo mayor? Esta es una pregunta que se puede responder a través del aprendizaje no supervisado. El aprendizaje no supervisado funciona sin incluir predicciones conocidas de ante mano en el proceso de entrenamiento. En vez, el aprendizaje no supervisado descubre nuevas características ocultas en los datos sin necesidad de supervisar el proceso de entrenamiento.
Uno de los algoritmos más populares de aprendizaje no supervisado se llama agrupamiento k-means. Para entenderlo, utilicemos un ejemplo: Hay 5 autobuses listos para llevar a 5 grupos de niños a la escuela. ¿Dónde deberían de estar las paradas de autobús? ¿Cómo se pueden dividir los 5 grupos de niños?
En este ejemplo, el algoritmo k-means funciona de la siguiente manera:
1. Se escogen 5 paradas de autobús completamente al azar
2. Cada niño escoge la parada de autobús que le queda más cercana
3. Una vez asignados los niños a cada parada, para formar 5 conjuntos de niños, se posiciona o mueve la parada de autobús en el mero centro de donde viven cada conjunto de niños
4. Como las paradas fueron movidas, cada niño revisa si le queda otra parada más cerca y vuelve a seleccionar su mejor opción.
5. Se repiten los pasos 3 y 4 hasta que ningún niño cambia de parada.
El resultado del algoritmo es una agrupación eficiente de 5 conjuntos de niños que viven cerca el uno del otro. Este algoritmo es relativamente sencillo y también tiene muchísimas aplicaciones. Dependiendo de la aplicación, hay que considerar otros factores. En este ejemplo no estamos considerando el tráfico, la orientación de las calles, y la ruta del autobús. Sólo consideramos las distancias directas entre la parada de autobús y los niños. Una propiedad curiosa y difícil de manejar es que el algoritmo es estocástico. Es decir, dependiendo de la selección al azar en el paso #1, la respuesta del algoritmo puede ser distinta. Eso significa que aunque todos los resultados del algoritmo son buenos, unos resultados son mejores que otros. El parámetro que se tiene que decidir antes de implementar el algoritmo es el valor de k, que representa el número de grupos. En este ejemplo se escogió el valor de 5 para k sin ninguna razón. Quizás funcione mejor el algoritmo si el valor de k es 4 o 6. Un método popular para seleccionar el mejor valor de k se llama el método del codo, pero hay muchas maneras de tomar esta decisión.
La implementación en Python de este algoritmo también es sencilla gracias a scikit-learn.
Aprendizaje Por Refuerzo (Reinforcement Learning)
Muchas empresas como Google, Uber, y Tesla están compitiendo por desplegar vehículos autónomos lo antes posible. Gran parte de la inteligencia que necesitan estos vehículos para lograr autonomía se basa en el aprendizaje por refuerzo.
El objetivo del agente (el vehículo) es llegar a su destino en el menor tiempo posible de forma segura, es decir, sin cometer ninguna violación de transito ni tener ningún accidente. Frecuentemente, el entorno (luces de tránsito, semáforos, tráfico a cada lado del coche, peatones, etc.) es simulado, y mediante ensayo y error, el agente aprende la política óptima para llegar a su destino.
El algoritmo clásico de aprendizaje por refuerzo se llama q-learning o aprendizaje Q. Este algoritmo es un poco más complicado de entender, pero a grandes rasgos se asemeja a la manera en la que se entrena a un cachorro. Cada vez que el cachorro va por la pelota, se la da una recompensa. Cada vez que el cachorro se orina en la alfombra, se le da un castigo. Cuando el cachorro crece a ser un perro, aprende que cuando le lanzan una pelota, la mejor acción es ir por la pelota y traerla. La acción que toma el perro depende de muchas cosas, como de qué tan lejos se lanza la pelota, qué tan generoso esté su amo ese día, o qué tan cansado está el perro.
Volviendo al ejemplo de los coches, un vehículo autónomo está constantemente analizando el estado de su entorno y calculando la acción que maximiza su recompensa. Al cambiar el valor de los castigos (los accidentes) y las recompensas (llegar a tiempo), el vehículo sabrá que velocidad tomar y para dónde girar cuando hay un botellón de tráfico, cuánta precaución tomar cuando está conduciendo en una zona peatonal, o a qué velocidad circular cuando está en una carretera vacía sin límites de velocidad claramente señalados.
Es importante reconocer que el agente, tanto el coche como el perro, puede tomar una acción incorrecta en un entorno en el cual no tiene experiencia. Por ejemplo, el perro quizás no sepa qué hacer si se le tira un globo que flota y el vehículo quizás no sepa qué hacer si se le atraviesa un zorrillo negro en la carretera a media noche. Tanto para los humanos como para las computadoras, la experiencia es fundamental.
La implementación de Python suele ser más compleja, pero así como scikit-learn, hay paquetes que simplifican la implementación y la hacen divertida, como OpenAI gym.
Aprendizaje Profundo (Deep Learning)
Con las innumerables fotos que subimos al internet no sólo podemos reconocer a personas automáticamente y etiquetar a amigos en Facebook, sino también podemos hacer importantes avances científicos. En México, La Comisión Nacional para el Conocimiento y Uso de la Biodiversidad (CONABIO) procesa fotos para construir modelos que ayudan a clasificar cada especie, animal, o vegetal para el Sistema Nacional de Monitoreo de la Biodiversidad. ¿Cómo lo hace? Utilizando el aprendizaje profundo.
Una mariposa monarca o la cara de tu amiga puede estar ubicada en cualquier lugar de una foto. Entonces, para catalogar imágenes, los modelos de aprendizaje profundo analizan cada pequeña sección o ventana de la imagen identificando si esa pequeña sección tiene alguna geometría, color, o algún otro patrón que indica la presencia de la mariposa o la cara. Tras analizar cada pedazo de la imagen, se hace una predicción sobre el contenido de la imagen total.
El aprendizaje profundo se basa en las redes neuronales, un modelo computacional que se parece a la forma en las neuronas interactúan en los cerebros biológicos. A grandes rasgos, cada neurona está conectada a otras neuronas y puede tanto recibir como enviar señales. Las neuronas se organizan en capas, están especializadas a ejecutar una función muy particular, y entre más conexiones tienen entre sí, mayor complejidad tiene la red. Una arquitectura sencilla para aprender y predecir podría consistir en una capa de neuronas de entrada para recibir las características de los datos, otra capa oculta para procesar los datos, y una última capa de salida para enviar las predicciones.
https://upload.wikimedia.org/wikipedia/commons/6/64/RedNeuronalArtificial.png
El aprendizaje profundo puede ser bastante complejo y los avances recientes en este campo se deben a que finalmente el poder de cómputo moderno es capaz de realizar los cálculos necesarios para entrenar a estos modelos en cuestión de minutos o horas, en vez de años. La “profundidad” aludida en el nombre de aprendizaje profundo hace referencia a la estructura de las redes neuronales. El aprendizaje profundo simplemente se refiere a una arquitectura de capas que contiene más de una capa oculta en la red neuronal.
La implementación del aprendizaje profundo puede ser muy complicado, pero aquí también Python nos ofrece paquetes increíbles como TensorFlow y Keras. Una vez más, estos paquetes de Python son capaces de abstraer la complejidad de las matemáticas tras estos algoritmos, haciendo su uso más accesible.
No todo mundo entiende cómo diseñar un motor de combustión interna, pero gran parte de la sociedad sabe cuales son las partes principales de su automóvil y para qué sirve cada parte. El aprendizaje automático y la inteligencia artificial están teniendo un impacto enorme en la sociedad moderna. Para democratizar y descentralizar este poder, es importante que más personas estén familiarizadas con los conceptos básicos de esta tecnología y que más personas exploren cómo se puede implementar esta tecnología para la mejora de la sociedad. Espero que este escrito les haya ayudado, aunque sea un poco.
Si están interesados en aprender más sobre proyectos de datos abiertos que necesitan de hispanohablantes, los invito a visitar los siguientes sitios.
OpenDataMX
Iniciativa Latinoamericana por los Datos Abiertos
|
Detrás de la magia del aprendizaje automático (machine learning)
| 7
|
detrás-de-la-magia-del-aprendizaje-automático-machine-learning-18c528f1901d
|
2018-05-15
|
2018-05-15 02:26:31
|
https://medium.com/s/story/detrás-de-la-magia-del-aprendizaje-automático-machine-learning-18c528f1901d
| false
| 2,178
| null | null | null | null | null | null | null | null | null |
Pyconcharlas
|
pyconcharlas
|
Pyconcharlas
| 0
|
Salvador J Núñez G
|
Data nerd passionate about bringing the world closer together and promoting sustainability. School at @Yale and @UCBerkeley
|
5e4f5f455aa
|
torenunez
| 122
| 249
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-22
|
2018-03-22 15:23:22
|
2018-03-22
|
2018-03-22 15:32:31
| 2
| false
|
en
|
2018-03-22
|
2018-03-22 15:32:31
| 1
|
18c56712829a
| 2.187107
| 2
| 1
| 0
|
The Arcona Ecosystem is a revolutionary product created at the edge of the most advanced and fast-developing technologies of augmented…
| 4
|
Artificial intelligence possibilities for the Arcona ecosystem development
The Arcona Ecosystem is a revolutionary product created at the edge of the most advanced and fast-developing technologies of augmented reality and blockchain. However, describing the whole project, it would be wrong not to mention other technologies — and better say branches of science — leveraged.
We are talking about artificial intelligence due to which the very idea of the Digital Land layer becomes implementable.
Artificial intelligence used to be associated with the robots inevitably conquering the world and destroying the humanity. In fact, that is not what this branch of computer science deals with. Well, at least not only.
Basically, it aims to create devices and software enabling to reproduce or simulate the thought and creative processes which the human intelligence is capable of. The core problems of artificial intelligence include programming for certain traits, like self-learning, knowledge engineering, etc. Keeping in “mind” an enormous volume of data and being able to close the knowledge gap with the help of human-like capacities like common sense, analysis, imagination, reasoning, and problem-solving power that is what artificial intelligence more about. It has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized.
What all this has to do with Arcona?
Arcona is a global ecosystem that will provide millions of end-users with simple and affordable tools for launching authorial AR projects around the world. The ecosystem will be built around a unique technological platform that automatically generates a layer of augmented reality — the Digital Land — anywhere in the world. The most important part of this layer is a set of navigation markers linked to the corresponding locations in the physical world. These markers provide precise remote placement of interactive content and proper displaying of it on the user devices. So, in general, we don’t deal with global replication of the whole Earth surface. To implement our Digital Land layer, we need only to collect and store a number — and a really huge number — of those spatial markers for navigation and remote positioning. This task is less comprehensive and resource intensive.
At the same time, currently existing software solutions can’t provide the required degree of robustness, stability and precision for processing of supposed input data that is represented mostly (but not only) by various images.
So, several core parts of the Digital Land platform are based on our own original know-how algorithmic solutions, in the fields of AI, computational geometry and surface reconstruction, using, as a particular multi-layer convolutional neural networks.This has been possible because of significant investments made by the technology company behind the Arcona project Piligrim XXI in the research of computer vision and artificial intelligence too — besides GIS distribution, AR techs, and 3D simulation.
Discover more on our TG!
|
Artificial intelligence possibilities for the Arcona ecosystem development
| 53
|
artificial-intelligence-possibilities-for-the-arcona-ecosystem-development-18c56712829a
|
2018-04-30
|
2018-04-30 09:49:19
|
https://medium.com/s/story/artificial-intelligence-possibilities-for-the-arcona-ecosystem-development-18c56712829a
| false
| 478
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Arcona AR Ecosystem
|
A platform for remote positioning and management of the augmented reality objects powered by blockchain, more here www.arcona.io.
|
8404e4a19f59
|
arconaico
| 229
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
fc78dab2b103
|
2018-03-08
|
2018-03-08 00:59:22
|
2018-03-08
|
2018-03-08 01:51:58
| 0
| false
|
en
|
2018-07-06
|
2018-07-06 07:09:11
| 0
|
18c66f295d57
| 0.192453
| 0
| 0
| 0
|
3 Things I learnt today:
| 3
|
Homework #1-Vanessa
3 Things I learnt today:
usefulness of data science
common applications of them in our lives (weather forecast, traffic situations, sensors and games etc.)
some basic R codes
1 Question I still have:
Since data science is such a powerful tool, in what ways can it be abused?
|
Homework #1-Vanessa
| 0
|
homework-1-vanessa-18c66f295d57
|
2018-07-06
|
2018-07-06 07:09:11
|
https://medium.com/s/story/homework-1-vanessa-18c66f295d57
| false
| 51
|
A pilot data science hackathon for high school students in Singapore
| null | null | null |
Budding Data Scientists
|
buddingdatascientists@gmail.com
|
budding-data-scientists
|
DATA SCIENCE,EDUCATION,HACKATHONS,SOCIAL CAUSE,HIGH SCHOOL
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
vanessa
| null |
81713f68b12e
|
7114160037
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-27
|
2018-04-27 04:33:30
|
2018-05-02
|
2018-05-02 06:31:47
| 1
| false
|
en
|
2018-05-10
|
2018-05-10 08:09:40
| 5
|
18c833804214
| 0.8
| 0
| 0
| 0
|
Technical Development.
| 4
|
AICHAIN Weekly Report——Week of 2018.04.29
Technical Development.
We completed two POW mining restrictions: first ,one address is required to hold 1,000,000 AIT before mining ; second, the mining coalbase needs signature authentication.
We modified the code of web3js and set up the back-end system. At the same time, we completed the basic control, transfer, contract releasing , contract invocation and other functions.
The wallet mist , dapp-wallet name , and wallet address rules are being modified now .And we have migrated some Ethereum libraries to local projects.
Third party Application
The AIT Wallet feature of Easy Live will soon be open to users. When users trade on OTCBTC Exchange, the value of AIT can be based on circulation costs.
The AIT payment back-end interface and source code have been released to Ba Dou Mao, and the certificate has also been created and sent to Ba Dou Mao.
Connect with us !!
Website
Twitter
Telegram
Facebook
Instagram
|
AICHAIN Weekly Report——Week of 2018.04.29
| 0
|
aichain-weekly-report-week-of-2018-04-29-18c833804214
|
2018-05-10
|
2018-05-10 08:09:42
|
https://medium.com/s/story/aichain-weekly-report-week-of-2018-04-29-18c833804214
| false
| 159
| null | null | null | null | null | null | null | null | null |
Aichain
|
aichain
|
Aichain
| 25
|
AICHAIN
| null |
88561e3ff1eb
|
AICHAIN1
| 47
| 26
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-03
|
2018-09-03 22:26:23
|
2018-09-04
|
2018-09-04 14:01:01
| 1
| true
|
en
|
2018-09-04
|
2018-09-04 14:01:01
| 20
|
18c850544502
| 6.354717
| 4
| 0
| 0
|
In 2013, Oxford University professors Carl Frey and Michael Osborne published a now-seminal study on the likelihood that various…
| 5
|
Humanity’s Search for Meaning in the Age of AI and Automation
Illustration by Roberto Parada
In 2013, Oxford University professors Carl Frey and Michael Osborne published a now-seminal study on the likelihood that various professions would be automated. In that study, medical doctors were identified as some of the least likely to be replaced by AI and automation. Fast forward to 2018, when an AI system called BioMind developed by research teams from the Artificial Intelligence Research Centre for Neurological Disorders at Beijing Tiantan Hospital and Capital Medical University bested 15 of China’s top neurologists at diagnosing brain tumors in 225 cases—and it wasn’t even close. The AI achieved an 87% accuracy rate compared to the physicians’ 66%, and it only took 15 minutes compared to the doctors’ 30.
From healthcare to law to education to finance, across nearly ever white collar sector previously thought to be safe from obsolescence, Artificial Intelligence is making inroads. This of course is on top of the highly publicized advances it has already made in more blue collar industries like trucking, transportation, and manufacturing. My contention: nothing and nobody is safe—and, if it looks like you’re safe, there’s probably a well-funded tech startup eyeing your industry right now.
In fairness, there are a lot of very smart, albeit biased, people who have suggested that this disruption, like the technological revolutions before it, will ultimately create more jobs and prosperity than it eliminates. While this may be the case, it’s not prudent to discard the scenarios we don’t like. Indeed, there have been a deluge of articles and books published on the potential economic impact of AI and automation rendering many or most professions superfluous, with many authors suggesting some form of universal basic income as a remedy to rampant technologically-induced unemployment. Few authors, however, have discussed what role the human being will play in a post-labor society.
Beyond simply needing jobs to earn income, work gives meaning to people’s lives. It provides people with a sense of pride, purpose, identity, and value. It helps people feel like they’re growing, evolving, and improving. It’s critical to a person’s sense of self worth and sense of belonging. Noted 20th century psychiatrist and Holocaust survivor Viktor Frankl even cited creating something new as one of the principal ways people find meaning in life. Even famed psychologist Abraham Maslow saw achieving one’s potential as an avenue to self-actualization, the pinnacle of his Hierarchy of Needs.
How will humanity survive in a post-labor society? To where will it turn for purpose, fulfillment, meaning, and motivation? What will the meaning of life be in our workaholic culture in a world without work?
Although these existential questions may seem highfalutin, humanity’s collective response to them could very well determine the future of our civilization. These questions become even more acute in an economy where the majority of the population has to be supported by a guaranteed basic income. With no job and only a subsistence-level income for discretionary spending, the vast majority of people will have far fewer opportunities to cultivate hobbies and engage in social activities. Beyond being terribly unfulfilling and unproductive, this systemic boredom combined with widespread financial hardship would likely lead to an increase in crime, a link that several peer-reviewed studies have demonstrated.
Beyond discussing the normativity of automation, the technology behind it, and its potential economic impact, we as a society need to start considering its social and existential ramifications as well. We need to start addressing the reality that our social constructs and values are ill-adapted for a world without work and that, unless we evolve our own understanding of life and its meaning, our civilization may devolve into a purposeless amalgam of nihilistic automatons devoid of the very animating drives that make us human.
To avoid this eventuality, we may be forced to find new fonts of fulfillment and new mechanisms for meaning. Below are four potential candidates that seem well positioned to thrive in a post-labor, AI-driven society.
1) Traditional and Techno Religions
It’s extremely likely that a post-labor future would see the revival of religion and spirituality in various forms. These systems have historically provided certainty, purpose, and meaning to people’s lives—often during periods of social and economic turmoil. In the decades ahead, they may be called upon to do so again in a world where meaning is scarce. Religion is often criticized for making outlandish claims to truth and fostering radical behavior. At its best, however, religion helps people better understand themselves and make sense of the universe they live in — two things that may be in high-demand in a post-labor society.
Additionally, with the creation of Way of the Future—the IRS-approved religion dedicated to the worship of Artificial Intelligence founded by autonomous vehicle pioneer Anthony Levandowski—the same forces of Progress (with a capital ‘P’) that are transforming our economy have been made manifest in institutional form. Say what you will about it, Way of the Future brilliantly imbues meaning and value in trends that most find intimidating or foreboding. Rather than fear the advent of Artificial Intelligence, you can now join the movement to help proclaim its coming and prepare the way for its eventual incarnation. What better way to reclaim meaning than to evangelize that which threatens to destroy it? If you can’t beat it, join it.
2) Exploration and Education
Human curiosity is in many ways the traditional driver of social and technological progress. It often has no practical utility, yielding simply self-fulfillment, and thus will be important to making life interesting in a post-labor society. Space exploration, for example, holds particular promise as a new frontier that can direct human curiosity in a productive and meaningful way. The growing interest in space from the private sector and the resurgence in interest from the public sector is heartening. Although robots will continue to be a key component in future missions beyond Earth’s orbit, humanity still aims to manifest its galactic destiny, e.g. Elon Musk. Perhaps someday we may even achieve Star Trek creator Gene Roddenberry’s vision of a utopian future where humans, no longer needing to work, explore space out of pure curiosity. As President John F. Kennedy, channeling the explorer George Mallory, said in his famed “We choose to go to the moon speech” at Rice University, we choose to go to the moon “because it is there.”
Additionally, in a world where technical tasks are managed by Artificial Intelligence, humanities subjects like history, philosophy, literature, art, and theology may once again become the topics de jour. While the study of these humanities have been devalued because of their lack of earning potential, in a largely automated economy, these more abstract subjects may be the only courses of study that have any value at all.
3) Culture and Creativity
Programs can play the piano. Robots can generate a picture. But these are representations, not provocations intended to say something to someone. An enhanced and broad-based appreciation for artistic endeavors that depend on human beings for creation and interpretation would provide a post-labor society with aesthetic value and an avenue for productivity. I am particularly referring to art, in whatever form, that is intellectually demanding — art that requires reflection and introspection to grasp and appreciate. This category also includes food, which is not only an art but one that will still be necessary to human flourishing. Fortunately, food is already a trending topic that will likely continue to play a significant role in a post-labor society—if people can afford it.
An art boom would also create economic value beyond the realms of capital and technology, which are quickly becoming the dominant — and one day, perhaps, the only — economic spheres. Although the art economy already exists, it’s largely confined to relatively small sub-groups, most of which are either financially or socially exclusive. New “games” like blockchain-based CryptoKitties are redefining art and collectibles, and could help introduce new forms of assets that are more resilient to digital headwinds.
4) Escapism and Entertainment
Lastly, and most unfortunately, it seems probable that a large portion of humanity is destined to Netflix and chill their way to irrelevance. In the absence of the structure and purpose that work provides, many people will most certainly elect to spend their spare time spending their basic income on entertainment. While some people might be inclined to believe that the entire objective of pursuing AI and automation is to allow people to veg out—and while there’s nothing inherently wrong with having a good time—it’s probably not a permanent replacement for a career and likely won’t prove to be a longterm solution to a systemic loss of meaning. It’s important we make a concerted effort to push society toward other outlets so that humanity still has the opportunity to feel both productive and valuable.
While the preceding categories are not unproblematic and certainly not comprehensive, they do represent a few areas society might embrace to ensure human existence remains vibrant and meaningful. But, wherever the conversation on a world without work leads us, one thing is clear: preparing for it will require us to dramatically rethink the meaning of life and the role of people on this planet. Centuries from now, when we look back on this moment in history, we may find that this conversation was the most important one we’ve ever had.
About the author
Remington Tonar is a Partner and innovation consultant at Brandsinger, a NYC-based strategy consulting firm with clients ranging from Fortune 500s to fast-growing tech startups. He holds graduate degrees from NYU (Organizational Communication) and Loyola University Chicago (Theology) and is currently writing his PhD dissertation on technological myth.
|
Humanity’s Search for Meaning in the Age of AI and Automation
| 28
|
humanitys-search-for-meaning-in-the-age-of-ai-and-automation-18c850544502
|
2018-09-04
|
2018-09-04 16:17:36
|
https://medium.com/s/story/humanitys-search-for-meaning-in-the-age-of-ai-and-automation-18c850544502
| false
| 1,631
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Remington Tonar
|
Partner and innovation consultant at Brandsinger. Startup advisor. Forbes[dot]com contributor. PhD candidate researching our faith in and fear of technology.
|
a750f9ad96bc
|
AItheist
| 232
| 605
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-11
|
2018-09-11 11:16:20
|
2018-09-07
|
2018-09-07 00:00:00
| 0
| false
|
en
|
2018-09-11
|
2018-09-11 11:16:27
| 2
|
18c921a5e493
| 6.030189
| 0
| 0
| 0
|
Hi, this is Anjuum Khanna, and today we will talk about most commonly used technology disruptor about which we heard a lot. But I always…
| 1
|
Why Artificial Intelligence Matters — Post By Anjuum Khanna
Hi, this is Anjuum Khanna, and today we will talk about most commonly used technology disruptor about which we heard a lot. But I always mention in my blogs that we hear about what technology has done to our world, in the same we should also look forward to unfold future for more opportunities.
So, in Anjuum Khanna’s simple words let’s define AI. As the name speaks it is known as “artificial intelligence” or “machine intelligence”. So Artificial intelligence (AI) is a special feature of machines, in comparison to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. So when a machine is involved in a function like “problem-solving” or “learning” it is also known as artificial intelligence.
As intelligence is a step above the common task so a task which is common is not intelligence. As per me, this word is a word which is full of disputes. So intelligence requires frequent innovation. Let us understand this with a small example. As optical character recognition is frequently excluded from “artificial intelligence”, has become a routine technology. At one point in time, this was the part of Artificial Intelligence. Right now these technologies are defined as artificial intelligence understanding human speech, competing at the highest level in strategic game systems (such as chess), autonomous cars, intelligent routing in the content delivery network and military simulations.
After this explanation let’s go to history, where we will see how and when AI was defined. Back in the 1950s, the fathers of the field Minsky and McCarthy described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task. This is a very simple definition which in Anjuum Khanna’s words communicate that any task which is done with intelligence by the human being is performed by machine can be called as artificial intelligence. So after many disputes in history, we have settled on few criteria like planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity which undoubtedly belong to AI.
As per me (Anjuum Khanna), segregation is required to see the development stages of any product or technology. We can easily define AI into two categories:-
Narrow AI is what we see all around us in computers today. Intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.
Let me explain through few examples this type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.
Artificial general intelligence is a futuristic intelligence and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, but this technology doesn’t exist today.
As per the survey conducted by AI developers in between 2040 & 2050 this technology will start developing and by 2075 will achieve 90% of development. However few groups are still confused about its development as till the time we don’t have the hold on the functionality of the human brain we can’t even start with general intelligence.
For the better understanding of Artificial intelligence as per me (Anjuum Khanna) we should understand few basic technologies of this concept.
Machine Learning: –
In Anjuum Khanna’s definition, machine learning is a computer system which can feed large amounts of data, which is then used by the machine to learn how to carry out a specific task, such as understanding speech or captioning a photograph.
Neural networks:-
These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers.
These are two supplementary topics which need to understand with artificial intelligence. One more of AI research is evolutionary computation. This is basically natural selection and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem. This approach has even been used to help design AI models, effectively using AI to help build AI.
The most important question that comes to our mind is how AI will change this world. And I m (Anjuum Khanna) having my own thought process on the same. So let’s understand this with an example.
All of the major cloud platforms such as Amazon Web Services, Microsoft Azure and Google Cloud Platform provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.
All of the necessary associated infrastructure and services are available from the big three, the cloud-based data stores, capable of holding the vast amount of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualization tools to display the results clearly, and software that simplifies the building of models.
These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.
Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.
For those firms that don’t want to build their own machine learning models but instead want to consume AI-powered, on-demand services — such as voice, vision, and language recognition — Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile, IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from health care to retail, grouping these offerings together under its IBM Watson umbrella — and recently investing $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.
To know more about AI we need to learn through examples. Here are some examples to see its impact on all major industries.
AI in healthcare: — This is the most critical industry as it requires precision and accuracy. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and is capable of responding to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema.
AI in business: — Robotic process automation is being applied to highly repetitive tasks normally performed by humans. Machine learning algorithms are being integrated into analytics and CRM platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers.
AI in education: — AI can automate grading, giving educators more time. AI can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. AI could change where and how students learn, perhaps even replacing some teachers. It can find out the gaps and help in resolving them.
AI in finance: — AI in personal finance applications, such as Mint or Turbo Tax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, the software performs much of the trading on Wall Street.
AI in law:- The discovery process, sifting through documents, in law is often overwhelming for humans. Automating this process is a more efficient use of time. Startups are also building question-and-answer computer assistants that can sift programmed-to-answer questions by examining the taxonomy and ontology associated with a database.
AI in manufacturing: — This is an area that has been at the forefront of incorporating robots into the workflow. Industrial robots used to perform single tasks and were separated from human workers, but as the technology advanced that changed.
Here we have seen many directions in which AI has worked and has improved deliverables. This technology is growing day by day and showing improvement in many fields.
Read More Anjuum Khanna Blog At: http://anjumkhanna.com/
Originally published at anjuumkhanna.in on September 7, 2018.
|
Why Artificial Intelligence Matters — Post By Anjuum Khanna
| 0
|
why-artificial-intelligence-matters-post-by-anjuum-khanna-18c921a5e493
|
2018-09-11
|
2018-09-11 11:16:27
|
https://medium.com/s/story/why-artificial-intelligence-matters-post-by-anjuum-khanna-18c921a5e493
| false
| 1,598
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Anjuum Khanna
|
Visit My website at http://anjuumkhanna.in
|
32fb22be13ea
|
anjum.khanna
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
39ec4794e24d
|
2018-09-17
|
2018-09-17 14:30:44
|
2018-09-17
|
2018-09-17 14:35:46
| 1
| false
|
en
|
2018-09-24
|
2018-09-24 12:40:04
| 0
|
18c927768e8a
| 1.520755
| 0
| 0
| 0
|
Summary
| 1
|
Points of Interest and how to use them in our Analysis
Summary
This section explains how various points of interest around the assets helped boost the performance of the algorithm, as well as give information as to what is of interest and importance near the asset.
During our analysis, the need was created for more attributes to be added to our dataset so that they could help improve the performance of our algorithm. The notion of points of interest came up. In fact, points of interest are a calculated field, meaning that it derives from the existing data.
Actually, the idea behind points of interest is simple and is divided in two steps:
First step
Each asset in our dataset is characterized by its position which is given by its longitude and latitude. That means, that we have the exact position in the map for each asset. In order to calculate the distance to any other point in the map we must transform these coordinates to Euclidean ones. After that, having the position of the asset and the position of the point of interest (i.e. banks, hospitals, supermarkets, ports, museums, bars, restaurants etc.) we can calculate the distance between them. The algorithm can have as an input the distance around which we want to find points of interest. Indeed, the algorithm calculates the points in a radius we define around the asset.
Figure 1 (Graphical Representation)
Second Step
Although we can calculate the necessary points how can we know which are the most interesting ones? Along with finding the points of interest we assigned a score to each point depending its function (for example a hospital near an asset values more than a restaurant near an asset). Through this procedure we can find the quality of the location for each asset.
Outcome
The combination of these two steps is a very interesting procedure. The outcome is very useful for our algorithm because in that way it improves our prediction by giving an extra insight for each asset and its correlation with the asking price.
This post was coauthored with Dimitri Papadouli
Sources: Roboequity archives.
|
Points of Interest and how to use them in our Analysis
| 0
|
points-of-interest-and-how-to-use-them-in-our-analysis-18c927768e8a
|
2018-09-24
|
2018-09-24 12:40:04
|
https://medium.com/s/story/points-of-interest-and-how-to-use-them-in-our-analysis-18c927768e8a
| false
| 350
|
Blog - Robo Private Equity
| null |
roboequity
| null |
roboequity
|
support@roboequity.com
|
roboequity
|
VALUATION,GREECE,MACHINE LEARNING,PRIVATE EQUITY
|
roboequity
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Spyros Savvidis
| null |
1f08ec51808c
|
sabbidis
| 0
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-02
|
2018-04-02 10:46:49
|
2018-04-02
|
2018-04-02 13:30:05
| 10
| false
|
en
|
2018-04-02
|
2018-04-02 17:24:38
| 2
|
18c96687d69
| 4.129245
| 4
| 1
| 0
|
Irony has a way of creeping out on you, and when it does, you can’t chose which hurt more.
| 5
|
A right to privacy and a right to choose.
Irony has a way of creeping out on you, and when it does, you can’t chose which hurt more.
So Facebook, where we loved to put our entire world out, for whoever cared a hoot, was actually foraging for our data wide and deep.
I am not surprised much with the news, but with people’s horror and shock, a little bit of arm chair activism, and its business as usual, scrounging for digital validation.
Like my title conveys, i am all for “Suit yourself,” “If the shoe fits” and “You made your bed, and now lie in it too.”
But what about others who wan’t to get out of this vicious cycle, what about others who are tired of being mined for data, who want to stay off the grid, fly below the radar ?
What does Facebook or for that matter, how do all the other apps fare in terms of security, and data breaches aka being honest about their “policy.”
Well, to start with i was on a self imposed social media cleanse for last 2 years, you can think of it like giving myself moments of solitude and deep retrospection, in the meanwhile i had deactivated my Facebook account, which all of a sudden got life of a psychotic ex, and begging me, “please don’t go..”
So after 2 long years i reactivated my account, during the Cambridge Analytica data breach scandal, with the intention of finally deleting my account.
I wasn’t braced for a tedious labyrinthine user flow designed to keep me and my data stuck in the deepest dungeons of Facebook’s crypts, failing to escape like, Daenerys’s chained dragons.
First you are faced with this, looking for Call to Action to delete your account, well it’s not there, the all seemingly, bright and coy Facebook, felt like a malignant narcissistic Ex, trapping you in its hideout, and never setting you free, having only lured you for simple vain validation, but would retain you much longer than you wish to stay.
General account settings has a temperature scale in its menu, but not deletion.
As i said there is no escape, you can only but deactivate your account. Kindly be informed, deletion is not the same as deactivation. A deliberate flaw in design for obvious reasons.
Deleting your account, means wiping off your entire digital footprint of nearly a better part of decade, but even if you take that step, you can’t, Facebook resorts to tricks like a smooth talking con-man, “Why would you want to go ?” “ Isn’t it good here, look at my tricks?” “Look the Facebook friend, you barely know misses you, they would be shattered with the news of your departure,” it insists.
So i went ahead and reported some hate — stirring content but as usual Facebook fails me, again.
Meanwhile Twitter, the site where everyone is usually angry had done a pretty good job, whenever it came to reporting any tweet or troll.
Two social media giants, and two different approaches, the intent of writing this article is not to rant about Facebook, but to appreciate Twitter for it keeping the community safe and also, keeping a very transparent approach, and not toying with just words like “deactivation and deletion.”
I checked with Instagram, and several apps i knowingly or unknowingly download, due to certain need of the hour, and when i go back and look into the menu, i find no such option of deleting your precious data, once you are done with their services.
So why do UX/UI /Product Designers won’t provide this option to users ? Well, your guess is as good as mine, because they need the numbers, they need the accounts to prove they are not bots but real people handling or have handled their app, and may be to continue mining your data and breaching on your privacy.
They must be laughing all the way to bank, “Do you really believe uninstalling an app, would actually delete your data from their database ?”
There is a paradigm shift in job trend too, Data Mining, Data Scientist, Data Analyst are the buzzword today in startup world, don’t believe me for my words, look at the “Data”.
Image Credit
These days as we seemingly blur the lines of real world and virtual world, we need to up our ante and preserve ourselves being targeted by third-party vendors and to the dark side of internet, where stolen data is freely available, or sold for dime a dozen, because we are yet to realize the our worth, digital validation or not, you are precious.
|
A right to privacy and a right to choose.
| 40
|
a-right-to-privacy-and-a-right-to-choose-18c96687d69
|
2018-05-18
|
2018-05-18 21:12:40
|
https://medium.com/s/story/a-right-to-privacy-and-a-right-to-choose-18c96687d69
| false
| 763
| null | null | null | null | null | null | null | null | null |
Facebook
|
facebook
|
Facebook
| 50,113
|
Girlmeetstartup
|
Doctor. Startup enthu. Wordsmith. Cinephile. BoOkWorm.
|
8d073cfb4608
|
girlmeetstartup
| 13
| 29
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-18
|
2018-04-18 21:17:22
|
2018-04-18
|
2018-04-18 23:07:37
| 1
| false
|
en
|
2018-04-18
|
2018-04-18 23:07:59
| 2
|
18cbee497db7
| 5.101887
| 6
| 0
| 0
|
I’ve hired and employed over 25 recent college grads since I graduated from Stanford Business School in 1990. The consulting practice…
| 5
|
So You Want To Be A Data Analyst?
I’ve hired and employed over 25 recent college grads since I graduated from Stanford Business School in 1990. The consulting practice, Webster Pacific, that my partner and I lead does strategy work that is highly data-intensive. We crunch massive amounts of data to answer our clients’ questions, many of which are in the K-12 private school business. The questions we answer include: Where should we open a new school? Where is the wealth in a market? Why are families leaving our school? How far do applicants typically live from our school? Why are teachers loyal to our school? To answer these questions, we analyze data…mountains of data. We currently employ six data analysts.
So here’s my advice to all recent grads interested in becoming a data analyst:
Be impeccable. When you go through the drive-through at your favorite fast-food restaurant, you’d never be satisfied if they only got 4 out of 5 items right. So you shouldn’t accept anything less in your work. You need to adopt the attitude that you’re striving for perfection. There are a hundred different things in every analysis that can be wrong, so you’d better understand that your goal is nothing less than perfection. Your logic should be perfect; your formulas should be perfect; your presentation should be perfect…and I can guarantee you that you won’t achieve perfection, but if you too easily accept anything less, you will not be successful.
Be iterative. You have to get stuff done, publish your analysis, get feedback and do it again. I call it going around the bases. The best investors are able to think about the numbers on a deal in a minute. Those numbers are the “deal model.” Over the weeks that an investor considers making an investment, they will have their data analysts iterate on that deal model, building up more and more analysis about each point in the deal model. My data analysts do the same kind of thing. There are three important subpoints to iteration: 1) Being iterative and being impeccable are NOT at odds. It’s true that “perfect is the enemy of the good” and that we have to get stuff done, but that doesn’t mean that we should compromise on our logic or on our formulas. Maybe the presentation isn’t perfect at first, but the basics of the logic and the formulas must be air-tight every time you iterate. 2) Stay current. Don’t be wedded to the past. You need to be prepared to change your analysis every time you iterate using the latest base data, strategic context and learning. 3) Compare your new iterations to your old iterations. If the new is different from the old, you must be able to explain that difference.
Be paranoid. You’ve got to always be worried about how your analysis might be wrong. You have to double and triple-check your work and then ask yourself if your outputs make sense. I go over at least one analysis every day and every time I look at an analysis, I ask myself, “does this make sense?” I almost always start with the big picture or the universe. Has the data analyst shown me the big picture? Does it make sense? How has the big picture been broken up into smaller pieces? Does that make sense? If there’s a map and we’re working iteratively, how does the latest map compare to the previous map? Does that make sense? You can see how this goes. If anything doesn’t make sense, then I ask “Why?” Sometimes the explanations make sense, but often they don’t. Frankly, I get a little irritated when a data analyst offers an explanation, without supporting data, for why something doesn’t make sense to me. The data should do the talking, not the analyst. Most of the time when something doesn’t make sense, we have to look at more data to find the explanation and, in looking at the data, we either learn something or we find out that our logic or formulas were wrong.
Be strategic. The data analyst will always be closest to the data and will understand all the details, but that doesn’t mean that he or she shouldn’t think about the big picture. Understanding the big picture depends upon understanding strategy. Every data analyst should read Michael Porter’s seminal article on strategy[1], which essentially says that strategy is the combination of market position and the activities taken in support of that position. The strategy of your organization is the ultimate context of your analysis. You need to understand how all the data you’re crunching fits in with that strategy.
Be a geek. This may seem obvious, but there are some analysts who want to opine on the big picture and not dive into the details about how the software works. In my view, you’ve got to love the technology and, only then, will you drive yourself to understand all the wonky details about how your calculation process works. I’ve seen the “discovery” of more than one native feature in a complicated software product that changed the way that we do our analysis. Be curious; take chances; invest your time in learning. The future portends only more and more features.
Fall in love with the insights, not the numbers. You’re paid to generate credible, data-driven insights. Don’t confuse quantity with quality. My biggest complaint with tools like Tableau is that they make it easy to generate mountains of graphs. That makes an analyst think that they’re doing a great job by generating a bunch of charts when all they’re doing is adding noise. You have to know your story and tell it in the simplest terms possible, while still having all the data necessary to answer questions.
Be aggressive. Software goes at the speed of light. How fast does your brain go? Models can be gnarly and nuanced things. There can be dozens of calculations between fields and millions of records in a database. I’m counting on my analyst to ensure the soundness of the calculations, which means they have to repeatedly walk through those calculations in their minds. I once had a mild-mannered analyst who told me that he had recently reformed himself from his past, where he had gotten a speeding ticket for going 150 mph on a trip from SF to LA. I told him, “Hey, I’m happy for you to take that craving for speed out of your personal life, but at work I actually want to see that.”
Be of service. I believe that we’re all here to be of service to others. A data analyst is no different. Who is your customer? How are you serving them? That’s the easy part. The hard part is really listening hard to what they say and adjusting your analysis after getting their feedback. Our team could not do the analysis we do without the feedback of our clients. They make us better. So service is not just delivering a valuable product; it’s doing the work, delivering it and then listening hard to feedback and adjusting and doing it again.
Make Google your best friend. Google is a monumentally valuable resource and you need to learn how to reflexively use it. Never ask your boss a question that you could have Googled first. Never.
Have fun. Think of your work as an adventure. It’s easy to get serious. It’s harder to keep a spirit of adventure about your work. There will always be challenges and each of those challenges are opportunities for adventure. The fun comes when you face a problem you don’t know how you will solve, but then you figure it out. You solve it and you learn.
[1] https://hbr.org/1996/11/what-is-strategy
|
So You Want To Be A Data Analyst?
| 39
|
so-you-want-to-be-a-data-analyst-18cbee497db7
|
2018-05-04
|
2018-05-04 21:27:22
|
https://medium.com/s/story/so-you-want-to-be-a-data-analyst-18cbee497db7
| false
| 1,299
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Tom Paper
| null |
46a2c9336242
|
tom_57015
| 6
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
634d4b270054
|
2018-06-20
|
2018-06-20 11:54:05
|
2018-06-20
|
2018-06-20 12:04:01
| 1
| false
|
en
|
2018-06-20
|
2018-06-20 12:05:11
| 3
|
18cbf59fb43
| 1.184906
| 0
| 0
| 0
|
The autonomous drones have begun to see use in jobs such as deliveries and NASA and Google have even teamed up to experiment with how…
| 5
|
What AI Is Bringing To The Commercial Drone Industry
The autonomous drones have begun to see use in jobs such as deliveries and NASA and Google have even teamed up to experiment with how autonomous drones compare to human pilots when racing against each other. AI is becoming a force to be reckoned with in the drone industry.
Once a mission is set up in a flight app to capture all relevant areas, the operator can merely press a button and the drone will fly the mission. Many companies are making progress toward having AI pick out common features such as control points. As more data becomes available, technology will rapidly improve, providing significant benefits for many industries.
Improved Efficiency, New Insights and New Revenue Sources are some of the key areas which can be seen with adoption and learning of AI. UAVs and AI will not be replacing human employees; rather, they’ll simply allow humans to take on more challenging tasks that can’t be performed by machine.
Source: https://bit.ly/2L8UKpE
About DEEPAERO
DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain.
DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain.
DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be one stop shop for all products and services for drones.
These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
|
What AI Is Bringing To The Commercial Drone Industry
| 0
|
what-ai-is-bringing-to-the-commercial-drone-industry-18cbf59fb43
|
2018-06-20
|
2018-06-20 12:05:13
|
https://medium.com/s/story/what-ai-is-bringing-to-the-commercial-drone-industry-18cbf59fb43
| false
| 261
|
AI Driven Drone Economy on the Blockchain
| null |
DeepAeroDrones
| null |
DEEPAERODRONES
| null |
deepaerodrones
|
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
|
DeepAeroDrones
|
Deepaero
|
deepaeros
|
Deepaero
| 0
|
DEEP AERO DRONES
| null |
dcef5da6c7fa
|
deepaerodrones
| 277
| 0
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-15
|
2017-12-15 13:07:58
|
2017-12-16
|
2017-12-16 13:42:36
| 0
| false
|
en
|
2017-12-16
|
2017-12-16 18:42:05
| 10
|
18ccd805062d
| 7.283019
| 49
| 0
| 1
|
The (inaugural) Black In AI workshop at N.I.P.S. 2017 was an amazing and inspiring showcase of science, engineering, and social…
| 5
|
My talk at the inaugural Black in AI workshop dinner
The (inaugural) Black In AI workshop at N.I.P.S. 2017 was an amazing and inspiring showcase of science, engineering, and social initiatives; and it was an honor to be invited to speak at the dinner.
I’m excited to see what this wonderfully talented community will produce in the coming years.
For folks who were unable to attend the event in person, a transcript of the speech I gave is below.
Transcript:
Thanks to the organizers for putting this wonderful event together, and thanks to all the great presenters for sharing their work with us today. It’s been super interesting.
So, I have to be honest: I prefer small-group discussions to addressing large crowds. And moreover, when I was asked to be one of the dinner speakers I realized that I’ve given lots of academic talks and business pitches, but I actually haven’t done too many personal presentations like this.
In the latter part of my talk, I’d like to discuss some of my thoughts on what the Black in AI organization means to me, and on some of the opportunities that we have together.
But since there are some of you that I haven’t yet met, I thought I’d start by telling you a bit about my myself, and my background and research focus in AI.
I’m currently a Research Scientist at DeepMind in London, where I work on a broad range of machine learning approaches, with the ultimate goal of developing Artificial General Intelligence.
Most recently I co-authored a tech report titled “Population Based Training Of Neural Networks” that we pushed to arxiv last week. And the short-summary is that we propose a new training method for neural nets that combines ideas from evolutionary optimization with gradient-based optimization in order to perform dynamic hyper-parameter tuning and model selection.
So far we’ve seen it give consistent improvements to training across a variety of domains. And in particular, it seems to add huge benefits to systems with complicated learning dynamics — such as Deep Reinforcement Learning and GANs. And I think one of the reasons that it’s so beneficial is that it’s able to discover an adaptive schedule of hyperparameters, rather than just a single fixed setting.
You’re welcome to catch up with me later if that’s something that sounds interesting or useful to your work.
In terms of my background: I grew up in a working-class family in a small town in the northwest of England. But since then I’ve called many places home — including Cambridge, London, Toronto, Montreal, and San Francisco.
My first NIPS conference was in 2000 or 2001… when it was much smaller — probably only around 600 people — and of that maybe just one or two other black folks. It’s amazing to see how much the field has grown since then. And as I look around the room, it’s awesome to see so many fantastic black researchers here today.
In terms of career highlights so far, there are probably two in particular that stand out:
The first was a paper that I wrote with Geoff Hinton and Yee Whye Teh in 2005, while I was a post-doc in Geoff’s lab in Toronto. It was titled “A fast learning algorithm for deep belief networks” and in it we demonstrated that it was possible to use layer-wise unsupervised training to help build deep, generative neural nets. With the computational advances that have happened in the meantime, those models we were using back then might not be considered especially big or deep nowadays — but at the time it was a significant step forward. And in fact, that work with Geoff’s group, along with papers from the labs of Yann LeCun and Yoshua Bengio, played a big role in kickstarting the deep learning revolution that we see around us today.
A second highlight started a few years later in 2009 — at a machine learning conference in Montreal. I ran into a friend, Bobby Jaros, and we decided to start a company together (LookFlow) to productize some of the deep learning methods I’d developed in Toronto. It was hard work, but over the next four years, I learned a tonne about business, and about production engineering, and also about myself. We built an awesome team, and some awesome products, and in 2013 we accepted an offer from Yahoo to buy the company and its IP — which then ended up powering some of Yahoo and Flickr’s search engines.
Okay — so now that you know a little bit about me, I’d like to take a moment to reflect on what brings us here together.
Black in AI is defined as “a place for sharing ideas, fostering collaborations and discussing initiatives to increase the presence of Black people in the field of Artificial Intelligence.”
And as we’ve seen today — it’s clear that we all share a passion for science and technology.
But on the surface… it might seem a little odd to have an event for a group whose defining characteristics are the amount of melanin they possess and some other superficial physical traits.
I mean in this group we have people from over 20 different countries and 5 different continents. We have a whole range of ethnic and cultural backgrounds. A whole range of native languages. A whole range of sexualities, genders, class privileges, physical abilities, and so on.
For instance — I myself am British by nationality. European, Canadian, and Californian in my acculturation. Devoutly atheistic in my religious beliefs. And somewhere in the middle in terms of the spectrum of sexuality and gender identity.
So, given that spread of underlying diversity, does it really make sense to have a group that collects us together primarily based on properties like the color of our skin or the curl of our hair?
The answer, of course, is “Yes! … It absolutely makes sense … and there’s a massive need for this group”.
Because, in addition to our research interests, there other key factors that bring us together. Some of which are:
the challenges black people face in our careers and in everyday life due to the systemic and historically rooted issues that still permeate today;
the challenges black people face due to some of the biases, inaccurate prejudices, and stereotypes that some folks hold about black people.
Now, there are far too many examples of these sorts of things that I could point to. So to throw out just a few:
In the US for example, you’re more than twice as likely to come from a poor background if you’re black than if you’re white [1,2]. If you’re black in the UK, you’re six times more likely to be stopped and searched by the police than your white counterpart [3].
Then there are things like the immigration challenges many of you faced simply to be here today; and not forgetting the folks who are not even here, due to their visas not being approved.
And then on a smaller scale, there’s the more commonplace experience of having weird interactions in everyday life or at work that are just hard to pin down. For instance, in a previous job, there were several times when interactions with Research Executives didn’t go the way I expected, and I was left second guessing whether there was something I could’ve done better or differently, whether they were simply being obtuse or dumb, or whether there was some kind of racial bias at play… And that kind of thing can be really insidious. Because it’s subtle, and you never really know for any given interaction what the factors were. However, at the same time, in aggregate you know that kind of bias almost certainly has negatively affected you at some point.
Fortunately, we’re all also brought here together by a shared commitment to tackling these problems, and by a desire to ensure these issues aren’t further ingrained by the powerful technologies we’re developing
And I actually think it’s also a real strength of the Black In AI community, that these shared challenges really do bring together such an otherwise diverse collection of individuals. And the diversity within this group is something we can, and should, celebrate and leverage.
Now obviously one of the really important things we can do as a community is learn from each other and support each other. And we’ve seen so much of that today, and in the run up to the event. For me, that’s actually one of the best reasons to come to conferences in the first place. The opportunity to develop a network of personal connections and, moreover, friendships.
And then with that community, to be able to both ask for and offer things like feedback, help, and mentorship. And so I really hope that’s something that folks come away from this workshop and NIPS overall with — lots of new personal connections that will last going forwards.
But this group can also be so much more than a mutual support network.
For me, there are a couple of key actions we can take:
We can use our collective voices to amplify the successes of black people in AI. And in doing so, this’ll go some way to help counter the stereotypes that exist in some peoples’ minds. I think just by giving greater visibility to the Black in AI community, we have a great chance to create inspiring examples and existence proofs for others. And hopefully this’ll then encourage more black people from all over the world to participate in the field of AI.
We can also support capacity building projects in geographies and communities where the A.I. ecosystem and talent pool might currently be under-developed relative to its true potential. Things like the Deep Learning Indaba in South Africa are a great example of that. Or organizations like “Black Girls Code” or “The Hidden Genius Project” — which aim to increase STEM participation by underprivileged black youths.
And we can also use our diverse backgrounds to inject broader perspectives into the AI field as a whole. Hopefully, by doing so, we can do a better job at ensuring that the AI applications and systems that we develop don’t inherit some of the problematic biases that are still present in society at large, and instead help them become fairer, and more transparent and accountable.
And lastly, I think we all have a responsibility to steer the overall field towards problems and products that have the potential to benefit the whole of humanity, and not just the wealthy or privileged few.
In terms of my current work, I think that responsibly developed and deployed AGI has the prospect to be one of the most positive and impactful technological developments in the whole of human history. The mission statement at DeepMind is: “Solve intelligence. And then use it to make the world a better place.” And that’s something I really believe in.
And so while I’m here today in a personal capacity, I would like to say that if that mission is something that motivates you too, then please consider applying to join us at DeepMind. It’d be great to have more of the Black in AI community working with us.
And finally in closing: I’d like to thank you for your attention. I’d like thank again the presenters who shared their work with us today. And I’d like to give a special thanks to the organizers for all their hard work in making this event happen!
Cheers!
|
My talk at the inaugural Black in AI workshop dinner
| 291
|
my-talk-at-the-inaugural-black-in-ai-workshop-dinner-18ccd805062d
|
2018-05-03
|
2018-05-03 04:06:25
|
https://medium.com/s/story/my-talk-at-the-inaugural-black-in-ai-workshop-dinner-18ccd805062d
| false
| 1,930
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Simon Osindero
| null |
5d40c1b7ed33
|
sindero
| 324
| 301
| 20,181,104
| null | null | null | null | null | null |
0
|
from comet_ml import Experiment
from keras.models import Sequential
from keras.layers import Dense, Dropout
experiment = Experiment(api_key="YOUR_API_KEY", project_name="my project name", auto_param_logging=False)
batch_size = 128
num_classes = 10
epochs = 20
params={ "batch_size":batch_size, "epochs":epochs, "num_classes":num_classes}
experiment.log_multiple_params(params)
| 4
|
7f60cf5620c9
|
2018-04-13
|
2018-04-13 12:25:03
|
2018-04-13
|
2018-04-13 12:30:31
| 3
| false
|
en
|
2018-04-13
|
2018-04-13 12:30:31
| 1
|
18cdb3b418c1
| 2.40283
| 10
| 0
| 0
|
Welcome to Technology Fridays! Today we would like to explore a brand new platform that just launched to address one of the biggest…
| 3
|
Technology Fridays: Comet.ML Wants to be the Google Analytics of the Deep Learning World
Welcome to Technology Fridays! Today we would like to explore a brand new platform that just launched to address one of the biggest challenges in machine intelligence applications. If you’ve ever worked in a deep learning project in the real world, you probably found yourself tied in a virtually never ending cycle of testing, regularization, optimization and constant improvement of a model.
The continuously evolving nature of its lifecycle is one of the key characteristics that makes machine learning applications fundamentally different from other types of software systems. In machine learning scenarios, data scientists rarely stop conducting experiments targeted to optimize and improve the behavior of models. However, the toolsets for performing that level of experimentation haven’t evolved at the pace of the corresponding deep learning runtimes and platforms. Recently, a new startup called Comet.ml launched with the promise of allowing data scientists to monitor and optimize machine learning models across different technology stacks.
The experience of using Comet.ml resembles the model adopted by technologies like Google Analytics that made them the standard for monitoring and testing web applications. Similar to Google Analytics, data scientists can start using Comet.ml by embedding a specific tracking code as part of their machine learning model. The scrip will track the specific behavior of the model including important elements such as hyperparameters and other relevant metrics.
Using the Comet.ml portal rapidly perform experiments on a specific model by tuning hyperparameters. The UI allows data scientists to visualize the results of experiments and compare the results based on specific hyperparameters. I can imagine this might sounds like a trivial problem but its one of the biggest nightmares in machine learning applications in the real world.
The Comet.ml platform automatically integrates with several deep learning frameworks such as Keras, TensorFlow, PyTorch, Scikit-Learn and several others. Data scientists can download the SDK for their specific runtime and start tracking any model The code for achieving this is fundamentally simple. The following example illustrates a Keras model that is being monitored using Comet.ml.
Comet.ml complements its robust machine learning monitoring and optimization capabilities with simple collaboration features that enable data scientists to provide feedback and cooperate on the optimization of specific machine learning programs. The platform is also capable of providing intelligent recommendations for optimizing and regularizing models based on their runtime behavior.
Competition?
Comet.ml solves a very challenging aspect of machine learning solutions. However, the platform is not entering the market without competition. Cloud platforms such as Azure ML, AWS SageMaker or Google Cloud ML include their own toolset for monitoring and optimizing machine learning models. Simiarly, startups such as Floyd or BitFusion can also be considered as competitors.
|
Technology Fridays: Comet.ML Wants to be the Google Analytics of the Deep Learning World
| 45
|
technology-fridays-comet-ml-wants-to-be-the-google-analytics-of-the-deep-learning-world-18cdb3b418c1
|
2018-07-05
|
2018-07-05 20:06:21
|
https://medium.com/s/story/technology-fridays-comet-ml-wants-to-be-the-google-analytics-of-the-deep-learning-world-18cdb3b418c1
| false
| 491
|
Sharing concepts, ideas, and codes.
|
towardsdatascience.com
|
towardsdatascience
| null |
Towards Data Science
| null |
towards-data-science
|
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
|
TDataScience
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Jesus Rodriguez
|
Chief Scientist, Managing Partner at Invector Labs, Angel Investor, Columnist at CIO.com, Board Member of Several Software Companies
|
46674a2c9422
|
jrodthoughts
| 4,381
| 551
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-30
|
2018-05-30 03:18:57
|
2018-05-30
|
2018-05-30 07:08:56
| 5
| false
|
en
|
2018-05-30
|
2018-05-30 07:13:07
| 3
|
18cf3607d2af
| 3.316352
| 5
| 0
| 0
|
This article is part of the Academic Alibaba series and is taken from the paper entitled “Virtual-Taobao: Virtualizing Real-world Online…
| 5
|
Virtualizing China’s Biggest Online Marketplace for Training Reinforcement Learning
This article is part of the Academic Alibaba series and is taken from the paper entitled “Virtual-Taobao: Virtualizing Real-world Online Retail Environment for Reinforcement Learning.” by Jing-Cheng Shi, Yang Yu, Qing Da, Shi-Yong Chen and An-Xiang Zeng. The full paper can be read here.
Reinforcement learning methods offer huge potential for complex user environments, but they are difficult to apply in many real-world settings because they require training in a live system. This risks compromising system functionality or user experience, putting money, time, and — in the case of hospital networks — even people’s lives and well being at stake.
While fields such as gaming, robotics, and natural language processing have been relatively receptive to unsupervised approaches, large online systems have been limited to supervised approaches. Supervised machine learning poses fewer immediate risks to the system, but is incapable of learning the sequential decision making needed to maximize long-term rewards.
This was the dilemma faced by Alibaba when looking to use machine learning to improve the commodity search function of their e-commerce platform Taobao. Their solution was to simulate the live-system testing environment by building “Virtual Taobao” — a like-for-like replica of the platform, complete with virtual users created from real historical data. Results so far have shown that the simulation can train significantly better search engine policies than the previously used supervised learning approaches, while real-life Taobao is shielded from the adverse effects of training.
The idea of using simulations to provide a realistic but safe training environment for reinforcement learning is not completely new. Google, for instance, applied this approach for its data center cooling facilities, using neural networks that approximated the real system’s dynamics.
Like Google’s cooling facilities, Taobao cannot afford to be exposed to unpredictable live testing even for a short time. During normal operation, Taobao’s search engine deals with millisecond-level responses to billions of commodities, sorting through potential outputs to generate a page view (PV) to show the customer. The customer’s subsequent behavior then provides a feedback signal, such as making a purchase, moving to the next page, or leaving the site. Based on the results generated by a given PV, the search engine will update its decision policy for that scenario, contributing to the overall evolution of the system’s strategy for displaying PVs.
Taobao search in engine view and in customer view
Rather than risk impacting shopping experiences on Taobao, Alibaba applied two adapted simulation frameworks and real historical Taobao data to create a parallel platform closely resembling the original. The developers first adapted a special generative adversarial network (GAN) to differentiate between real and simulated customer behavior inputs. The resulting generative adversarial network for simulating distribution (GAN-SD) enabled them to feed the search engines a more realistic pool of simulated searches and result responses than would ordinarily be possible.
Having set up a desired customer distribution, the team then used a multi-agent adversarial imitation learning (MAIL) mechanism to train simulated customer policies and engine policies against each other in a zero-sum game framework, ensuring that the customer policy would be generalizable for different engine policies.
With the introduction of the GAN-SD and MAIL simulation tools, Alibaba was able to imitate the spontaneity of real-time Taobao activity while deliberately training its engines to do better in such scenarios. Based on empirical measurements of total turnover, total volume, and rate of purchase by page, reinforcement learning on Virtual Taobao demonstrated a 3% improvement in strategy over traditional supervised learning methods, with a better generalization ability over time than pure behavior cloning approaches to simulation.
The customer distributions between Taobao and the Virtual Taobao
The R2P distributions between Taobao and the Virtual Taobao
These results suggest that simulation may be a useful means of applying reinforcement learning in other situations where complex physical environments have traditionally prohibited direct application.
The full paper can be read here.
Alibaba Tech
First-hand and in-depth information about Alibaba’s latest technology → Search “Alibaba Tech” on Facebook
|
Virtualizing China’s Biggest Online Marketplace for Training Reinforcement Learning
| 5
|
virtualizing-chinas-biggest-online-marketplace-for-reinforcement-learning-18cf3607d2af
|
2018-06-06
|
2018-06-06 17:24:32
|
https://medium.com/s/story/virtualizing-chinas-biggest-online-marketplace-for-reinforcement-learning-18cf3607d2af
| false
| 658
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Alibaba Tech
|
First-hand & in-depth information about Alibaba's tech innovation in Artificial Intelligence, Big Data & Computer Engineering. Follow us on Facebook!
|
69f6dde768a1
|
alitech_2017
| 1,157
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
eed645ed6ed2
|
2018-02-06
|
2018-02-06 14:05:44
|
2018-02-06
|
2018-02-06 14:07:10
| 1
| false
|
en
|
2018-02-22
|
2018-02-22 12:14:43
| 2
|
18cfafd43b4e
| 1.381132
| 1
| 0
| 0
|
This year’s CES exhibition witnessed the birth of a brand-new device category — the smart display. What does it do and why would you need…
| 5
|
Smart Displays: Are They the Next “Must-Have” Gadget?
This year’s CES exhibition witnessed the birth of a brand-new device category — the smart display. What does it do and why would you need one?
If 2017 taught us anything, it’s that smart speakers are fantastic — right up until you need some visual information to go with their answers, that is. To fix this issue, Amazon launched the Echo Show and Spot, and now Google is on the case, working with JBL, Lenovo, LG, and Sony to launch their own “smart display” later this year.
How will it work?
Just like a smart speaker, basically. You ask the Google Assistant questions and get a response. But according to the information put out by Google at CES, Google Assistant smart displays can show the time, the date, the weather, recipes, your calendar appointments, YouTube videos, Google Maps, Google Music, Google Photos, Google Duo video calls, and Nest cam video feeds. There’s no word yet on what the OS is, but since it’s Google, Android is a pretty safe bet!
Am I going to want one?
If this video is any indication, then yeah, you probably will. “Personally, I think these “smart displays” are the future of Google Assistant, and I don’t think it’s going to be worth buying anything else going forward” says tech blogger Ben Schoon in a post from last month. “Speakers are great, but when you slap on this software and a display, it becomes something truly magical.”
The ITMAGINATION difference
This is the time to get in on the ground floor of the smart display market. ITMAGINATION has a track record of smart speaker development and integration, including new feature and capability development and extended language support, and we’re excited to see what we can bring to the smart display market as well.
Say it. See it. Done.
|
Smart Displays: Are They the Next “Must-Have” Gadget?
| 1
|
smart-displays-are-they-the-next-must-have-gadget-18cfafd43b4e
|
2018-02-22
|
2018-02-22 12:14:44
|
https://medium.com/s/story/smart-displays-are-they-the-next-must-have-gadget-18cfafd43b4e
| false
| 313
|
These are the locations of our clients for whom we are delivering projects. Join ITMAGINATION to deliver Innovation globally.
| null |
ITMAGINATION
| null |
ITMAGINATION
|
office@itmagination.com
|
itmagination
|
INFORMATION TECHNOLOGY,INNOVATION,SOFTWARE DEVELOPMENT,OUTSOURCING,BIG DATA
|
itmagination
|
Google
|
google
|
Google
| 35,754
|
ITMAGINATION
|
We help our clients innovate by providing professional software development services, data analytics, and outsourcing.
|
38551a26392e
|
ITMAGINATION
| 6
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-12
|
2018-09-12 11:16:40
|
2018-09-12
|
2018-09-12 11:22:26
| 1
| false
|
en
|
2018-09-12
|
2018-09-12 11:22:39
| 12
|
18d01ac2724a
| 0.430189
| 1
| 0
| 0
|
Please welcome our Bitcointalk ANN thread in Chinese:
| 5
|
ServAdvisor Bitcointalk ANN thread in Chinese
Please welcome our Bitcointalk ANN thread in Chinese:
https://bitcointalk.org/index.php?topic=5023133.0.new#new
For further information regarding the ServAdvisor and the ServCoin token crowdsale, please visit http://www.ServAdvisor.co or contact info@ServAdvisor.co.
Official website: www.ServAdvisor.co
Official youtube: https://www.youtube.com/watch?v=3sskHGw9c-c
Official Twitter Account: https://twitter.com/ServAdvisor
Official Telegram: https://t.me/ServAdvisorOfficial
Official Medium: https://medium.com/@ServAdvisor
Official FB: https://www.facebook.com/ServAdvisor
Official GitHub: https://github.com/ServAdvisor
Official Bitcointalk.org: https://bitcointalk.org/index.php?topic=3903200
Official Reddit: https://www.reddit.com/user/ServAdvisor
|
ServAdvisor Bitcointalk ANN thread in Chinese
| 49
|
servadvisor-bitcointalk-ann-thread-in-arabic-18d01ac2724a
|
2018-09-12
|
2018-09-12 11:22:39
|
https://medium.com/s/story/servadvisor-bitcointalk-ann-thread-in-arabic-18d01ac2724a
| false
| 61
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
ServAdvisor
| null |
64017f48c363
|
ServAdvisor
| 32
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-15
|
2018-04-15 05:47:55
|
2018-04-15
|
2018-04-15 05:55:24
| 0
| false
|
en
|
2018-04-15
|
2018-04-15 05:55:24
| 1
|
18d0a92b6bd6
| 1.109434
| 0
| 0
| 0
|
Merged early 2016, google’s sports in synthetic intelligence and on line research are once more separated, with a alternate in management…
| 3
|
GOOGLE REINSTATES A SEPARATION BETWEEN AI AND SEARCH ENGINE
Merged early 2016, google’s sports in synthetic intelligence and on line research are once more separated, with a alternate in management. Years after merging, google again separates its activities in synthetic intelligence and on line studies.
The employer of control evolves in parallel: John Giannandrea leaves workplace , changed by way of the duo Ben gomes — Jeff dean. John Giannandrea became accountable for AIat Google when, in early 2016, he took over the search engine element. This advertising observed the departure of Amit Singhal , boss of the search .
The pinnacle of the quest entity is entrusted to Ben Gomes , who presently directs the engineering. Netscape spent between 1994 and 1999 (and earlier at silicon pics, for the period 1990–1992), he had arrived at Google in 2010 after the acquisition of his employer Metaweb.
The semantic search technology he developed is at the origin of the Knowledge Graph , which offers results directly in addition to the lists of links.
The orders for the AI activity go to Jeff Dean . Veteran of the Google House (since 1999), he co-founded the branch that still leads deep learning research within the group . He has also worked on several generations of the Googlebot indexing robot, on the first version of the advertising network of the firm or on Google News.
He is also responsible for the RankBrain project , named after this layer of artificial intelligence deployed in Google Search. The idea is to split the requests into units of meaning represented as vector interpretable by the computer. This makes it possible to determine words of similar meaning and to propose adequate contents although they do not necessarily contain the searched terms.
|
GOOGLE REINSTATES A SEPARATION BETWEEN AI AND SEARCH ENGINE
| 0
|
google-reinstates-a-separation-between-ai-and-search-engine-18d0a92b6bd6
|
2018-04-15
|
2018-04-15 05:55:25
|
https://medium.com/s/story/google-reinstates-a-separation-between-ai-and-search-engine-18d0a92b6bd6
| false
| 294
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Technoblogy Web
| null |
c5e41abe0074
|
technoblogyweb
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-09
|
2018-03-09 01:52:27
|
2018-03-10
|
2018-03-10 10:40:47
| 3
| false
|
en
|
2018-03-10
|
2018-03-10 20:00:19
| 5
|
18d0e018fe45
| 5.195283
| 33
| 1
| 0
|
Today we open sourced all of the products we have worked on last year, and in this post I want to quickly highlight what are the products…
| 3
|
Hyperpilot open sourced 100% of its products
Today we open sourced all of the products we have worked on last year, and in this post I want to quickly highlight what are the products we worked on.
Hyperpilot remained stealth mode for the entire last year, so let me explain a bit what we were going after. Our mission is to bring intelligence to infrastructure that can drastically improve efficiency and performance. We see devops and system engineers are constantly challenged to make lots of choices around their container infrastructure with limited information and using a very manual process. These choices will include all the way from vm configs (instance type, region, etc), container configs (resource request/limit, count, affinity, etc) to app level configuration choices (jvm, etc). Operators or developers often make a static choice and most of the future maintainers has no idea why a certain choice was made. Also worse, is that operators tend to overprovision in all possible ways that leads to very inefficient usage of their infrastructure. Therefore, we worked on three products that was looking to help operators to have the tools to continuously help them make better choices, and in the future automate these recommendations in their environment. In the following sections I will explain in high level three products that we worked on that is now freely available to use and contribute to.
HyperConfig: intelligent configuration search
If you have used the cloud, deployed docker containers with Kubernetes or Mesos, then you know that one of the first problem you face is to figure out what’s the best resource configuration of each component you chose. For example, what VM instance type should I use? How many nodes should I deploy to? What container cpu and memory request/limit should I configure? All of these questions imply different trade offs between cost and performance. Using VM size as an concrete example, picking a large VM instance type will cost a lot more, but may give you a better application performance. Choosing a VM that is too small, leads to performance and SLA problems. And it is also not obvious what the correct choice is, since if you take a MySQL tpcc benchmark and run through every AWS instance type, the best performance and cost ratio choice doesn’t follow a linear and predictable pattern:
Also doing an exhaustive search is very time and cost prohibitive. Luckily this isn’t a brand new problem and there are quite a few research solutions out there, but there isn’t a generic open source solution that we can find that supports a generic load test output.
Therefore, we created hyperconfig that is inspired by the work from CherryPick, and we made it to suggest a set of AWS instance types for different criteria based on a generic load test result.
Instead of exhaustively searching through every instance type, HyperConfig uses a well known optimization technique called Bayesian Optimization, to find a near optimal results with only running a lot fewer sample points. And since the samples can also be ran in parallel, it greatly reduces the time and cost to take to find a near optimal instance type. Note that HyperConfig won’t be able to guarantee find the most optimal one, but in practice we found it to find a close enough selection.
For more information about how to run our demo and details about the code, please refer to sizing section of the analyzer.
HyperPath: resource bottleneck analysis
One common problem we see operators have, is to be able to find the root cause of a performance problem that shows up in their Kubernetes cluster. This is a very difficult task as the source of the performance issue can come from many different sources of your infrastructure. However, if we narrow down the problems to just resource bottlenecks, then one can develop a system that attempts to diagnosis what known resource bottleneck does an application experiencing when the application performance suffers. HyperPath is focusing on detecting cpu/memory/network/IO bottlenecks and also diagnosing if the problem comes from container(s) limit or a node limit.
How HyperPath roughly works is that it assumes it can access both application SLO metric (e.g: 95th percentile latency) and also resource metrics which includes container cpu/men/net/IO and node level similar metrics. With these data source, it will attempt to correlate what resource metrics that has exceeded some threshold is the most likely root cause for the change in application metric, and rank the top few metrics with the highest correlation score.
In the following demo, you will see that we are able to detect CPU and other resource bottleneck that occur during app latency has exceeded its SLO threshold:
For more information and source code, please refer to the diagnosis section of the analyzer.
Best effort controller: oversubscription for best effort jobs on Kubernetes
It is a well known fact that all operators over provision resources for their applications. One of the most important reason operators do so intentionally is to accommodate spikes that can occur either unpredictably or infrequently. This also leads low cluster utilization as peak usage doesn’t happen all the time. We can’t simply allocate too small of a amount of resources and hope to use the cloud or container orchestrator auto scalers as they might take minute(s) to scale backup during spike traffic. How do we utilize the overprovisioned resources then? One way is to launch best effort (BE) workloads next to them, and have a way to make sure these workloads can be controlled or killed when spike events happen in a timely manner.
Christos Kozyrakis and David Lo’s work around Heracles was aiming to solve this problem, and have also evaluated their work in a Google search workload. For more information about the details of how it works, please refer to the original paper. But at a very high level, it creates a node controller on every node, and this controller has a sub controller for each resource (cpu, memory, network, IO, caching, etc) that watches its utilization. It then uses the main application SLO metric as a input signal to determine when and how to scale resources for each workload. When app metric is performing greatly, we can start giving more resources to the BE jobs, and inversely when app metrics suffers.
At Hyperpilot, we developed the Heracles algorithm and also made it work on top of Kubernetes. In the following video you can see the BE controller in action when we run Spark with BestEffort QoS class next to a microservice.
When Spark is running next to the microservice without the BE controller, you will see latency hike due to the interference from the Spark Job. Notice that even setting BestEffort to the Spark job doesn’t avoid the interference issue as resources are contended in different resources than what Kubernetes is monitoring. With the BE controller enabled, we start seeing the latency to be controlled within the SLO threshold and BE jobs can still make progress without simply being killed. In this demo we can see a 2–3x utilization increase.
For more information about the codebase, please refer to here.
I hope these projects can show how leveraging data from kubernetes and application can make a difference in terms of cost and performance.
Feel free to reach out if you have any questions to me (tim at hyperpilot dot io).
|
Hyperpilot open sourced 100% of its products
| 199
|
hyperpilot-open-sourced-100-of-its-products-18d0e018fe45
|
2018-05-13
|
2018-05-13 00:51:51
|
https://medium.com/s/story/hyperpilot-open-sourced-100-of-its-products-18d0e018fe45
| false
| 1,231
| null | null | null | null | null | null | null | null | null |
Docker
|
docker
|
Docker
| 13,343
|
Timothy Chen
|
Entrepreneur focusing on solving problems with infrastructure and data
|
7a28b9a9110b
|
tnachen
| 335
| 327
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f7dbad180da4
|
2017-10-12
|
2017-10-12 00:22:25
|
2017-10-12
|
2017-10-12 00:53:54
| 11
| false
|
en
|
2017-10-13
|
2017-10-13 20:05:31
| 18
|
18d34ca58fa6
| 7.081132
| 37
| 3
| 0
|
Are you confused about which SQL Query tool is best for your organization?
| 5
|
6 SQL Data Warehouse Solutions For Big Data Analysts (With Their Pros And Cons)
Are you confused about which SQL Query tool is best for your organization?
In this technically dynamic world where data is king, many data analysts are faced with making the difficult choice of what querying engines to employ.
Beyond that, they are tasked with sifting through the data storage systems these engines support, and weighing what they stand to gain (or lose as the case may be). How do data analysts cope?
Enter, stage left: Andrew is a big data engineer at CDS Solutions. He needs to employ an optimized system to manage the increased data demands his company is experiencing. And he has to come up with a near perfect proposition in his next meeting with his superior.
What can he do to make himself look good in front of his employer at this point? Nothing would solve Andrew’s dilemma more quickly than a detailed comparison of the data querying tools currently popular in the big data analytics sphere.
Without further ado, let’s make it easier for you to decide what’s right for your company.
First on our list of big data tools is Cloudera Impala.
1. Cloudera Impala
Impala is a real time, Apache licensed, open source, massively parallel processing (MPP) SQL on Hadoop querying engine written in C++ programming language and currently shipped by Cloudera, MapR, Amazon and Oracle.
Pros
Impala provides real time querying on data stored on Hadoop clusters.
It’s fast. The fact that it doesn’t use MapReduce to execute its queries makes it faster than Hive.
It uses HiveQL and SQL-92, making it easy for data analysts coming from a RDBMS (Relational DataBase Management Systems) background to understand and use.
Enterprise installation is supported because it is backed by Cloudera — an enterprise big data vendor.
Cons
Impala only has support for Parquet, RCFile, SequenceFIle, and Avro file formats. So if your data is in ORC format, you will be faced with a tough job transitioning your data.
Supports only Cloudera’s CDH, MapR, and AWS platforms.
2. Apache Hive
Hive is an Apache licensed, open-source query engine written in Java programming language used for summarizing, analyzing and querying data stored on Hadoop. Though it was initially introduced by Facebook, it was later open-sourced.
Image Source — Apache Hive Architecture
Pros
It is stable as it has been around for over five years.
Hive is also open-source with a great community should you need help using it.
It uses HiveQL, a SQL-like querying language which can be easily understood by RDBMS experts.
Supports Text File, RCFile, SequenceFile, ORC, Parquet, and Avro file formats.
Cons
Hive relies on MapReduce to execute queries which makes it relatively slow compared to querying engines like Cloudera Impala, Spark or Presto.
Hive only supports structured data. So if your data is largely unstructured, Hive isn’t an option.
Another relevant SQL querying tool to consider is Apache Spark.
3. Apache Spark
Apache Spark is a cluster computing framework that runs on Hadoop. It was introduced by UC Berkeley and written in Scala programming language. Apache Spark comes bundled with Spark SQL, MLlib, GraphX and Spark Streaming which makes it a complete framework on Hadoop.
Image Source — How Spark executes a job
Pros
It is very fast. Spark SQL executes batch queries in the Spark framework 10–100 times faster than Hive with MapReduce.
Spark provides full compatibility with Hive data, queries, and user defined functions (UDF).
Spark provides APIs (Application Programming Interfaces) in various languages (Java, Scala, Python) which makes it possible for developers to write applications in those languages.
Apache Spark and Spark SQL boasts a larger open-source community support than Presto.
Cons
Apache Spark consumes lots of RAM which makes it expensive in terms of cost.
It is still maturing, and as such, it is not considered to be stable yet.
Next on the list is Presto. (And we’ve written quite extensively about this engine here and here).
4. Presto
Presto is another massively parallel processing (MPP), open source, SQL on Hadoop querying engine developed by Facebook to query databases on different sources with high speed irrespective of the volume, velocity, and variety of data they contain. It is currently being backed by Teradata and has been employed for use by AirBnB, Dropbox, Netflix, and Uber.
Pros
Presto supports Text, ORC, Parquet and RCFile file formats. This makes it a great query engine of choice without worrying about transforming your existing data into a new format.
It works well with Amazon S3 storage and queries data from any source at the scale of petabytes simultaneously and in seconds.
Great support from the open-source community will ensure Presto is around for much longer.
Enterprise support is provided by Teradata — a big data analytics and marketing applications company.
Cons
Being largely open source, it is not advisable to deploy Presto if you think you aren’t capable of supporting and debugging issues with Presto yourself except you decide to work with a vendor like Teradata.
It doesn’t have its own storage layer, so queries involving inserts or writing to the HDFS are not supported.
5. Google BigQuery
BigQuery is a cloud database solution provided by Google which executes queries on large amounts of data in seconds. Being a full database solution and not just another query engine means that it provides its own storage, a query engine, and also uses SQL-like commands to run queries against data stored in it.
Image Source — BigQuery web UI
Pros
I would refer to Google BigQuery as a plug and play solution for big data in that you don’t worry about server management here. You only import your data in its own storage and begin querying your data while it handles performance, memory allocation, and CPU optimization implicitly.
It has a strong backing from Google making it a very stable product.
BigQuery supports standard SQL syntax.
Moving data from other cloud storage solutions like Amazon S3 into GCS (Google Cloud Storage) is easy and hassle-free using the transfer manager.
Great support for enterprise users.
Cons
It could become very expensive if you query your data a lot — because Google also charges per data processed on a query.
Queries with lots of joins are not that fast.
You have to move your data into BigQuery’s storage system before you can query your data with it.
Last but not least, Amazon Redshift.
6. Amazon Redshift
Amazon Redshift is a fast and powerful, petabyte-scale, cloud-based data warehousing solution which forms part of the Amazon Web Services (AWS) cloud-computing platform. It was introduced by Amazon and is currently being managed by them as well.
Image Source — Amazon Redshift architecture
Pros
Redshift has great support for enterprise users. Like Google BigQuery, it is a cloud-based complete data warehousing solution.
It works well with Amazon S3 being a part of the Amazon cloud computing platform.
It is easily scalable.
It supports almost all standard SQL features.
Cons
It could also get very expensive considering the fact that Amazon bills you for storage space as well as server requirements (CPU, RAM etc.).
Redshift isn’t serverless like BigQuery, so you have to account for the amount of resources as well as the allocation of said resources that your data cluster would require.
In Summary
Various use cases will demand different solutions for analysing, storing and querying your organization’s data, so it will be difficult for me to proffer a solution for your personal needs.
If you decide to go the open source SQL-on Hadoop route, be prepared to have a team of competent data engineers that can manage and debug in-house. If you prefer the enterprise cloud-based solutions, be prepared to understand the cost implications as the bills tend to rise as your data increases in size.
Whichever route you choose, make sure you understand fully well what you are getting into and plan your tradeoffs with respect to the benefits each route provides.
With these comparisons under your belt, you (and Andrew) can prepare an incredibly detailed presentation for your boss. The higher-ups will be impressed with your expertise, and your colleagues will feel secure knowing you took the time to find them the best solution possible. When you make an informed choice, you enable your unique team to look their best on the job.
Shameless plug
Our product, Rakam, is a solution that allows you to build your own analytics service using Presto. It provides an analytics infrastructure on top of your AWS account. You install Rakam on your cloud provider (AWS, Google Cloud, Azure etc.) and use our API (api link) and SDKs (SDKs link), and we store your data in ORC format in your cloud provider and use Presto to run SQL queries on top of your data. We have custom modules that take care of schema evolution, high availability and automation tools for scaling your Presto cluster. Our system is near real-time; you will be able to query your data in a minute, design your workflow, and create reports and dashboard with Rakam within minutes.
|
6 SQL Data Warehouse Solutions For Big Data Analysts (With Their Pros And Cons)
| 258
|
6-sql-data-warehouse-solutions-for-big-data-analysts-with-their-pros-and-cons-18d34ca58fa6
|
2018-06-13
|
2018-06-13 02:50:07
|
https://medium.com/s/story/6-sql-data-warehouse-solutions-for-big-data-analysts-with-their-pros-and-cons-18d34ca58fa6
| false
| 1,532
|
Unified analytics platform that is fully customisable to all your data analysis needs.
| null |
rakam.io
| null |
rakam
|
emre@rakam.io
|
rakam-io
|
ANALYTICS,BIG DATA,DATA,DATA VISUALIZATION,STARTUP
|
iorakam
|
Big Data
|
big-data
|
Big Data
| 24,602
|
burak emre
| null |
421d9c9a6e92
|
bu7emba
| 139
| 141
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
1d0e56b58778
|
2018-08-16
|
2018-08-16 18:41:26
|
2018-08-17
|
2018-08-17 04:38:55
| 1
| false
|
en
|
2018-09-20
|
2018-09-20 21:22:07
| 0
|
18d539875b58
| 2.943396
| 2
| 2
| 0
|
As we proudly pronounce that we live in an “AI (Artificial Intelligence) — first age” and rely increasingly on AI infused products and…
| 5
|
Why does AI need decentralization?
As we proudly pronounce that we live in an “AI (Artificial Intelligence) — first age” and rely increasingly on AI infused products and services to provide us recommendations, feed us information, or influence our opinions at a personal and public level, should we ever pause and give it a minute to think about are the AI models that serve us everyday reliable and trustworthy enough for us to allowing it to play a critical role in our daily lives?
Digging a little bit deeper, we realize most AI models and algorithms are owned and controlled by centralized organizations, or tech giants. It may be scary to find out how much they know about you, or how little they actually know about you to tag labels on you in their gigantic database.
It sounds like a terrible idea to let some centralized organizations to collect and monetize our data, having the full control and processing these data in some black boxes. Our data will become increasingly valuable as things are slowly becoming more automated, not only that we are just giving them for free, but we also have no clue if these organizations have a complete picture of us in order to produce unbiased AI models that serve in all the products we use.
Biased and siloed data sets
Speaking of collecting data, as much as large companies try to extract valuable data from multiple sources, such as transaction logs, social media interactions, search behaviors, clicks from products, or sensor and machine logs, most of these data sets are not only messy, but also siloed from each other, and thus not ideal for training advanced AI models.
Expensive and wasted computing resources
Technically speaking, we question how effective it is for one centralized organization to own all AI models. Take an example of self-driving cars, if there are three cars that stopped almost the same time at one intersection, and they are trying to figure out who should go first. In today’s centralized world, these cars would have to communicate to some centralized server to give them recommendations on how to proceed, then it would not only be time consuming, but also a waste of computational resources. Ideally, these cars should have the ability to talk to each other to quickly figure out who has the right-of-way.
Higher threshold makes it hard for smaller players to enter the market and limits the innovation of AI
We’ve conducted researches with companies who would like to incorporate AI into their products and services. Yet the outlook is quite iffy with the increasing higher threshold for smaller players to enter the market or benefit from existing AI models. Just to name a few macro hurdles that these companies are facing:
Access to AI models is very limited
AI models shared from tech giants are not customized or optimized for individual needs
Lack of trust around data exfiltration and system authentication
Too expensive and time consuming to adopt AI models released by tech giants
After learning about various challenges for relatively smaller companies to adopt open sourced AI models from tech giants, we saw a huge need for the industry to have an open and trusted platform, where all participants can collaboratively work together to advance AI models, get fair share of rewards based on contribution, and at the same time able to mitigate privacy and security issues. The CEO of TuringNet, Dr. Kai Wen, who was a former Senior Engineer from Google, also a PhD from Stanford University, started tapping into how Blockchain technology can be applicable here to serve as an infrastructure for such platform that performs distributed computing for AI model training at scale in an open and trusted environment.
The idea behind TuringNet is aiming to solve industry-wise problems such as opaque data refinery or concerns around data exfiltration, siloed and fragmented data sets, and repetitive model training that wastes massive computational resources. With this decentralized and collaborative approach, participants on the platform are incentivized to contribute either computation power or valuable datasets to further advance AI models, developers can also get rewarded for making their AI algorithms accessible for commercial usage. We believe TuringNet will empower a variety of optimized AI models that can be easily adopted by hundreds of millions of use cases in the long term.
|
Why does AI need decentralization?
| 51
|
why-does-ai-need-decentralization-18d539875b58
|
2018-09-20
|
2018-09-20 21:22:07
|
https://medium.com/s/story/why-does-ai-need-decentralization-18d539875b58
| false
| 727
|
TuringNet is world's first open and trustable Artificial Intelligence (AI) platform. We aim to use AI to empower a truly self-governing ecosystem to crowdsource the development of AI based on blockchain, and solve the fundamental problem of trust in today’s AI industry.
| null |
turingnet
| null |
TuringNet
|
marketing@turingnet.com
|
turingnet
|
BLOCKCHAIN TECHNOLOGY,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING AI,BLOCKCHAIN STARTUP,BLOCKCHAIN DEVELOPMENT
|
turing_net
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Bella Wang
|
Business Strategist, Tech Futurist, Blockchain Enthusiast
|
8b269b8c681c
|
bellawang_98919
| 19
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-27
|
2017-09-27 08:44:18
|
2018-01-04
|
2018-01-04 15:07:54
| 3
| false
|
fr
|
2018-01-04
|
2018-01-04 17:42:44
| 3
|
18d543b86f28
| 2.187736
| 0
| 0
| 0
|
Rien qu’en France, 10 millions de patients sont suivis pour une maladie chronique. Cela peut être le diabète, la BPCO, l’insuffisance…
| 4
|
L‘assistant vocal au service du patient à domicile
Rien qu’en France, 10 millions de patients sont suivis pour une maladie chronique. Cela peut être le diabète, la BPCO, l’insuffisance respiratoire ou encore l’apnée du sommeil.
Les assistants vocaux remportent un franc succès auprès des consommateurs et sont LA grande tendance 2018. Pour autant, leur usage reste limité à des fonctions simples du quotidien et pour lesquels l’usage du smartphone n’est pas efficace : réglage d’une alarme, d’un compte à rebours, demander la météo… autant de micro-demandes qui simplifient la vie de tous les jours.
Bonjour s’interface avec différentes sources de données (Tensiomètre, CPAP, tracker d’activité, capteur de sommeil). Il est compatible avec le traitement de la donnée de Santé.
Il est pourtant un secteur où les assistants vocaux sont une solution à de nombreux problèmes : la santé et le suivi du patient à domicile.
Près de 10 millions de patients sont suivis pour une insuffisance respiratoire, près de 700.000 ont un traitement de l’apnée du sommeil, près de 5 millions ont un suivi de leur diabète… Mais la France perd près de 9M€/an en non suivi des traitements pour des raisons essentiellement lié à la motivation. Car oui, suivre un traitement quotidien durant toute sa vie nécessite un coaching de tous les jours pour comprendre son traitement (concept “d’engagement patient”) mais aussi être rappelé subtilement de ne pas oublier de le prendre (le patient n’est pas une machine).
Or il n’est raisonnablement pas possible de placer un aidant derrière chaque patient. Et la situation ne s’améliorera pas avec le vieillissement de la population dont 50% des personnes de plus de 50 ans vivent avec plus de 2 pathologies chroniques.
L’assistant vocal a alors tout son intérêt pour pouvoir assister la patient à domicile sans remplacer le praticien ou l’aide à domicile. C’est un nouveau maillon capable de consolider un profil patient au fil des échanges et d’alerter les aidants-praticiens au meilleur moment.
Le service vocale de la plateforme Bonjour est entièrement personnalisable pour offrir au patient une expérience d’interaction agréable sans lui donner l’impression d’être surveillé.
Après avoir remporté un franc succès en 2016 sur Kickstarter en récoltant près de 800k$, Holi lance en 2018 une offre Business d’assistant vocal basée sur sa plateforme Bonjour avec 4 grands avantages :
un hardware sympathique et prêt l’emploi sous les traits d’une horloge contemporaine intégrant un écran en plus de l’audio ;
la possibilité de personnaliser l’intégralité de la chaîne d’interaction : voix, champ de connaissances, réponses, écrans… ;
la possibilité de choisir son “trigger word”. Et oui, tout le monde n’a pas envie de faire cohabiter sa marque avec Google ou Amazon ;
la maitrise complète et la protection des données.
Vous êtes professionnels de santé, assureurs mutualistes, fabricants de produits médicaux ou praticiens hospitaliers, contactez Holi pour découvrir tout le potentiel d’un assistant vocal dans l’accompagnement du patient à domicile.
Auteur : Grégoire GERARD — Président de Holi
|
L‘assistant vocal au service du patient à domicile
| 0
|
l-assistant-vocal-au-service-du-patient-à-domicile-18d543b86f28
|
2018-06-04
|
2018-06-04 22:09:21
|
https://medium.com/s/story/l-assistant-vocal-au-service-du-patient-à-domicile-18d543b86f28
| false
| 434
| null | null | null | null | null | null | null | null | null |
Assistant
|
assistant
|
Assistant
| 192
|
Holi
|
Holi is the inventor of Bonjour, the alternative voice platform #iot #smarthome #innovation #services
|
f437aa6dbbd8
|
Holi_Official
| 376
| 516
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-14
|
2018-07-14 18:18:12
|
2018-07-15
|
2018-07-15 14:36:07
| 7
| false
|
en
|
2018-07-15
|
2018-07-15 14:36:07
| 11
|
18d573967998
| 3.212264
| 0
| 0
| 0
|
In this article I’m going to talk about some tools for maths the first of them are only to write expressions so you won’t be able to…
| 3
|
Software for mathematics
In this article I’m going to talk about some tools for maths the first of them are only to write expressions so you won’t be able to execute those expressions to create a graph or calculate the written expressions.
I have identified three different kind of software for mathematics:
Software to write expressions symbolically (HostMath and KaTeX) this applications allows you to write expressions and visualize them in a mathematical notation but you can do any calculus with this kind of expressions. The most common way and human readable is LaTeX. Another notation is MathML which is represented in XML language.
Data oriented software this kind of software is designed to analyze data numerically (Matlab, Octave and NumPy)
This kind of software is designed to analyze math expression in a symbolic manipulation. (Maple, Mathematica). These doesn’t mean that we can’t make numerical analysis with them but they tend to be more analytical tools.
HostMath
HostMath allows you to input math expressions in LaTex format and shows you the result in image. You can also write export the expression to an external url, Embebed code or MathMlL Code.
http://www.hostmath.com/
Pros
It’s free
You can start using it quickly
You don’t need to know the latex notation
Cons
I couldn’t write more than one line.
There’s not an option to export as image so you need to take screenshots.
KaTeX
KaTex is a free javascript library to render Tex expressions on the web. If you need to write expressions ej. in a blog this option is perfect to you. It has support to several framework such angular, react and ruby. You could easily create a wrapper for others.
https://github.com/Khan/KaTeX
Pros
It’s fast as proof you can take a look to this test
You can integrate in your CMS easily.
Cons
If you want to use on medium you won’t be able to do it.
You must learn the Tex notation or write the expression in other tool and when it’s ready pass it to code.
Matlab
MatLab allows matrix manipulation, plotting of functions, and data, implementation of algorithms and interfacing with other languages. For example can be perfect to implement machine learning algorithms.
https://www.mathworks.com/products/matlab.html
Pros
Powerful tool with a lot features and very flexible.
Data manipulation is easy
Cons
You must pay to use it.
Steep learning curve
Octave
It allows practically the same as Matlab but it’s a free software. The main similarities are:
Matrices as fundamental data type.
Built-in support for complex numbers
Powerful built-in math functions and extensive function libraries.
Extensibility in the form of user-defined functions.
The main differences are in the syntax you can see more details here.
https://www.gnu.org/software/octave/
Pros
It’s open source under GNU Licence.
Cons
Step learning curve.
Matlab syntax is more used by the community while syntax is very similar in both octave can have some expressions which won’t run in MatLab.
NumPy and Matplotlib
NumPy is a python library to support matrix manipulation and a large collection of high level mathematical functions. Matplotlib is to plotting data and math functions.
http://www.numpy.org/
https://matplotlib.org/
Mathematica
Mathematica is a powerful tool which allows solve analytically solve math expressions. It also has some valuable features as notebook which allows you to make calculus in a interactive document.
https://www.wolfram.com/mathematica/
Pros
A powerful if you want to make some analytically calculus.
A lot of features
Cons
Expensive licence
Step learning curve
Maple
As mathematica tends to be a software for mathematical formal analysis.
|
Software for mathematics
| 0
|
software-for-mathematics-18d573967998
|
2018-07-15
|
2018-07-15 14:36:08
|
https://medium.com/s/story/software-for-mathematics-18d573967998
| false
| 573
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Marino Esteban Perez Dorador
| null |
e52aed89aa9
|
esteban5003
| 1
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-13
|
2018-04-13 18:52:51
|
2018-04-13
|
2018-04-13 20:02:38
| 7
| false
|
en
|
2018-04-13
|
2018-04-13 20:02:38
| 2
|
18d57c360475
| 2.408491
| 7
| 1
| 1
|
In this article we will see how applying a different activation function in the final layer will produce results in a multi class…
| 4
|
Softmax vs Sigmoid on final layer of CNN
In this article we will see how applying a different activation function in the final layer will produce results in a multi class classification problem. For demonstration we will take pre-trained resenet50 model and remove the last 2 layers and apply a different function. The data set comprised of images from a big box store and can be downloaded from the following location and python notebooks are available here
The problem is to identify the items in the below image
Softmax
Applying the LogSoftmax function as final layer in CNN, the two last layers of the model is as below
During training and validation of the model the images that are fed to the model has only one class and trained. the test data set is a combination different classes and applying softmax to predict all possible classes. the following table indicates the no if images the model is trained on
After finding the optimal learning rate and train the network for 3 epochs with top/down transformation of each image and when prediction is applied on test images(augmentation is applied on test images)the result is as below for a small sample images of result.
Sigmoid
Apply the sigmoid function as the final activation function of CNN network which is as below
The train and validation data set is little bit different, it has additional images that has multiple classes in a given images. the following table indicates the images and their class the model is trained on
with a same learning rate as softmax and test time data augmentation the prediction is applied on the same test data set as softmax and following image shows how classes are identified with sigmoid function.
Sigmoid function tends to identify classes in an image compared to softmax in a multiclass image problem, but sigmoid tends to run into problem with confidence level for eg 3 image which has RxBar only the prediction is not really confident of RxBar. These kind of models can be employed in BigBox stores with self checkout lines and less items in the cart
Credits
Jermey Howard at Fast AI
|
Softmax vs Sigmoid on final layer of CNN
| 16
|
softmax-vs-sigmoid-on-final-layer-of-cnn-18d57c360475
|
2018-05-28
|
2018-05-28 03:38:17
|
https://medium.com/s/story/softmax-vs-sigmoid-on-final-layer-of-cnn-18d57c360475
| false
| 360
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
rakesh malireddy
| null |
e06d01f15e8b
|
rakesh.malireddy
| 3
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-03-23
|
2017-03-23 14:26:59
|
2017-09-02
|
2017-09-02 06:29:22
| 2
| false
|
en
|
2017-09-02
|
2017-09-02 06:29:22
| 2
|
18d748781a79
| 1.658805
| 0
| 0
| 0
|
This post is divided into following sections:
1. What is Machine Learning
2. Applications of Machine Learning
3. Simple example and related…
| 3
|
Introduction to Machine Learning — Linear Regression
This post is divided into following sections:
1. What is Machine Learning
2. Applications of Machine Learning
3. Simple example and related concepts
4. Demo
What is Machine Learning:
Machine learning is a mixture of mathematics and computer science, where learning algorithms mimics human learning process. There are mainly 3 kinds of machine learning.
1. Supervised Learning — Regression, Classification
2. Unsupervised Learning — Clustering
3. Reinforcement Learning — Game Players
Simple Example and Related Concepts:
Linear Regression: Fit a linear model for given data.
House Price Prediction:
lets say house price depends on the following factors Sqft, #bedrooms, #bathrooms, lawn area, parking space, geographical location etc..
we need to build a model that predicts a house price by taking these factors into consideration. It is supervised learning problem, so we need training data, we train our algorithm to learn that training set and using that knowledge our algorithm predicts price for new inputs. By training we mean, we fit a model that roughly represents trend given training set follows, as below..
source: https://statistics.laerd.com/spss-tutorials/linear-regression-using-spss-statistics.php
lets stick to simplest possible relation, linear..
our goal: fit a linear model that represents our data(training)
how to achieve our goal:
step 1. lets start with a random linear model.
step 2: measure how erroneous our linear model is (distance is a good idea).
step 3: minimize that error using some optimization techniques.
step 3 implementation:
lets draw a graph that represents relation between our model parameters and the error value.for simple cases, we get the following kind of graph
as said earlier, we start with a random line and we calculate the error value, we need to minimize the error, that means we need to change the model parameters that incurs less error, gradient descent algorithm helps us..
for more details about gradient descent, visit my previous post.. https://medium.com/@bgautam0707/all-about-gradient-descent-a8b915ac99fc
Finally we fit a good model that represents our training data, when we encounter new data point(s), we predict the outcome by trained model.
|
Introduction to Machine Learning — Linear Regression
| 0
|
introduction-to-machine-learning-linear-regression-18d748781a79
|
2017-09-02
|
2017-09-02 06:29:22
|
https://medium.com/s/story/introduction-to-machine-learning-linear-regression-18d748781a79
| false
| 338
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Gautam A
|
AI Enthusiast, Nature Lover
|
ee4b406862ef
|
bgautam0707
| 2
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
fe7f8c872655
|
2018-01-07
|
2018-01-07 19:42:55
|
2018-01-07
|
2018-01-07 20:03:41
| 1
| false
|
en
|
2018-01-07
|
2018-01-07 20:22:33
| 0
|
18d7b0559614
| 1.411321
| 0
| 0
| 0
|
In the story arc of the Tiffany Aching character, the author, Terry Pratchett, has her learn her role in the world. She protects her world…
| 4
|
Data, Know Thyself
In the story arc of the Tiffany Aching character, the author, Terry Pratchett, has her learn her role in the world. She protects her world from an invasion from another world and realizes that “the land needs someone to tell it who it is”.
That’s our role as we guide data into autonomous behavior. We must tell data who it is. We must provide a shape for it that includes enough metadata so that data carries with it a knowledge of who and what it is.
When humans learn to know their own selves, this happens in a self-reinforcing feedback loop across a number of dimensions. Edward Deci, in Why We Do What We Do, describes these as the growth of autonomy, of belonging, and of expectancy of success which leads to competence. I borrow from self-determination theory when I say that data needs to know where it belongs.
A sense of belonging is complex. Data needs to know its domain, its relationships, and its value. Can we really expect metadata to capture the ontology, the taxonomy and the significance? Yes, actually, we can. But this is not an exercise for the faint at heart.
Ontology is a description of the real world from which our data comes. Taxonomy captures the classification of real-world entities and their implicit and explicit relationships. And significance? Wow. Significance encodes why we wanted to create this data in the first place. Why does it matter?
Lastly, and this goes beyond the model that I borrow from Edward Deci, data needs to understand diversity. That is, it needs to understand both what it is and what it is not, and what else exists within its domain and outside of its domain. It needs to know its boundaries and how to appropriately cross those boundaries.
Autonomy. Belonging. Competence. Diversity.
Metadata begins to fill out the alphabet of data’s awareness of itself.
|
Data, Know Thyself
| 0
|
data-know-thyself-18d7b0559614
|
2018-01-07
|
2018-01-07 20:22:35
|
https://medium.com/s/story/data-know-thyself-18d7b0559614
| false
| 321
|
A conversation about complexity and emergent significance.
| null | null | null |
Data Autonomy
| null |
data-autonomy
|
DATA,AUTONOMY,ANALYTICS,DATA ENGINEERING,DATA SCIENCE
|
kevin642
|
Metadata
|
metadata
|
Metadata
| 514
|
Kevin Kautz
| null |
90d49ddd83ba
|
kevin642
| 11
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-04
|
2017-09-04 20:11:43
|
2017-09-04
|
2017-09-04 20:13:33
| 0
| false
|
en
|
2017-09-04
|
2017-09-04 20:13:33
| 0
|
18d7e7746771
| 0.860377
| 0
| 0
| 0
|
the announcement that China is introducing a gold backed currency is a big f*ing deal.
| 5
|
I want you to be prepared.
the announcement that China is introducing a gold backed currency is a big f*ing deal.
For all we know, this is like a dam breaking far upstream, and the inevitable wave of disruption is coming.
There’s also another disruption-wave: The birth of artificial general intelligence in the very near future. Maybe even this year, but more likely 2018. The birth of AI will gradually phase out human labor from economic output. I don’t how know fast it will go but it will likely happen in under a decade once it get’s going.
These two disruption waves has me very concerned — I think our society is headed toward a “phase change”.
Whether we reboot into an abundance society is yet to be seen. But the deep cynicism says that things will be rough due to nature of human selfishness.
I urge you to start thinking about the future. Work hard and sacrifice, put in extra hours at work. Don’t feel bad if you miss out on fun opportunities if you have spent the time instead, investing in yourself. Being a provider is the highest virtue of human stewardship.
Today, labor day, I am going to roll up my sleeves and get to work. A storm is coming. It is still far away and now is the best time to start preparing.
|
I want you to be prepared.
| 0
|
i-want-you-to-be-prepared-18d7e7746771
|
2018-05-09
|
2018-05-09 08:29:23
|
https://medium.com/s/story/i-want-you-to-be-prepared-18d7e7746771
| false
| 228
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Zachary Vorhies
|
Technophile working in Silicon Valley and mass media.
|
dd1ea184215b
|
zacharyvorhies
| 104
| 107
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-15
|
2018-09-15 13:54:04
|
2018-09-15
|
2018-09-15 13:57:15
| 1
| false
|
en
|
2018-09-15
|
2018-09-15 13:57:15
| 2
|
18d83de8d630
| 0.85283
| 0
| 0
| 0
|
Hey Artificial Intelligence Which Plays a Big role in Today’s Scenario and Future too..
| 5
|
What is the best way to learn Artificial Intelligence for a beginner?
Hey Artificial Intelligence Which Plays a Big role in Today’s Scenario and Future too..
Many Companies in started Implementing AI to all Process..Even Grocery Store is Implemented AI in Kerala..its a Massive thing using Sensor Fusion.
Ai Training in Chennai
Its Your Better time to Move on Artificial Intelligence Technology…and its also a One of the Interested Technology in Today’s Life by Using Gadgets, Robotics,
There is a Development of thinking in AI.. But search a Good Institutes For to
Learn Artificial Intelligence because its not like a other Courses.
As My Knowledge in Chennai Foursteps Training solutions Which Provides Real Time Hands on Training For AI. And its a one of the Best AI Training Institute in Chennai..If you Like to Know About them Kindly search and get a Demo From them that will help you to know about them.
They are India and Asia Book of Record Holders and they Providing Good Quality of Training and they trainers are all comes from Corporate.
|
What is the best way to learn Artificial Intelligence for a beginner?
| 0
|
what-is-the-best-way-to-learn-artificial-intelligence-for-a-beginner-18d83de8d630
|
2018-09-15
|
2018-09-15 13:57:15
|
https://medium.com/s/story/what-is-the-best-way-to-learn-artificial-intelligence-for-a-beginner-18d83de8d630
| false
| 173
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Chitra
| null |
a50d82b7c967
|
chitrasugumar
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-16
|
2018-07-16 12:36:30
|
2018-07-16
|
2018-07-16 12:38:21
| 1
| false
|
en
|
2018-07-16
|
2018-07-16 12:38:21
| 1
|
18d8dcaf15b9
| 1.09434
| 0
| 0
| 0
|
There are many companies offering artificial intelligence as-a-service to other companies looking to develop IA-based models or business…
| 4
|
Do any companies offer artificial intelligence as-a-service (AIaaS)?
There are many companies offering artificial intelligence as-a-service to other companies looking to develop IA-based models or business applications. Apart from Amazon, Google, Microsoft, Apple and IBM, there are many companies offering the software or platform to work on AI-supported machines and technologies to implement in different industries.
These leading tech giants use the best tools with high-quality data sets used with right algorithm to make the application work in responsive manner. Users just need to implement these AI software and they can feed large amount of quality data to make the machine learning models work without showing any error giving the most satisfying response during the use.
Using the artificial intelligence training algorithms or tools is not important unless you have quality training data-sets to develop a feasible model that can work it and give the accurate results. Companies looking to work aggressively into AI fields can use the tools or platforms of these well-known companies into their business models to make it usable for end-users.
Whatsoever the AI projects are needs a quality training data sets and Cogito Tech like companies are offering the machine learning data sets to work on such models and make the workable solution for users while improving the performance of AI-enabled machines and applications integrated into the software or computer systems.
|
Do any companies offer artificial intelligence as-a-service (AIaaS)?
| 0
|
do-any-companies-offer-artificial-intelligence-as-a-service-aiaas-18d8dcaf15b9
|
2018-07-16
|
2018-07-16 12:38:21
|
https://medium.com/s/story/do-any-companies-offer-artificial-intelligence-as-a-service-aiaas-18d8dcaf15b9
| false
| 237
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Cogito Tech LLC
|
Cogito offers a complete data collection and processing service for training data for AI and Machine learning based services. Visit: https://goo.gl/MQZL8n
|
76e1553054dc
|
cogitotech
| 104
| 296
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
3a6dbc8a33d8
|
2018-05-14
|
2018-05-14 10:16:06
|
2018-05-14
|
2018-05-14 10:17:19
| 1
| false
|
en
|
2018-05-14
|
2018-05-14 10:17:19
| 1
|
18d8f7b290bc
| 3.664151
| 0
| 0
| 0
|
“Measurement is the first step that leads to control and, eventually, to improvement.”
| 5
|
FIVE TIPS TO CHOOSING THE RIGHT MARKETING EFFECTIVENESS PARTNER
“Measurement is the first step that leads to control and, eventually, to improvement.”
H. James Harrington
These words certainly ring true for all marketers around the globe that find themselves facing the challenge of measuring the impact of their marketing activity and disentangling the effect of each component in view of optimisation. Today, more than ever, measurement and accountability are becoming fundamental components of the whole marketing strategy.
Advertisers generally turn to Marketing Mix Modelling methods (MMM) or Marketing Effectiveness to help them understand and quantify the true impact of their marketing efforts be it online or offline. A Marketing Mix Modelling project tends to be “heavy” in nature as it requires a commitment from client-side teams to define the scope of the project and deliver the data necessary to build the models. The process takes months rather than weeks, and the project may cost a few hundred thousand dollars, depending on the scope.
The era of a few companies offering MMM is long gone. Today, the analytics market counts a number of companies that can deliver MMM; but what criteria should one consider in choosing the right partner and ensure the best fit?
In this article we provide 5 tips that we believe could help marketers choose the right MMM partner:
TIP 1: CONSIDER THE IMPORTANCE OF OBJECTIVITY
In selecting your marketing effectiveness partner, you need to ensure objectivity. Remember that the primary goal of an MMM project is to identify the principal performance drivers and to measure their impact. This leads to an optimisation exercise where the ultimate aim is to reduce spend where it is less effective and increase it where the performance is greater.
The marketing effectiveness company should, therefore, sit at an equal distance from all these performance drivers and provide insight and recommendations based on an objective analysis derived from the modelling results.
TIP 2: CHECK THE THOROUGHNESS OF THEIR DATA TRANSFORMATION PROCESS
Marketing is an art and science, and so is the Marketing Mix Modelling. Constructing statistically viable models is not enough, albeit necessary, as those also need to be commercially viable.
Regardless of the MMM company chosen, the set of raw data requested at the kickstart of the project is pretty much the same and revolves around four main axes, namely:
The KPIs measured
The Marketing Activities
Media Investments
External Factors
However, what differentiates one MMM provider from the other is the experience they have in transforming the raw data they have at hand to model the real consumer behaviour that is becoming more and more complex. The thoroughness of this phase will depend on the experience of the agency in the client’s sector, the creativity of the agency and also the experience across other industries as there is a lot of transferable knowledge from one industry to the other that could benefit the project under study.
TIP 3: EVALUATE THE SOPHISTICATION OF THEIR MODELLING TECHNIQUES
The modelling techniques used need to be up to speed with the complex media landscape and the changing consumer’s path to purchase. The world is not linear anymore so why would the modelling techniques that match the old world?
Uncovering the complex relationships between the different touch points requires more sophisticated techniques like loglinear, nested modelling, pooled regression to name a few. So, make sure your MMM partner has the necessary mathematical skills and is comfortable using these techniques.
TIP 4: ASSESS THE MATURITY OF THE TECHNOLOGY THEY USE
Econometrics as a mathematical approach to measure marketing effectiveness is now a few decades old. This is to say that the MMM process is now widely known and has reached some sort of maturity. However, what makes the real difference between agencies is their ability to use a technology that allows them to go smoothly through that process and to gain in efficiency and accuracy.
Using robust tools that are built for purpose conceptually, mathematically and computationally is a must in today’s MMM landscape. Proprietary software developed on the go or off the shelf using programming languages like Python or R is not always an optimal solution as it is often time consuming, does not deliver to the speed and efficiency that are required today and does not ensure the client involvement.
TIP 5: CHECK THEIR READINESS FOR DEPLOYMENT & KNOWLEDGE TRANSFER
One of the most critical steps of the CRISP-DM Methodology (Cross Industry Standard Process for Data Mining) is the deployment part. The MMM process must be reliable and repeatable by people with little data mining/statistical background. Achieving this will prove the maturity of the process and reduce the dependency on the agency. Marketers are accountable themselves and as such tend to express more and more the need to be more involved and exposed to the whole MMM process so they can better understand the results and therefore disseminate them more effectively and comfortably internally.
If the agency is equipped with the right level of technology (Tip 4), then the above becomes a natural step of the process. Optimisation and forecasting tools could also be licensed to the client, so they own themselves the insight part, the juiciest part of any MMM project, and become part of the whole process.
This could potentially open the doors to other forms of partnerships between the client and the agency where the latter could transfer knowledge to the client so they can internalize part of the modelling and start building their own in-house modelling capabilities.
|
FIVE TIPS TO CHOOSING THE RIGHT MARKETING EFFECTIVENESS PARTNER
| 0
|
five-tips-to-choosing-the-right-marketing-effectiveness-partner-18d8f7b290bc
|
2018-05-14
|
2018-05-14 10:17:23
|
https://medium.com/s/story/five-tips-to-choosing-the-right-marketing-effectiveness-partner-18d8f7b290bc
| false
| 918
|
MASS Analytics is a technology company that specialises in developing analytical software solution applied to Marketing.
| null |
massanalytics1
| null |
MASS Analytics Blogs
|
info@mass-analytics.com
|
mass-analytics-blogs
|
MARKETING,SOFTWARE,TECHNOLOGY
|
MASS_Analytics
|
Marketing
|
marketing
|
Marketing
| 170,910
|
Mass Analytics
| null |
61bc933de572
|
mass.tunisia
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
(‘previous mean’ * ‘(count -1)’) + ‘new value’) / ‘count’
| 1
|
721b17443fd5
|
2018-07-26
|
2018-07-26 03:26:15
|
2018-07-26
|
2018-07-26 03:38:25
| 2
| false
|
en
|
2018-08-03
|
2018-08-03 21:29:18
| 6
|
18dc5d154dd3
| 4.52673
| 8
| 0
| 0
|
The Real Pros Do More With Less
| 5
|
This is How You Can Build a Better Data-Driven Organization: by Reducing Data
The Real Pros Do More With Less
“A dreamy shot of a man walking along a white sand dune with a mountain range on the horizon” by Matt Le on Unsplash
There’s this thing we do with social media that gives us small hits of dopamine when it works well. You know the feeling — we all chase it these days — you post and like so you can feel good. We sometimes make the same mistakes with data, building tools that give us that same flush of excitement, but don’t lead us anywhere useful.
A data-driven organization is an aligned organization, an informed one, one that values awareness and functionality and is willing to pay the price to have these things. A data-obsessed organization can be one that never learned to get off the dopamine high, even when things are going well. Dopamine’s great, but so is getting work done, which isn’t always as fun. I want to be data-empowered, not data-obsessed.
Take the advantage, take the wins data gives you, but don’t do it in a way that’s going to distract you from the real work.
How do you do this?
Photo by guy stevens on Unsplash
Start with a picture of normal. If normal is happening, even if normal is exciting, leave it alone. Let normal look like a dashboard on the wall that gets updated daily. That was not a flippant comment — daily is good enough for normal. If you have to be reassured what normal looks like this very second, then you’ve either got dopamine addiction problems or more-difficult ones. See if you can reduce your dependence on trivial things.
Normal isn’t quite one thing, and it changes as your organization changes. Pick things that show you what you need to know. Figure out if sales and marketing are working as expected. Figure out if your system is running. Figure out if sales and profits are working. Achieve things with data by your side. See if you can make every metric lead to action if it’s high or low.
Each new day has a different shape to it. Just roll with it.
— Ben Zobrist
Once you have an idea of normal, become conversational in it. Everyone needs to speak about normal — ideally with the same terms. Get the conversation going and see if people use the same name for the same thing. Put the name of a metric prominently on your reports and visualizations. Pick simple and clear terms. Be a stickler if meaning drifts. Be polite, but it’s your job to create clarity and confidence. You can’t do that if people feel confused when they want to talk about the company. As a kinder Mr. T might have said, “pity the poor fools,” by encouraging them to succeed with your data.
If it’s not normal, it’s abnormal. Funny how that works.
The thing that doesn’t fit is the thing that’s the most interesting: the part that doesn’t go according to what you expected.
— Richard Feynman
You want to know about abnormal quickly. If you turn down your data updates, you have to stick some monitors in the mix that can raise the alarm.
Think about this a little when you build your information systems. Say everyone gets a fresh batch of data on their wall in the morning. The system that built it can’t entirely be turned off during the day. The data has to still trickle in somewhere, just not to a distracting place.
That means you’ve got to use transform functions that can work on single records. That means you’ve got to work with streams, even slow-moving ones. That means you choose online versions of your core metrics when you can. (An online algorithm is a fancy way of saying you can produce an answer after every record instead of after you’ve seen all of the records. For example, this is how you calculate mean with an online algorithm:
It turns out this isn’t vapid advice; this is difficult and important. You’re making people smarter by building better habits with your data while preserving the right to use up-to-the-minute techniques when you need to. You might find yourself working harder to design the system or working harder to calculate metrics. That’s the job too.
What if the world’s not as simple as I’m making it sound? It never is. Data work can be hard work. It’s hard because it takes leadership to decide to engage or not.
Once, I was called out of a movie theater by the CEO of a company to address a problem. “The system is down, we’re losing millions of dollars, and it’s your fault.” I left my girlfriend in the show, drove to the nearest laptop, and got to work. The system was down, but the CEO’s son had caused it, and someone had fixed it by the time I got to a laptop.
You’ve got to have the right calls to action, even if they don’t always turn out to be emergencies. Create too many, and you erode confidence and commitment. Create too few, and you’re letting your organization fall apart. There is no rule of thumb here, so lean in and lead. I don’t know if the CEO was wrong that day. It was inconvenient, but it might have been the right call. You’ll make decisions, and you’ll learn from them.
Your job as a data person is to create confidence. That is more important than putting people on the right information diet. Use reliable data. Explain your conclusions clearly. Make decisions. Choose data visualizations for their clarity and simplicity. Choose metrics for their clarity and power to incite action. Choose tools you can learn and understand.
Once you’ve built confidence, once you know what normal is, then you can simplify and slow the flow.
As long as you keep going, you’ll keep getting better. And as you get better, you gain more confidence. That alone is success
— Tamara Taylor
So that’s the gist. Build great insights, but share them in a stingy way. It’s not the popular thing to do, and it’s not always the possible course of action — at first. Keep in mind that driving with data means driving intelligent behavior, building strategic advantages that don’t come as cheaply as a data pipeline, model, and dashboard. Sometimes we have to put people in rehab so they can detox and live up to their potential.
More About Data Leadership
Build a Confident Practice
A Good Data Model Starts Humbly
Build an Intelligent Platform
|
This is How You Can Build a Better Data-Driven Organization: by Reducing Data
| 42
|
this-is-how-you-can-build-a-better-data-driven-organization-by-reducing-data-18dc5d154dd3
|
2018-08-03
|
2018-08-03 21:29:18
|
https://medium.com/s/story/this-is-how-you-can-build-a-better-data-driven-organization-by-reducing-data-18dc5d154dd3
| false
| 1,098
|
Coinmonks is a technology focused publication embracing all technologies which have powers to shape our future. Education is our core value. Learn, Build and thrive.
| null |
coinmonks
| null |
Coinmonks
|
gaurav@coinmonks.com
|
coinmonks
|
BITCOIN,TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,PROGRAMMING
|
coinmonks
|
Data Science
|
data-science
|
Data Science
| 33,617
|
David Richards
|
Data, software, hiking
|
fa4899d9523e
|
igooshi
| 217
| 530
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-05
|
2018-01-05 15:02:48
|
2018-01-05
|
2018-01-05 15:04:22
| 0
| false
|
en
|
2018-01-05
|
2018-01-05 15:04:22
| 2
|
18ddc926a2ef
| 1.981132
| 0
| 0
| 0
|
Understanding Chatbots is a crucial for businesses to keep in the game. If the concept of Chatbots is completely alien to you, our simple…
| 5
|
What Are Chatbots?
Understanding Chatbots is a crucial for businesses to keep in the game. If the concept of Chatbots is completely alien to you, our simple beginners guide is the perfect place to start.
What are Chatbots?
The simplest way to describe a Chatbot is a virtual assistant, programmed to mimic conversations. Chatbots used to be only capable of simple conversations, but thanks to artificial intelligence and machine learning capabilities, you could be conversing with a Chatbot without even realising it. They can understand your questions based on guidelines in their programming and they will guide you to your answer.
They’ve been around for some time, their growing popularity with leading companies like Facebook and Spotify means more and more companies are jumping on the Chatbot bandwagon.
How do Chatbots work?
Chatbots can do everything! They’ve changed the way businesses provide online assistance to customers, you can seek information when it’s convenient to you instead of waiting to speak to a customer service representative. Businesses can provide help a lot quicker, whether that be ordering a takeaway, booking a flight or enquiring about a service.
There is an endless list of ways introducing Chatbots to your business can benefit you, here’s a few.
1. Access to support 24/7 — nobody likes the irritating music played whilst you wait for what seems like an eternity to speak to an agent. There’s not need for your clients to wait when they don’t have to — bots can provide constant contact day or night. They don’t get tired, they don’t need breaks and they can operate every day of the year.
2. Help for everybody — our multi-tasking skills are nowhere near as advanced as bots, a bot can chat to hundreds, maybe even thousands, of people at one time. You can be confident every single one of your customers is having their questions answered.
3. Time effectiveness — as a business owner, one of your top priorities is using your employees time effectively. For example, your business could use a Chatbot for customer support, then any questions they cannot answer can be directed to an agent. Your customer service team will have extra time to support customers who need the assistance of a human.
Can Chatbots replace humans?
Bots are becoming smarter, friendlier and more ‘human-like’ all the time. But does a bot really communicate the same way humans can? Can their responses be too generic? Empathy and understanding are crucial in conversations, even more so when a customer is unhappy with a service / product.
Can Chatbots understand emotions?
We mustn’t underestimate the capabilities of Chatbots. Yes, Chatbots are an incredible artificial intelligence advancement, but, for the time being, they cannot think for themselves.
Chatbots still have teething problems to improve, but, they’re meeting the needs of demanding customers who want answers immediately. Chatbots are the future for customer service, who knows how advanced they will become…
Getting started with Chatbots
If you would like to find out how you can implement Chatbots into your business, we are here to assist! Please contact Different Gravy Digital on either 0161 706 0004 or mg@differentgravydigital.co.uk
|
What Are Chatbots?
| 0
|
what-are-chatbots-18ddc926a2ef
|
2018-01-08
|
2018-01-08 09:34:39
|
https://medium.com/s/story/what-are-chatbots-18ddc926a2ef
| false
| 525
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Different Gravy Digital
|
Full Service Digital Marketing Agency in Manchester
|
70e7ce20fb10
|
marieharwood
| 4
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
e26b6c1a2407
|
2018-06-29
|
2018-06-29 15:41:46
|
2018-06-14
|
2018-06-14 21:00:54
| 1
| false
|
en
|
2018-06-29
|
2018-06-29 15:45:31
| 1
|
18de97c14ef6
| 2.85283
| 0
| 0
| 0
|
The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose…
| 4
|
The problem with ‘explainable AI’
The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose where and how they got the data they used to fuel their AI systems’ decisions. Consumers should own their data and should be privy to the myriad ways that businesses use and sell such information, which is often done without clear and conscious consumer consent. Because data is the foundation for all AI, it is valid to want to know where the data comes from and how it might explain biases and counterintuitive decisions that AI systems make.
On the algorithmic side, grandstanding by IBM and other tech giants around the idea of “explainable AI” is nothing but virtue signaling that has no basis in reality. I am not aware, for instance, of any place where IBM has laid bare the inner workings of Watson — how do those algorithms work? Why do they make the recommendations/predictions they do?
There are two issues with the idea of explainable AI. One is a definition: What do we mean by explainability? What do we want to know? The algorithms or statistical models used? How learning has changed parameters throughout time? What a model looked like for a certain prediction? A cause-consequence relationship with human-intelligible concepts?
Each of these entail different levels of complexity. Some of them are pretty easy — someone had to design the algorithms and data models so they know what they used and why. What these models are, is also pretty transparent. In fact, one of the refreshing facets of the current AI wave is that most of the advancements are made in peer-reviewed papers — open and available to everyone.
What these models mean, however, is a different story. How these models change and how they work for a specific prediction can be checked, but what they mean is unintelligible for most of us. It would be like buying an iPad that had a label on the back explaining how a microprocessor and touchscreen works — good luck! And then, adding the layer of addressing human-intelligible causal relationships, well that’s a whole different problem.
Part of the advantage of some of the current approaches (most notably deep learning), is that the model identifies (some) relevant variables that are better than the ones we can define, so part of the reason why their performance is better relates to that very complexity that is hard to explain because the system identifies variables and relationships that humans have not identified or articulated. If we could, we would program it and call it software.
The second overarching factor when considering explainable AI is assessing the trade-offs of “true explainable and transparent AI.” Currently there is a trade-off in some tasks between performance and explainability, in addition to business ramifications. If all the inner workings of an AI-powered platform were publicly available, then intellectual property as a differentiator is gone.
Imagine if a startup created a proprietary AI system, for instance, and was compelled to explain exactly how it worked, to the point of laying it all out — it would be akin to asking that a company disclose its source code. If the IP had any value, the company would be finished soon after it hit “send.” That’s why, generally, a push for those requirements favor incumbents that have big budgets and dominance in the market and would stifle innovation in the startup ecosystem.
Please don’t misread this to mean that I’m in favor of “black box” AI. Companies should be transparent about their data and offer an explanation about their AI systems to those who are interested, but we need to think about the societal implications of what that is, both in terms of what we can do and what business environment we create. I am all for open source, and transparency, and see AI as a transformative technology with a positive impact. By putting such a premium on transparency, we are setting a very high burden for what amounts to an infant but high-potential industry.
By Rudina Seseri, Founder and Managing Partner
Originally published at techcrunch.com on June 14, 2018.
|
The problem with ‘explainable AI’
| 0
|
the-problem-with-explainable-ai-18de97c14ef6
|
2018-09-27
|
2018-09-27 20:48:54
|
https://medium.com/s/story/the-problem-with-explainable-ai-18de97c14ef6
| false
| 703
|
Glasswing Ventures is an early stage VC firm investing in AI and frontier tech startups that enable the rise of the intelligent enterprise // www.Glasswing.vc // Where Ideas Take Flight
| null | null | null |
Glasswing Ventures
| null |
glasswingvc
|
GLASSWING,AI,VC,STARTUP,BOSTON
|
GlasswingVC
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Glasswing Ventures
|
Glasswing Ventures is an early stage VC firm investing in AI and frontier tech startups that enable the rise of the intelligent enterprise // www.Glasswing.vc
|
9593d2c91632
|
glasswingventures
| 15
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-29
|
2017-11-29 13:16:07
|
2017-11-29
|
2017-11-29 17:27:17
| 1
| false
|
en
|
2017-11-29
|
2017-11-29 17:55:22
| 0
|
18e1b85e45f1
| 3.550943
| 2
| 0
| 0
|
To me, software engineering is a passion and a feeling of self-fulfillment which, when coupled with fruitful projects giving way to…
| 4
|
Data Science, the old school way
To me, software engineering is a passion and a feeling of self-fulfillment which, when coupled with fruitful projects giving way to advancements in society and humanity as a whole, flares a deeply rooted passion to learn learn learn! This is why, when I first read up on Data Science, I knew that I had honed in on my next target to devour and assimilate into my neural network (yes, the biological one!)
But as I started reading through the hundreds of how-to’s, blogs, MOOCS, video tutorials, online books, PDFs and the rest, I only found more frustration with missing pieces to a larger and more obscure logical puzzle that was defiant and unyielding. A feeling of unrest and concern surfaced my mind as I delved more deeply into the subject getting half-baked and limited understanding allowing me to build only by following “hand-holding” step by step instructions. This was not engineering! Trial after trial the matter finally became clear to me — The topic of Data Science strictly refuses to truly enter a mind until it is imbued with proper mathematical/statistical foundations!
As my reading on Data Science progressed, I would read or hear terms that I knew were key to understanding how Neural Networks worked, but it was just half baked! I started seeing this trend of people seizing the opportunity to piggy back off the Data Science craze and authoring reads and making full fledged online “Data Camps” that in the end will always avoid the real mechanics behind the subject. They breeze through the parts where mathematical foundations are necessary and move on as if it was a side matter and not required to build reliable models. Take this snippet from a free online book for Neural Networks as an example,
“On a related note, the mathematical requirements to read the book are modest. There is some mathematics in most chapters, but it’s usually just elementary algebra and plots of functions, which I expect most readers will be okay with. I occasionally use more advanced mathematics, but have structured the material so you can follow even if some mathematical details elude you.”
Reading through only the second half of Chapter 1: “ We’ll also define the gradient of C to be the vector of partial derivatives”, “We denote the gradient vector by ∇C” and “Then Equation (9) tells us that ΔC≈−η∇C⋅∇C=−η∥∇C∥2.”, all prove otherwise and goes against being “elementary.”
This is why I found myself constantly referring back to tutorials that explained the topic of gradient descent, limits, derivatives, vectors, statistics then go back to continue my reading. But this style of learning left me with a feeling of despair resonating from my logical side whispering, “It’s not enough, I still want to know WHY and HOW!”
Then, one day as I was cleaning up my closet, my eyes suddenly fell on my old college Single Variable Calculus book by James Stewart. Followed by a sudden dramatic stare and a sense of joy from a reunion with an old friend, an innate feeling told me that an epiphany is about to occur! I opened the book - chapter 1 - and lo and behold! “This…is it. It’s…it’s all there! IT’S ALL THERE!” Really, all the missing pieces and loose ends were so finely and reliably described that it left no room for confusion. Yes, I do agree that only after having gone through all the Data Science tutorials that Calculus made so much more sense - but it was still a critical piece to the big Data Science puzzle.
The point I’m driving here is that I feel that there are many fields in I.T. that are slowly being driven by marketing objectives and quick responses to technology hypes rather than the classic RT*M way of deeply rooted “old school” developers. Head hunters are scouring to find anyone that can “fit the description” screening people through checkoff lists and “how many years of” rather than searching for minds with chemical X which in turn forces future Data Scientist to prepare to become real engineers. The pressure coming down from this flawed style of talent selection is slowly giving rise to a batch of cooky cutter engineers that sparks worry given the way the entire field of I.T. ties in together. Security, reliability, scalability and general advancements in I.T. are all negatively effected by allowing the field to grow without well founded roots. Take how Android programming tutorials started as a classic example. The push and drive behind most of the videos, MOOCs, and tutorials was to quickly jump on the demand and make things seem as easy as possible to drive more readers and not towards making strong android engineers. This finally resulted in poor android engineers that designed poorly designed apps making way for many security and stability issues. I believe this is the same trend that is taking place in the Data Science field.
In summary, if you really want to understand Data Science in a way that will allow you to have a positive impact and pave the way for your future in this ever growing field then I strongly suggest studying the basics really well - along with all the MOOCs, videos and tutorials that are still key in understand Data Science and Machine Learning.
|
Data Science, the old school way
| 51
|
data-science-the-old-school-way-18e1b85e45f1
|
2017-12-01
|
2017-12-01 00:42:01
|
https://medium.com/s/story/data-science-the-old-school-way-18e1b85e45f1
| false
| 888
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Amir Elhadidy
|
Software developer experienced in developing and Administering a wide variety of systems and services
|
b0104a7e3b2d
|
atfdynamo5
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
93da5952636b
|
2018-04-03
|
2018-04-03 12:22:17
|
2018-04-03
|
2018-04-03 12:22:18
| 0
| false
|
en
|
2018-04-03
|
2018-04-03 12:22:18
| 1
|
18e22cd66c2b
| 1.366038
| 2
| 0
| 0
| null | 5
|
Blockchain will make AI smarter by feeding it better data
We’ve heard — probably too much — about how cryptocurrencies like Bitcoin shift financial power away from governments and big banks to individuals. Blockchain technology is also democratizing artificial intelligence (AI). Businesses of any size will soon be able to offer the sort of personalized recommendations that are currently the province of giant retailers like Amazon.
Imagine walking into a neighborhood shop for the first time and the sales associate immediately shows you the products that match your tastes, in your size, and that you don’t already own.
That’s the sort of intuitive experience that AI promises. The current challenge to smaller businesses isn’t the cost of AI systems — they’re increasingly more affordable and accessible. The barrier is gaining access to enough high quality data about customers to adequately power those systems. Few retailers can recognize their customers across multiple channels and devices, and they often rely on third-party, behavioral data that doesn’t give them a complete understanding of what products customers want to buy. Moreover, most retailers handle only a small slice of each of their customer’s purchases, hardly enough to make these AI systems work well.
That’s why blockchain technology is so transformative. Its key innovation is to create a database that is open and decentralized, yet with strict controls over privacy. Shoppers could authorize all the stores they patronize to contribute data about their purchases to a blockchain ledger that protects the privacy of both consumers and retailers.
A neighborhood shop, say a shoe boutique with a few stores and an e-commerce site, is never going to have the volume to offer the biggest selection at the lowest prices, but it can win over a group of discerning customers that value its stylish merchandise sourced from smaller, high-quality manufacturers.
Even today, it’s hard for boutique retailers to stay competitive when AI systems for companies like Amazon pounce on customers with recommendations like, “Now that you’ve bought that black cocktail dress, don’t you need some red pumps?” (There are also lot more subtle uses for this personalization technology that are harder to spot.
Posted on 7wData.be.
|
Blockchain will make AI smarter by feeding it better data
| 4
|
blockchain-will-make-ai-smarter-by-feeding-it-better-data-18e22cd66c2b
|
2018-04-03
|
2018-04-03 21:20:24
|
https://medium.com/s/story/blockchain-will-make-ai-smarter-by-feeding-it-better-data-18e22cd66c2b
| false
| 362
|
Insights in the People, Process, Technology and Visualisations of the Data Landscape
| null |
7wdata
| null |
The Data Intelligence Connection
|
yves@7wdata.be
|
the-data-intelligence-connection
|
DATA,INNOVATION,AGILITY
|
7wdata
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Yves Mulkers
|
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
|
1335786e6357
|
YvesMulkers
| 17,594
| 8,294
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-05
|
2018-09-05 13:43:48
|
2018-09-10
|
2018-09-10 06:51:08
| 5
| false
|
en
|
2018-09-11
|
2018-09-11 07:50:28
| 9
|
18e2d0f56651
| 5.237107
| 1
| 0
| 0
|
In this story, Inception-v2 [1] by Google is reviewed. This approach introduces a very essential deep learning technique called Batch…
| 5
|
Review: Batch Normalization (Inception-v2 / BN-Inception) —The 2nd to Surpass Human-Level Performance in ILSVRC 2015 (Image Classification)
In this story, Inception-v2 [1] by Google is reviewed. This approach introduces a very essential deep learning technique called Batch Normalization (BN). BN is used for normalizing the value distribution before going into the next layer. With BN, higher accuracy and faster training speed can be achieved.
Intense ILSVRC Competition in 2015
The ILSVRC (ImageNet Large Scale Visual Recognition Competition) competition in 2015 has become intense!
On 6 Feb 2015, Microsoft has proposed PReLU-Net [2] which has 4.94% error rate which surpasses the human error rate of 5.1%.
Five days later, on 11 Feb 2015, Google proposed BN-Inception / Inception-v2 [1] in arXiv (NOT submission to ILSVRC) which has 4.8% error rate.
Though BN did not take part in the ILSVRC competition, BN has a very good concept which has been used for many networks afterwards. And it is a 2015 ICML paper with over 6000 citations at the time I was writing this story. This is a must read item in deep learning. (SH Tsang @ Medium)
ImageNet, is a dataset of over 15 millions labeled high-resolution images with around 22,000 categories. ILSVRC uses a subset of ImageNet of around 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images and 100,000 testing images.
About The Inception Versions
There are 4 versions. The first GoogLeNet must be the Inception-v1 [3], but there are numerous typos in Inception-v3 [4] which lead to wrong descriptions about Inception versions. Consequently, there are many reviews in the internet mixing up between v2 and v3. Some of the reviews even think that v2 and v3 are the same with only some minor different settings.
Nevertheless, in Inception-v4 [5], Google has a much more clear description about the version issue:
“The Inception deep convolutional architecture was introduced as GoogLeNet in (Szegedy et al. 2015a), here named Inception-v1. Later the Inception architecture was refined in various ways, first by the introduction of batch normalization (Ioffe and Szegedy 2015) (Inception-v2). Later by additional factorization ideas in the third iteration (Szegedy et al. 2015b) which will be referred to as Inception-v3 in this report.”
Thus, when we talk about Batch Normalization (BN), we are talking about Inception-v2 or BN-Inception.
What are covered
Why we need Batch Normalization (BN)?
Batch Normalization (BN)
Ablation Study
Comparison with the State-of-the-art Approaches
1. Why we need Batch Normalization (BN)?
As we should know, the input X is multiplied by weight W and added by bias b and become the output Y at the next layer after an activation function F:
Y=F(W ⋅ X+b)
Previously, F is sigmoid function which is easily saturated at 1 which easily makes the gradient become zero. As the network depth increases, this effect is amplified, and thus slow down the training speed.
ReLU is then used as F, where ReLU(x)=max(x,0), to address the saturation problem and the resulting vanishing gradients. However, careful initialization, learning rate settings are required.
Without BN (Left), With BN (Right)
It is advantageous for the distribution of X to remain fixed over time because a small change will be amplified when network goes deeper.
BN can reduce the dependence of gradients on the scale of the parameters or their initial values. As a result,
Higher learning rate can be used.
The need for Dropout can be reduced.
2. Batch Normalization (BN)
Batch Normalization
During training, we estimate the mean μ and variance σ² of the mini-batch as shown above. And the input is normalized by subtracting the mean μ and dividing it by the standard deviation σ. (The epsilon ε is to prevent denominator from being zero) And additional learnable parameters γ and β are used for scale and shift to have a better shape and position after normalization. And output Y becomes as follows:
Y=F(BN(W ⋅ X+b))
To have a more precise mean and variance, moving average is used to calculate the mean and variance.
During testing, the mean and variance are calculated using the population.
3. Ablation Study
3.1 MNIST dataset
28×28 binary image as input, 3 FC hidden layer with 100 activations each, the last hidden layer followed by 10 activations as there are 10 digits. And the loss is cross entropy loss.
(a) Accuracy: With BN (Blue), Without BN (Black dotted), (b) and (c) One typical activation from last layer
BN network is much more stable.
3.2 Applying BN to GoogLeNet (Inception-v1)
Besides applying BN to Inception-v1 [3], the main difference is that the 5×5 convolutional layers are replaced by two consecutive layers of 3×3 convolutions with up to 128 filters. This is a kind of factorization mentioned in Inception-v3 [4].
Single Crop Accuracy
From the above figure, there are many settings tested:
Inception: Inception-v1 without BN
BN-Baseline: Inception with BN
BN-×5: Initial learning rate is increased by a factor of 5 to 0.0075
BN-×30: Initial learning rate is increased by a factor of 30 to 0.045
BN-×5-Sigmoid: BN-×5 but with Sigmoid
By comparing Inception and BN-Baseline, we can see that using BN can improve the training speed significantly.
By observing BN-×5 and BN-×30, we can see that the initial learning rate can be increased largely to improve the training speed even better.
And by observing BN-×5-Sigmoid, we can see that saturation problem by Sigmoid can be a kind of removed.
4. Comparison with the State-of-the-art Approaches
Comparison with the State-of-the-art Approaches
GoogLeNet (Inception-v1) is the winner in ILSVRC 2014 which has 6.67% error rate.
Deep Image from Baidu, is submitted on 13 Jan 2015, of 5.98% error rate, and with the best error rate of 4.58% with later submissions. Deep Image network is something like VGGNet without any surprise, but it proposed hardware/software co-adaptation which can have up to 64 GPU to increase the batch size up to 1024. (But due to frequent submissions which violated the rule of competition, Baidu was banned for the duration of 1 year. And they also withdrew their paper.)
PReLU-Net from Microsoft, is submitted on 6 Feb 2015, with 4.94% error rate which is the first to surpass human-level performance.
Inception-v2 / BN-Inception, is reported on 11 Feb 2015, has 4.82% error rate which has the best result in this paper.
References
[2015 ICML] [BN-Inception / Inception-v2]
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
[2015 ICCV] [PReLU-Net]
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
[2015 CVPR] [GoogLeNet / Inception-v1]
Going Deeper with Convolutions
[2016 CVPR] [Inception-v3]
Rethinking the Inception Architecture for Computer Vision
[2017 AAAI] [Inception-v4]
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
My Reviews
Review: PReLU-Net, The First to Surpass Human-Level Performance in ILSVRC 2015 (Image Classification)
Review: GoogLeNet (Inception v1) — Winner of ILSVRC 2014 (Image Classification)
Review: VGGNet — 1st Runner-Up (Image Classification), Winner (Localization) in ILSVRC 2014
|
Review: Batch Normalization (Inception-v2 / BN-Inception) —The 2nd to Surpass Human-Level…
| 20
|
review-batch-normalization-inception-v2-bn-inception-the-2nd-to-surpass-human-level-18e2d0f56651
|
2018-09-11
|
2018-09-11 15:01:47
|
https://medium.com/s/story/review-batch-normalization-inception-v2-bn-inception-the-2nd-to-surpass-human-level-18e2d0f56651
| false
| 1,167
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
SH Tsang
|
PhD, Researcher. I share what I've learnt and record what I've done. I hope you like my sharings. :)
|
aff72a0c1243
|
sh.tsang
| 195
| 16
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
9b5d7bbed87e
|
2018-02-26
|
2018-02-26 15:50:03
|
2018-03-07
|
2018-03-07 15:56:18
| 4
| false
|
en
|
2018-03-07
|
2018-03-07 15:59:32
| 2
|
18e3b47f8be1
| 4.028302
| 11
| 0
| 0
|
In our previous post, we described what a supercomputer is and how it works. Here, we would like to talk about its specific use cases. From…
| 3
|
What do supercomputers actually do?
In our previous post, we described what a supercomputer is and how it works. Here, we would like to talk about its specific use cases. From weather forecasting to neuroscience, from astrophysics to playing chess, supercomputers can help a wide range of researchers to deal with various computationally intensive tasks. How do scientists apply massive calculation power?
Typically, supercomputers are used for sophisticated, mathematically intensive scientific problems of national or universal significance. But invariably, some supercomputers are designed to do some specific jobs. For example, IBM built the Deep Blue to play chess. The machine searched through a massive database of possible chess moves and evaluated them in relation to the current situation. In 1997, this supercomputer beat chess champion, Garry Kasparov. IBM Watson machine was also designed to play a game, Jeopardy. However, Watson is now used by health insurers to predict patients’ diagnoses and treatments.
Deep Blue supercomputer plays chess with Garry Kasparov
Nowadays, supercomputers are more designed for general-purpose. The most important criteria for the problem they can solve is to be amenable to massive parallelization. This goes to say that the task can be split into smaller non sequential chunks. Supercomputers are so powerful that they can provide scientists with insight into phenomena that are too big, too little, too fast, or too slow to observe in laboratories.
In the US, National Oceanic and Atmospheric Administration (NOAA) uses the Weather and Climate Operational Supercomputing System, a supercomputer that can make weather forecast, track oceanic and space weather activities and predict rains, thunderstorms, hurricanes and other weather events. The data from observations is loaded into the computer, which then uses complex mathematical models to predict how the weather conditions might change over time. The output is the basis of almost every forecast on weather applications or broadcast across America. European scientists rely on supercomputers too.
A snapshot from the Penn State University real-time hurricane analysis and forecast system. Credit: Fuqing Zhang and Yonghui Weng at Penn State University
Supercomputers can simulate the natural world digitally. The Earth, space or a human body are studied at a very high resolution, atom by atom. For example, scientists used two supercomputers to run a simulation of human immunodeficiency virus interacting with a cell in human body. One of this is the Blue Waters, in Illinois, and the other — Titan, at Oak Ridge National Laboratory. The simulation produced almost 100 terabytes of date, so researchers needed Blue Waters again just to crunch it.
Scientists at the University of Basel used the Piz Daint, currently the third world’s third most powerful supercomputer, to discover interrelationships in the human genome in search for “memory molecules”. Eventually, the study has to lead to more efficient medical treatment for people who suffer memory disturbance diseases. Neuroscientists also use supercomputers to examine the relationship between the structure and function of the brain as well as its dynamic and physiological structures.
Another illustration of a supercomputer capability is molecular dynamics, the way molecules interact with each other. The simulations allow scientists to dock two molecules together and study their interaction. With powerful machines like the supercomputer, researchers can determine the shape of a molecule’s surface and generate an atom-by-atom picture of its geometry. This cannot just be done in a laboratory.
This image comes from a molecular dynamics simulation of lignin reprecipitation onto cellulose (blue), performed onthe Oak Ridge National Laboratory’s Jaguar XT5 supercomputer. Credit: Susan Gregurick.
Asides from modeling natural phenomena, supercomputers can perform a wider range of functions, like figuring out how a bomb or chemical weapon would disperse its harmful elements around. The IBM’s Sequoia machine, for example, is used for nuclear weapon security.
Additionally, Astrophysicists use supercomputers as “time machines” to investigate the past and the future of our universe. In 2000, the Blue Horizon, a supercomputer deployed in San Diego, powered a simulation of the collision of two galaxies: Andromeda and our own Milky Way. Although this is not expected to happen in the next three billion years, the results can be seen now.
In spite of these functions, supercomputers are not always used in the best way they are supposed to. Russian engineers were busted for mining crypto-currencies with the supercomputer of the Federal Nuclear Center, a high-security organization.
Distributed General Supercomputing
The good thing about any computer is that it is a general-purpose machine, which means you can use it in different ways. You can send emails, play games or edit photos by running a different program. Theoretically, a general-purpose supercomputer can be used for absolutely anything. The demand for computing is growing at a fast pace. At the same time, our desktop computers and laptops can be used for a significant portion of the day.
With the block-chain technology, some companies now propose the solution or project — , where participants can lend and borrow computing resources and make money in the process. An example is the Altumea, a decentralized platform where researchers, scientists or artists, who needs to do high-performance calculation, can buy computing power directly from individual GPU owners, who do not use them, round-the-clock. This allows executing large parallel applications significantly cheaper in comparison to a traditional supercomputer. The similar model is used at distributed computing platforms like SETI@Home or Golem.
Of course, this not an ultimate list of supercomputer’s application. Quantum mechanics, oil and gas exploration, molecular modeling, cryptanalysis and physical simulations such as airplane and spacecraft aerodynamics are just a few other areas mention. We’ll probably see more and more of them in the nearest future.
|
What do supercomputers actually do?
| 139
|
what-do-supercomputers-actually-do-18e3b47f8be1
|
2018-03-15
|
2018-03-15 09:38:28
|
https://medium.com/s/story/what-do-supercomputers-actually-do-18e3b47f8be1
| false
| 882
|
Altumea blog
| null | null | null |
altumea
| null |
altumea
| null |
altumea_io
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Olena
|
Commucation wizard @Altumea
|
6cf5a82cd14f
|
olena_5048
| 27
| 5
| 20,181,104
| null | null | null | null | null | null |
0
|
import numpy as np
data = np.array([
[4, 3], [0, 0], [2, 4], [3, 4], [5, 4], [-2, 1], [-3, 0], [-3, -3], [8, 12], [11, 11], [9, 10]
])
K = 3
tol = 0.001
max_iter = 25
centroids = {}
for i in range(K):
centroids[i] = data[i]
groups = {}
for i in range(max_iter):
# step 1
for j in range(K):
groups[j] = []
# step 2
for feature_set in data:
# step 2.1
distances = [np.linalg.norm(feature_set - centroids[centroid_key]) for centroid_key in centroids]
# step 2.2
group = distances.index(min(distances))
groups[group].append(feature_set)
# step 3
old_centroids = dict(centroids)
# step 4
for j in range(K):
centroids[j] = np.average(groups[j], axis=0)
# step 5
optimized = True
for centroid_key in centroids:
old_centroid = old_centroids[centroid_key]
new_centroid = centroids[centroid_key]
a = np.array(new_centroid - old_centroid)
b = np.array(old_centroid)
change = np.divide(a, b, out=np.zeros_like(a), where=b!=0)
if abs(np.sum(change * 100.0)) > tol:
optimized = False
break
if optimized:
break
| 11
|
59eb0ce5ec9e
|
2018-01-30
|
2018-01-30 06:00:11
|
2018-05-01
|
2018-05-01 08:00:40
| 3
| false
|
en
|
2018-09-29
|
2018-09-29 00:27:47
| 2
|
18e407183c0f
| 3.814151
| 4
| 0
| 1
|
We’ve been talking classification for a while now — from K Nearest Neighbors to Naive Bayes to Support Vector Machines. In this post, we’ll…
| 3
|
ML Chops: K Means
We’ve been talking classification for a while now — from K Nearest Neighbors to Naive Bayes to Support Vector Machines. In this post, we’ll be looking at clustering using an algorithm called K Means. Let’s dive in…
Read the previous article in this series on Support Vector Machines…
ML Chops: Support Vector Machine
The Support Vector Machine (SVM) is a supervised learning model used for classification and regression. In this…medium.com
K Means is an unsupervised learning algorithm that tries to cluster data into a specified number of groups, K based on feature similarity. The algorithm works iteratively to assign each data point to one of K groups. Each group is identified by a centroid (the center point of the group) and data points are grouped with the centroid they’re closest to. The centroids are determined during training through optimization.
NB: Unsupervised learning involves training an algorithm with unclassified or unlabeled data. It’s up to the algorithm to figure out a pattern in the data and, in the case of clustering, group similar data points together.
How it works
First, we select the number of groups/clusters, K we want. To figure out what value of K to use, you can visualize the data to see how many groups it can be separated into. This is not always possible because the data can be in more than 3 dimensions. In such a case, you need to have an expectation from your data. Maybe you’re trying to separate good oranges from bad ones. In this case, K = 2.
Next, we create K centroids. The easiest way to do this is to use the first K feature sets from our data or just do a random selection.
These centroids are not optimized. The training process involves moving the centroids until they’re at the center of each group, at which point we can quite correctly determine what group a given data point should be in (the centroid to which it’s closest). With our starting centroids, we iterate through the data grouping each point to the centroid it’s closest to. Afterwards, we change the centroids by taking the mean of all the data points in each group.
This process continues repetitively until we notice the centroids are not changing much or at all. This means we’re optimized!
Code
First things first, the data
Visually, the above data looks like this
From the graph you can easily see the data can be grouped into 3 clusters. Thus K = 3.
Notice the variables tol and max_ter. tol stands for tolerance and represents a percentage (0.001%). As we said earlier, we know we’re optimized when there’s little or no change in the centroids. If the change is greater than 0.001%, we tolerate and keep iterating.
But we can’t keep iterating forever/for too long in the case where we can’t optimize further and the change remains greater than 0.001. So we set max_iter, the maximum number of iterations we’re willing to do.
Next, let’s create the centroids. Their initial values would be the first K data points.
Now unto the meat of the algorithm which is the optimization
Let break things down step by step…
Step 1
Initialize the groups dictionary with K empty arrays. This ensures groups is empty at the start of each iteration.
Step 2.1
Iterate through the data, calculating the euclidean distance of a given feature set from all the centroids.
Step 2.2
The feature set is added to the group of the centroid it’s closest to.
Step 3
Store the current centroids in old_centroids. The centroids are going to change soon. But we need the old centroids to see how much has changed since the last iteration.
Step 4
Calculate the new centroids by taking the average of all the feature sets in each centroid group.
Step 5
Check if the change in any of the centroids is greater than 0.001%. If yes, then we’re not yet optimized.
Here’s the full code for your perusal. It includes some lines of matplotlib code to visualize the data.
Check out the ML Chops repo for a class-based implementation and an example with real world data: https://github.com/nicholaskajoh/ML_Chops/tree/master/k-means.
If you have any questions, concerns or suggestions, don’t hesitate to add a response! 👍
|
ML Chops: K Means
| 14
|
ml-chops-k-means-18e407183c0f
|
2018-09-29
|
2018-09-29 00:27:47
|
https://medium.com/s/story/ml-chops-k-means-18e407183c0f
| false
| 865
|
Web development and Machine Learning tutorials, tech career tips and life from the lens of Computer Science.
| null |
nicholaskajoh
| null |
Alpha Coder
|
kajohterna@gmail.com
|
alpha-coder
|
WEB DEVELOPMENT,MACHINE LEARNING,TECHNOLOGY,CAREER ADVICE,COMPUTER SCIENCE
|
nicholaskajoh
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Nicholas Kajoh
|
Software Engineer, Technical Writer, Teacher.
|
cef8502bd83b
|
nicholaskajoh
| 233
| 49
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-11
|
2017-12-11 06:11:27
|
2018-03-02
|
2018-03-02 03:38:05
| 1
| false
|
th
|
2018-03-02
|
2018-03-02 03:38:05
| 0
|
18e5d946cf06
| 0.649057
| 20
| 1
| 0
|
Big Data คือ ข้อมูลขนาดใหญ่มากจนซอฟต์แวร์หรือฮาร์ดแวร์ธรรมดานั้นไม่สามารถที่จะจัดการหรือวิเคราะห์ได้อย่างมีประสิทธิภาพ
| 4
|
Big Data คือ อะไรกันแน่
Big Data คือ ข้อมูลขนาดใหญ่มากจนซอฟต์แวร์หรือฮาร์ดแวร์ธรรมดานั้นไม่สามารถที่จะจัดการหรือวิเคราะห์ได้อย่างมีประสิทธิภาพ
ฺBig Data คือ การรวบรวมข้อมูลทั้ง Structured (พวกที่เก็บในโครงสร้างตารางข้อมูล) และ Unstructured (พวกที่เป็น text ยาวๆ รูปภาพ และ วิดีโอต่างๆ) มาทำการประมวลวิเคราะห์ข้อมูลและนำไปใช้ประโยชน์
Big Data คือ 4Vs ที่ทุกคนพูดถึงกัน ซึ่งได้แก่ Volume (ข้อมูลขนาดใหญ่) Velocity (ข้อมูลที่เกิดและไหลเข้าสู่การจัดเก็บด้วยความเร็วสูง) Variety (ข้อมูลที่มีความหลากหลายในรูปแบบ) Veracity (ข้อมูลที่มีระดับคุณภาพปะปนกันไป)
ฺBig Data คือ buzzword ที่ทุกคนพูดถึงตลอดเวลา และใช้เป็น Marketing Term ในการสร้างภาพ (น่าเบื่ออออ)
Big Data คือ Big Trend ที่ทุกคนทุกองค์กรพูดถึงอย่างมากในปี 2017 ที่ผ่านมา แต่เป็นสิ่งที่มีคนเข้าใจน้อยมากว่า ตกลง Big Data คืออะไรกันแน่
Big Data คือ ไม่ใช่การที่เราซื้อ Hardware จำนวนมากเพื่อมาเก็บข้อมูลให้ได้เยอะที่สุด ไม่ใช่การถกเถียงว่าเราจะเก็บข้อมูลอะไรดี ไม่ใช่การมานั่งภูมิใจว่า เรามีข้อมูลมากมายมหายศาล
Big Data คือ การพยายามสร้างมูลค่าของธุรกิจจากการนำเข้ามูลจำนวนมากทั้งภายในและภายนอกองค์กร มาวิเคราะห์ประมวลผล (ไม่ใช่แค่เก็บเฉยๆ)
Big Data คือ การสร้างทักษะและความรู้ให้กับทีมงานเพื่อให้สามารถจัดการข้อมูลปริมาณขนาดใหญ่ได้ และเข้าใจเชิงลึกถึงข้อมูลขนาดใหญ่ ไม่ใช่แค่หลับหูหลับตาจับข้อมูลโยนเข้าถัง
Big Data คือ งานของทุกคน ไม่ใช่แค่งานของไอที หรือ ทีมวิเคราะห์ข้อมูล งานนั้นเริ่มตั้งแต่ตัดสินใจว่าจะเก็บข้อมูลอันไหน หรือ ไม่เก็บอันไหน จะเก็บไว้นานเท่าไหร่ จะเก็บไว้ที่ไหนอย่างไร จะเอาไปใช้อย่างไร
Big Data คือ การลงทุนระยะยาว ไม่ใช่การตั้งงบโครงการ แล้วเรียก vendor มา demo ระบบให้ดู แล้วจ่ายเงินเพื่อให้มีระบบซักระบบนึง
ท้ายสุดที่อยากฝากไว้ ก็คือ
Big Data คือ การที่เราเริ่มก้าวจากการให้ความสนใจกับประโยชน์ในการใช้ข้อมูลอย่างแท้จริง จนทำให้เกิดความเข้าใจว่า เราควรต้องมีระบบในการบริหารจัดการข้อมูลที่ดีและสามารถรองรับการเติบโตของข้อมูลที่จะวิ่งไปถึงปริมาณใหญ่ๆ จนสามารถจัดการข้อมูลได้ถึงระดับ real-time และลดการใช้แรงงานในการประมวลผลข้อมูล สร้างรายงาน หรือการวิเคราะห์ข้อมูล
|
Big Data คือ อะไรกันแน่
| 43
|
big-data-คือ-อะไรกันแน่-18e5d946cf06
|
2018-05-28
|
2018-05-28 05:17:18
|
https://medium.com/s/story/big-data-คือ-อะไรกันแน่-18e5d946cf06
| false
| 119
| null | null | null | null | null | null | null | null | null |
Big Data
|
big-data
|
Big Data
| 24,602
|
Thanachart Ritbumroong
|
foodie : traveler : data junkie
|
d83e27d0a896
|
thanachart.rit
| 916
| 443
| 20,181,104
| null | null | null | null | null | null |
0
|
import numpy as np
from numba import jit
from tqdm import tqdm
# 決定木ごとに使用する特徴量とサンプルの選択
@jit('Tuple((b1[:,:], b1[:,:]))(i8, i8, i8)')
def _bootstrap_sample(n_samples, n_estimators, n_features):
n_features_forest = int(np.floor(np.sqrt(n_features)))
features = np.random.choice(n_features, size=(n_estimators, n_features_forest))
indexes = np.random.randint(2, size=(n_estimators, n_samples)).astype(np.bool)
columns = np.zeros((n_estimators, n_features)).astype(np.bool)
for i in range(n_estimators):
col = np.zeros(n_features).astype(np.bool)
col[features[i]] = True
columns[i] = col
return indexes, columns
@jit('f8(f8[:], i8[:], b1[:], i8[:])', nopython=True)
def _gini_index(w, label, index, classes):
w = w[index]
ratio = np.array([w[label==c].sum() / w.sum() for c in classes])
return (ratio * (1 - ratio)).sum()
@jit('f8(f8[:], i8[:], b1[:], i8[:])', nopython=True)
def _cross_entropy(w, label, index, classes):
w = w[index]
ratio = np.array([w[label==c].sum() / w.sum() for c in classes])
return (ratio[ratio!=0] * np.log(ratio[ratio!=0])).sum()
@jit('f8(f8[:], i8[:], i8[:], i8[:], b1[:], b1[:], b1)', nopython=True)
def _calculate_I(w, classes, y_left, y_right, index_left, index_right, use_entropy=True):
n_left = len(y_left)
n_right = len(y_right)
n_total = n_left + n_right
if use_entropy:
left_gain = n_left / n_total * _cross_entropy(w, y_left, index_left, classes)
right_gain = n_right / n_total * _cross_entropy(w, y_right, index_right, classes)
else:
left_gain = n_left / n_total * _gini_index(w, y_left, index_left, classes)
right_gain = n_right / n_total * _gini_index(w, y_right, index_right, classes)
return right_gain + left_gain
@jit('Tuple((i8, f8))(f8[:,:], i8[:], f8[:], b1[:], b1[:], i8[:])', nopython=True)
def _split(X, y, w, index, feature_ids, classes):
d = np.inf
threshold = 0
feature_id = 0
for fi in np.arange(len(feature_ids))[feature_ids]:
xis = X[:, fi]
unique = np.unique(xis[index])
thresholds = (unique[1:] + unique[:-1]) / 2
for th in thresholds:
index_left = np.logical_and(index, xis<th)
index_right = np.logical_and(index, xis>=th)
y_left = y[index_left]
y_right = y[index_right]
I = _calculate_I(w, classes, y_left, y_right, index_left, index_right, True)
if I < d:
d = I
threshold = th
feature_id = fi
return feature_id, threshold
@jit('Tuple((i8[:], b1[:], i8[:], b1[:]))(f8[:,:], i8[:], b1[:], i8[:], i8, f8)')
def to_left_and_right(X, y, index, classes, feature_id, threshold):
index_left = np.logical_and(X[:,feature_id] < threshold, index)
index_right = np.logical_and(X[:,feature_id] >= threshold, index)
classes_left, _left = np.unique(y[index_left], return_counts=True)
classes_right, _right = np.unique(y[index_right], return_counts=True)
count_left = np.zeros(len(classes))
count_right = np.zeros(len(classes))
for i,c in enumerate(classes):
count_left[i] = _left[classes_left==c].sum()
count_right[i] = _right[classes_right==c].sum()
return count_left, index_left, count_right, index_right
def hinge(y):
norm = np.linalg.norm(y, axis=1)
return norm / norm.sum()
def exponential(y, margin):
norm = np.linalg.norm(-y*np.exp(-margin), axis=1)
return norm / norm.sum()
def savage(y, margin):
e_2m = np.exp(margin)**2
grad = -y * e_2m / (1 + e_2m)**3
norm = np.linalg.norm(grad, axis=1)
return norm / norm.sum()
def tangent(y, margin):
grad = 4 * y * (2 * np.arctan(margin) - 1) / (1 + margin**2)
norm = np.linalg.norm(grad, axis=1)
return norm / norm.sum()
class _Node:
def __init__(self, class_count, index):
self.class_count = class_count
self.index = index
self.feature_id = 0
self.threshold = 0.0
self.is_leaf = True
def _grow(self, X, y, w, feature_ids, classes):
# 木の末端なら分裂する
if self.is_leaf:
# このノードに分類されたデータの数が一つ、クラスが一つもしくはデータの種類が1つのみの場合に分裂を中止する。
X_ = X[np.ix_(self.index, feature_ids)]
y_ = y[self.index]
if self.index.sum()==1:
return
if (y_==y_[0]).all():
return
if (X_==X_[0]).all():
return
self.feature_id, self.threshold = _split(X, y, w, self.index, feature_ids, classes)
count_left, index_left, count_right, index_right = to_left_and_right(X, y, self.index, classes, self.feature_id, self.threshold)
self.left = _Node(count_left, index_left)
self.right = _Node(count_right, index_right)
self.is_leaf = False
else:
self.left._grow(X, y, w, feature_ids, classes)
self.right._grow(X, y, w, feature_ids, classes)
class DecisionTree:
def __init__(self, index, feature_ids):
self._classes = None
self.index = index
self.feature_ids = feature_ids
self.top = None
def predict(self, X):
prob = self.predict_prob(X)
return self._classes[np.argmax(prob, axis=1)]
def predict_prob(self, X):
return np.array([self._predict_one(x) for x in X])
def _predict_one(self, xs):
node = self.top
while not node.is_leaf:
is_left = xs[node.feature_id] < node.threshold
node = node.left if is_left else node.right
return node.class_count / node.class_count.sum()
def grow(self, X, y, w):
if self.index is None: return
y_ = y[self.index]
if self.top is None:
self._classes, count = np.unique(y_, return_counts=True)
self.top = _Node(count, self.index)
self.top._grow(X, y, w, self.feature_ids, self._classes)
class AlternatingDecisionForests:
def __init__(self, n_estimators, max_depth):
self.T = n_estimators
self.D = max_depth
def _update_weight(self, X, y):
preds = np.array([e.predict_prob(X) for e in self.estimators])
preds = np.mean(preds, axis=0)
one_hot_ = np.zeros((self.N, len(self._classes))) - 1
index = (np.arange(self.N), y)
one_hot_[index] = 1
margin = one_hot_ * preds
self._w = tangent(one_hot_, margin)
def fit(self, X, y):
self.N = X.shape[0]
self.n_features = X.shape[1]
self._w = np.ones(self.N) / self.N
self._classes = np.unique(y)
self.n_features_forest = int(np.floor(np.sqrt(self.n_features)))
self.estimators = []
#Bootstrap
indexes, columns = _bootstrap_sample(self.N, self.T, self.n_features)
for ind, col in zip(indexes, columns):
self.estimators.append(DecisionTree(ind, col))
#fitting
for i in tqdm(range(self.D)):
for e in self.estimators:
e.grow(X, y, self._w)
self._update_weight(X, y)
def predict(self, X):
probs = np.array([e.predict_prob(X) for e in self.estimators])
prob = np.mean(probs, axis=0)
y_pred = self._classes[np.argmax(prob, axis=1)]
return y_pred
def score(self, X, y):
y_pred = self.predict(X)
return (y==y_pred).mean()
from sklearn.datasets import load_iris
import pandas as pd
from adfs import AlternatingDecisionForests
from sklearn.model_selection import train_test_split
from time import time
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
# 必要ならシャープを外してデータを確認
# df.head()
X = df.loc[:,iris.feature_names].values
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
adfs = AlternatingDecisionForests(10, 10)
start = time()
adfs.fit(X_train, y_train)
end = time()
print("accuracy: ", adfs.score(X_test, y_test))
print("fitting took {} secs".format(end-start))
| 68
| null |
2018-09-25
|
2018-09-25 06:44:14
|
2018-09-26
|
2018-09-26 16:04:14
| 1
| false
|
ja
|
2018-09-26
|
2018-09-26 16:08:50
| 3
|
18e688f40b7c
| 16.954
| 0
| 0
| 0
|
こんにちは。Medium初回エントリーですが頑張って書いていきたいと思います。今回はモデルの簡単な実装と、シンプルなデータセットを用いた挙動の確認を行います。根本的な実装ミスの指摘や「こう書いたらもっと速度上がるんじゃない?」といったアドバイスお待ちしております。
| 5
|
Alternating Decision ForestsをPythonで実装してみた
こんにちは。Medium初回エントリーですが頑張って書いていきたいと思います。今回はモデルの簡単な実装と、シンプルなデータセットを用いた挙動の確認を行います。根本的な実装ミスの指摘や「こう書いたらもっと速度上がるんじゃない?」といったアドバイスお待ちしております。
今回はタイトル通りRandom Forestの派生アルゴリズムであるAlternating Decision Forestsをpythonで実装してみました。全体の構成は概要、実装、実践の3部に別れています。まずは概要から見ていきましょう。
概要
まずはRandom Forest(RFと以後略)とAlternating Decision Forests(ADFsと以後略)の違いについて大雑把にまとめると以下の2つが挙げられます。
(1)global loss functionの導入
学習器全体として損失関数を導入することによって訓練データに対応した重みを定義することができ、正しく分類できなかったデータを優先的に分類することができます。
(2) 幅優先的にノードを追加
RFにおいては決定木が個々にノードの分裂を行っていましたが、ADFsにおいては最初にそれぞれの決定木に1つのノードを用意し、指定した深度に到達するまで全決定木のノードを同時に分裂させていきます。
全体の損失関数を最小化するようにノードを追加することで従来のRFよりも精度が上がるようです。
実装
いよいよ実装です。アルゴリズムの詳細は元の論文を参照してみてください。numba+numpyで高速化して実装しています。詳しい使い方はこちらを参照してください。
まずは必要なモジュールを読み込みます。importに失敗したらpipなどを使って各自インストールして下さい。
ブートストラップサンプリング
各弱学習器に用いる学習データおよび特徴量をランダムに選択します。
データの重みを考慮に入れた不純度の計算
ノードを分裂する際に指標となる不純度を計算します。不純度の計算にはデータセットの重みが用いられ、このことによって前回のイテレーションで間違えられたサンプルが優先的に考慮され、分割に用いる特徴量と閾値が決定されることになります。計算手法はgini indexとentropyの2種類を用意しています。
ノードの分裂時に用いる特徴量と閾値の決定
上述した不純度を最小化する特徴量と閾値を決定します。
データセットの分割
与えられた特徴量と閾値を用いてデータセットを二つに分割します。return_countsを指定したnp.uniqueはjitの対象外だったのでjitのオプションであるnopython=Trueは追加していません。
重みの更新式
データの重みを更新する際に用います。今回はhinge, exponential, savage, tangentの4つの計算式を用意しています。デフォルトとしてtangentを設定しています。
AlternatingDecisionForestsクラス、DecisionTreeクラス、Nodeクラスの定義
実際にモデルのロジックを書いていきます。まずはNodeクラスです。
次にDecisionTreeクラスです。
最後にAlternatingDecisionForestsクラスです。大雑把なアルゴリズムは、全ての決定木にルートノードを用意したのちデータの重みωを一様になるように初期化し、ノードの追加、ωの更新を最大深度に到達するまで繰り返すというものです。
コード見にくくてすいません。MediumでMarkdownで書く方法ありましたら紹介していただけると幸いです。
実践
実際にモデルを用いて予測をして見ましょう。用いるデータセットはかの有名なirisデータセットです。まずは必要なモジュールを読み込みます。今回は同じディレクトリ内にあるadfs.pyというファイルに上に書いたクラスを実装したのでコードは以下のようになります。
次にデータを読み込みます。
読み込んだデータを訓練用データとテスト用データに分割します。
モデルをトレーニングしてスコアを表示します。
以下のような結果が出れば成功です。
Fig.1 実行結果
シンプルなデータセットを使っているのに精度があまり良くないのが気になります。どこか実装にミスがあるのかもしれません。fittingにかかった時間ももっと短くできるのではないかと思います。
長くなりましたが以上です。まだ完全に論文の趣旨を理解できておらず、簡単に読んだイメージで実装してしまったので、「何やってんだこいつ」とツッコミたくなる場面があったかもしれません。
次回以降しばらくはRFの派生アルゴリズム実装編になる予定です。次回はDecision Junglesについて書いていきたいと思います。それでは次回お楽しみに〜
参考文献
S. Schulter, P. Wohlhart, C. Leistner,
A. Saffari, P. M. Roth and H. Bischof "Alternating Decision Forests" In CVPR, 2013
numba 0.40.0 documentation
https://numba.pydata.org/numba-doc/dev/index.html (9/26/2018)
|
Alternating Decision ForestsをPythonで実装してみた
| 0
|
alternating-decision-forestsをpythonで実装してみた-18e688f40b7c
|
2018-09-26
|
2018-09-26 16:08:50
|
https://medium.com/s/story/alternating-decision-forestsをpythonで実装してみた-18e688f40b7c
| false
| 858
| null | null | null | null | null | null | null | null | null |
Python
|
python
|
Python
| 20,142
|
Akifumi Hamada
|
今興味のある機械学習関連のことをポストしていきます。生命科学を専攻する学生です。
|
97cce791539e
|
akifumihamada
| 4
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-24
|
2018-08-24 17:29:54
|
2018-08-24
|
2018-08-24 18:02:04
| 2
| false
|
es
|
2018-08-24
|
2018-08-24 18:12:59
| 3
|
18e69a19034f
| 0.949371
| 3
| 0
| 0
|
El Instituto Nacional de la Juventud (INJUVE) ha organizado el JUVENTOUR 2018 en CIFCO con el lema “El Salvador sos vos”, en el cuál SPOT…
| 5
|
Análisis facial en CIFCO: JUVENTOUR 2018
El Instituto Nacional de la Juventud (INJUVE) ha organizado el JUVENTOUR 2018 en CIFCO con el lema “El Salvador sos vos”, en el cuál SPOT dará cobertura de Reconocimiento Facial.
Juventour 2018, se realiza todos los años para acercar los servicios que prestan las instituciones del Estado, organismos internacionales y ONG´s en temas como: Educación, salud, cultura, artística, empleo, empleabilidad, emprendimiento, seguridad, deportes y participación juvenil, a la población joven salvadoreña.
Ante un evento de gran magnitud (con más de 50,000 jóvenes asistentes de todas partes de El Salvador) SPOT fue seleccionado para instalar sus cámaras con reconocimiento facial durante todo el evento con el objetivo de:
Brindar seguridad en el desarrollo de las actividades
Conteo en tiempo real de las asistencias al evento
Análisis demográfico de visitas
Conocimiento de mayores horas de afluencia de personas
Conoce más de SPOT: LinkedIn / Facebook/ Web
|
Análisis facial en CIFCO: JUVENTOUR 2018
| 101
|
análisis-facial-en-cifco-juventour-2018-18e69a19034f
|
2018-08-24
|
2018-08-24 18:12:59
|
https://medium.com/s/story/análisis-facial-en-cifco-juventour-2018-18e69a19034f
| false
| 150
| null | null | null | null | null | null | null | null | null |
El Salvador
|
el-salvador
|
El Salvador
| 614
|
SPOT
| null |
ca115f38df93
|
spotcloudio
| 12
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-21
|
2018-02-21 01:39:11
|
2018-02-21
|
2018-02-21 04:34:22
| 0
| false
|
en
|
2018-02-23
|
2018-02-23 21:26:57
| 6
|
18e78180db7e
| 6.041509
| 60
| 0
| 0
|
MIT Technology Review recently ran an article titled “Why even a moth’s brain is smarter than an AI”. Before I go into details about the…
| 3
|
Do electric moths sniff on MNIST?
MIT Technology Review recently ran an article titled “Why even a moth’s brain is smarter than an AI”. Before I go into details about the “moth brain”, let me just start with an unsurprising spoiler: the above article is as wrong as its title is obnoxiously provocative. Even if the moth brain — or some mathematical model associated with it — is “smarter” (i.e. more sample efficient) than current deep learning approaches, there is no clear evidence for that presented anywhere. Neither in the cited paper, nor in the follow up paper “A moth brain learns to read MNIST”. (If you only care about the analysis of their MNIST results, then jump to the last few paragraphs of this article).
Besides the sensationalistic title of the MIT Technology Review article, the seemingly interesting fact about the research reported there is that Charles Delahunt et al. claim that they managed to create a mathematical model of some tiny section of moth’s brain that can learn from significantly less training instances than current machine learning approaches. This is a big claim. Also this is a topic that a lot of researchers (including myself) are naturally interested in: In fact, it is folklore in some circles that deep learning is notoriously inefficient when it comes to learning from limited amount of training data. So, when this proverbial fact arises, researchers nod wisely, pretend it was a big problem and go back to their huge data sets happily. It is like the weather: everybody complains about it, but nobody does anything about it. Or rather: a lot of people claim to work on it, still it never seems to improve. It is a deja vu feeling for me, since until 2014, everybody was complaining about vanishing gradients until they suddenly did not. It turned out, one just needed to try training extremely deep neural networks, but instead of actually doing it, most people just kept on complaining about vanishing and exploding gradients. With ReLU activation and proper initialization, the problem proved to be much tamer than perceived before.
I am deeply suspicious of folklore knowledge, even more suspicious than of people coming from biology claiming that they suddenly outperform state of the art machine learning approaches by modelling some biological systems. My suspicion about the claimed poor sample efficiency of deep learning stems from personal experience. Even in the early days of deep learning based computer vision, in 2013, we successfully trained object detection systems for the VOC benchmark utilizing only just a couple of thousand training images, without pretraining on ImageNet, while matching or exceeding the then best (non-deep learning) techniques. Of course, training from so little data needed a lot of augmentation and regularization, still it was clearly doable with enough patience and skill. I would rather say: deep learning just scales better with the size of the data-set than almost all known competing approaches. In fact, I think understanding the limits of deep learning for large data is more fruitful than complaining about its poor generalization from too little.
But let us get back to the paper with the striking title “Biological Mechanisms for Learning: A Computational Model of Olfactory Learning in the Manduca sexta Moth, with Applications to Neural Nets” by Charles Delahunt et al. First, it is my personal opinion that this line of research is exciting. It is both extremely very relevant (for AI as well) and has great potential. I really like several aspects of the paper and I think if I would review it for some conference or journal, I would like it to appear, but with more careful claims. My criticism is expressed here only because I think the work is a great step towards much more research in this domain and it is worth a close look. On the other hands, the paper seems to have a few flaws, but on the bright side, it is written very well, with a lot of care and thorough reporting, containing valuable experimental results. Also, the authors plan to publish their computational model soon, which will foster future experiments in this domain.
Here, I just want to point out their computational model is verified by analyzing statistical features of their model and by observing that certain high level neural firing statistics are in agreement with that observed in moths. Also, they run experiments to check that the mathematical model is capable of some learning behaviour. This is clearly useful as a general high level test, but still a relatively weak evidence for the claim that the model faithfully emulates all important characteristics of that part of the moth brain. For example, given that their model is a slight modification of the standard spiking neural networks abstraction (with some simple connection and firing statistics modelled after the neural connections found in the moth brain), it really boils down to the detailed quality of the model: the consistent reaction to the same stimuli and the quality of the learning performance: speed, sample efficiency and retaining of old knowledge. Unless these properties are not measured in a comparable manner and verified in more detail, it is very hard to argue for the faithfulness of the model. Still, as mentioned above, the paper presents a well founded initial working hypothesis that hopefully triggers more new research in large scale mathematical modelling of biological neural systems.
Given that I am not an expert in biology I don’t want to go in details about the above paper, but I wanted to express some skepticism on a the aptly named follow up extended abstract “A moth brain learns to read MNIST”. This is more in line with my background and I can ask more well founded questions or suggest constructive criticism. Again, I am thankful that the authors chose to publish this work early on, since it helps to evaluate their approach much more substantially, not just based on some observations on a domain that has not been studied by AI researchers (smells perceived by moths), but on the most extensively studied machine learning data-set in existence: MNIST.
Delahunt et al claims that they outperform most baselines on MNIST. For example they claim that they reach around 75% accuracy while utilizing ten examples per class (that is for a total of 100 supervised samples).
There are quite a few bad things about MNIST —for example that experimental conclusions on MNIST rarely transfer to other data sets — the good thing about it is that there are a lot of baseline results for it. It is known that ladder networks by Harri Valpola do a great job at semi-supervised learning on MNIST. Their paper reports about 99% accuracy by utilizing only 100 labelled training samples. This is vastly better than the reported ~75% for the moth brain inspired models. Not to mention their best “baseline” result of 60% with SVM and even worse results with convolutional networks. The ladder network accuracy comes with the caveat that it utilizes the rest of the MNIST (50000 letters) in unlabelled form, while MothNet did not. This is not a deal breaker in real life applications, since unlabelled training data is very cheap to come by. Also arguably, humans and animals learn from a huge amount of unlabelled data as well, so it seems to be a good idea to leverage it.
I got curious: If I don’t use any unlabelled data, what baseline can I get with so few training instances on MNIST with minimum effort? For lack of time, I constrained myself on training only a single linear layer of SoftMax classifier, straight over the image pixels. I have tuned only three basic hyper-parameters. Knowing that increasing the batch size leads to worse generalization, I have resorted to training with batch-size 1. When using 90% dropout on the input pixels, weight decay 0.00001 and Adam optimizer with learning rate 0.1, I could easily get to an accuracy of 78%. I don’t claim that this experiment is scientific or the best possible result, but it was my third try. This is already marginally better than the claimed result for MothNet and significantly better than their baseline. It took about 10 minutes of experimentation. With similar settings, 60% accuracy could be reached for 3 samples per class and 45% accuracy for one sample per class. Now these numbers are worse than what they claim for “Moth Fast” (still better then their “Moth Natural” numbers and much better then their reported baseline). Admittedly, I could not beat the 70% accuracy for the 1 sample/class case easily, but I have not even started to try any deep network at all.
Given the fact that their results (and especially the baselines) are so easy to beat by a large margin, even with a single linear layer — although they have reported SVM results that should be quite comparable to my experiments —it made me weary about their deeper (CNN) baselines as well. I guess they have not tried dropout at all, which is a bit unfair since dropout is a well established technique and MothNet uses noise for training, too.
However, the higher level moral of the story: if the machine learning community would like to get more serious about sample efficiency (which it claims it wants), we should stop complaining about the weather, but establish some standard baseline results for training with few samples that can be tracked and referred to. Otherwise we might just walk around with umbrellas while the weather is sunny and no clouds are in sight.
|
Do electric moths sniff on MNIST?
| 395
|
do-electric-moths-sniff-on-mnist-18e78180db7e
|
2018-05-14
|
2018-05-14 13:30:18
|
https://medium.com/s/story/do-electric-moths-sniff-on-mnist-18e78180db7e
| false
| 1,601
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Christian Szegedy
| null |
db801026a361
|
christian.szegedy
| 55
| 8
| 20,181,104
| null | null | null | null | null | null |
0
|
flask = "*"
| 1
|
bd6f46cbab91
|
2018-03-11
|
2018-03-11 16:04:29
|
2018-03-11
|
2018-03-11 17:37:04
| 4
| false
|
en
|
2018-03-11
|
2018-03-11 17:37:04
| 6
|
18e7bbcb8405
| 3.24717
| 1
| 0
| 0
|
I love podcasts. It’s true — I began listening to them around 2012 and never turned back. Despite my lengthy love for the media, I…
| 5
|
Podcast Recommender
Visit the Podcast Recommender: https://podcast-recommender.herokuapp.com/
I love podcasts. It’s true — I began listening to them around 2012 and never turned back. Despite my lengthy love for the media, I recognize that many people have not yet indulged in the finer side of radio. Podcasts are essentially on-demand radio shows hosted via platforms such as the Apple Podcasts app or Stitcher. The Podcasts app is similar to iTunes, and categorizes podcasts by genre, allowing the user to view those podcasts that fall under a particular category. However, there does not seem to be a recommendation engine available in the app to suggest similar podcasts that the listener might enjoy. I am sure that a recommender exists somewhere, but I’ll take an approach epitomized by Gavin Belson in Silicon Valley:
For this reason, I decided to applied unsupervised machine learning techniques to build my own!
Model Building
In order to make my podcast recommender, I need a decent amount of text data. Finding transcripts or, at the very least, episode descriptions should be enough to formulate a model. After searching the internet for transcripts, I had to settle for episode descriptions available through the iTunes API. Pressed for time, I opted to use a github repository with metadata and episode descriptions for approximately 10k podcasts. Due to the nature of this project, it was prudent to store the data in a MongoDB database I created using an AWS EC2 instance. With the data stored in my database, I used pyMongo to access it for analysis using my jupyter notebook. Once there, the data can be cleaned using Natural Language Processing (NLP) techniques available through the Python package NLTK. With the text processed, I used to create a latent semantic analysis (LSA) model to develop word frequencies for each podcast. LSA is a technique applying Singular Value Decomposition (SVD) to text data. It allows us to reduce dimensionality and map podcasts to vectors that can then be evaluated using a similarity metric such as cosine similarity. Cosine similarity evaluates two vectors by examining the angle between and magnitude of two vectors.
Cosine Similarity
At this point, calculating cosine similarities between each podcast and every other podcast should unearth similar podcasts, which will serve as recommendations!
Creating an Application
With recommendations in hand, I want to create a flask application that allows a user to enter one of the 10k podcasts included in the system and see a randomly chosen podcast that falls within the 30 most similar as derived from the cosine similarities above. As I am not an expert in bootstrap.js, I searched for publicly available templates as the foundation for my application, and came across some fantastic resources at www.w3schools.com. I customized the layout and realized during this process that I would need to implement an autocomplete search, as my podcasts were stored in a particular manner. This led me to discover the twitter typeahead JavaScript library. By making some additional tweaks for my application, I had a fully functioning flask app that I could access locally!
Photo by Andre Hunter on Unsplash
Using the app locally was fun, and served my purposes for discovering new podcasts, but I really wanted to make it publicly available for everyone to use. A friend recommended that I look into Heroku, a free service for web application deployment.
Heroku Deployment
Walkthrough of the App
There are many available Heroku tutorials that will demonstrate the process of flask app deployment, so I will not detail that in this post. However, I will share a particular hurdle that significantly delayed the deployment of my app. Assuming that you create a virtual environment for your flask application, ensure that the Pipfile in your directory includes all of the necessary packages for your application to work. I consistently received an H10 error code through Heroku that was due to the fact that my Pipfile did not have the line:
Once that issue was addressed, I was able to deploy the application. If you are interested in podcasts or intrigued by the project, I encourage you to visit my podcast recommender at https://podcast-recommender.herokuapp.com/.
|
Podcast Recommender
| 1
|
podcast-recommender-18e7bbcb8405
|
2018-03-12
|
2018-03-12 01:30:55
|
https://medium.com/s/story/podcast-recommender-18e7bbcb8405
| false
| 675
|
A blog documenting my experiences in learning and applying data science
| null | null | null |
Discovering Data Science: A Chronicle
| null |
discovering-data-science-a-chronicle
|
DATA SCIENCE,DATA,DATA ANALYSIS,BOOTCAMP,HOW TO MAKE IT
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Christopher Gillespie
|
Data Scientist at Metis — urban planning, trivia, and healthcare aficionado
|
c0c895e003d
|
crgillespie22
| 9
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
ec10e05abbed
|
2018-04-20
|
2018-04-20 08:25:43
|
2018-04-20
|
2018-04-20 09:01:08
| 0
| false
|
ko
|
2018-04-20
|
2018-04-20 09:02:30
| 11
|
18e8d74c1e0d
| 2.872
| 1
| 0
| 0
|
WentaoTian이 CortexLabs에 참여한다는 발표를 한 이후로, 다음내용을 발표하게 된 것을 자랑스럽게 생각합니다:
| 5
|
인공지능분야의 최우수 개발자 5명이 CortexLabs에 합류
WentaoTian이 CortexLabs에 참여한다는 발표를 한 이후로, 다음내용을 발표하게 된 것을 자랑스럽게 생각합니다:
인공지능분야의 최우수 개발자 5명이 CortexLabs 개발자 커뮤니티에 합류했습니다.
이 5명의 커뮤니티 개발자는 인공지능분야의 실력자들입니다. 이들은 세계 인공지능 대회에서 뛰어난 실력을 인정받고 풍부한 경험을 쌓은바 있고, 인공지능 분야에서 우수한 논문을 발표했습니다. CortexLab에 합류한 후 Cortex 퍼블릭 체인 및 모델 레이어를 구축하는 데 기여하게 될 것입니다.
여러분은 지금 Cortex가 GitHub 링크에 나와있다는 것을 확인했을지도 모릅니다. Python POC코드가 거의 완료되었고, 프레임 구조는 구체적인 합의에 따른 추론, 블록 구조 암호화 및 리눅스에 대한 일치된 알고리즘으로 입증되었습니다. 유사한 프로젝트의 표절을 막기 위해, 우리는 코드 공개를 2–3개월 지연시킬 것입니다.
이 5명의 기술자들의 이력은 다음과 같습니다.
Debut Kele Hsu
인공 지능 음향 전문가
파리 제6대학 박사학위
국가 기밀 학교의 강사
17' ACM KDD컵, 16’ CCF 빅데이터 ML, Ctrip Forecasts, 카글 데이터 마이닝 및 최적화 대회 3위
카글 세계 랭킹 59/80K+
Arie Ling
인공지능 연구 과학자
토론토 대학, 캐나다, 컴퓨터 과학.
NIPS 논문, CVPR및 ACL 첫번째 저자
Peiwen Yang
인공지능 연구 과학자
우다시티의 디자이너, 검토자, 멘토
15' 국제 수중 로봇 콘테스트 1위와 2위.
Kaggle Dog Breed Identification 상위 0.5%.
Haihong Yang
인공지능 연구 과학자
Zhejiang 대학, 컴퓨터 과학 및 기술 박사 과정 지원자
Taobao Dressing Collocation Algorithm 1단계 상위 5%
14' 미국 대학 수학 경시 대회 1위
BurnessDuan
인공지능 프레임워크 연구 개발 전문가
TFLearn 4위 기부자
MXNet / TensorFlow / Paddlepaddle 기부자
백서에 나와있는 Cortex프로젝트 일정을 순조롭게 진행하기 위해 더 많은 인공지능과 블록체인 전문가들이 합류할 것입니다. 앞으로의 발표에 주목해 주십시오!
더 자세한 정보
Cortex에 대해 더 자세히 배우고 공식 채널을 통해 기술 개념에 대해 토론하려면 언제든지 저희에게 연락하십시오.
웹사이트 : http://www.cortexlabs.ai/
백서(영문) : http://www.cortexlabs.ai/Cortex_AI_on_Blockchain_EN.pdf
트위터 : https://twitter.com/CTXCBlockchain
페이스북 : https://www.facebook.com/CTXCBlockchain/
레딧 : https://www.reddit.com/user/CTXCBlockchain/
깃허브 : https://github.com/CortexFoundation
미디엄 : https://medium.com/@CTXCBlockchain
공지채널(공식) : https://t.me/CortexLabs
공지채널(한국) : https://t.me/CortexLabsKorean
공식커뮤니티 : https://t.me/CortexBlockchain
중국커뮤니티 : https://t.me/CortexLabsZh
|
인공지능분야의 최우수 개발자 5명이 CortexLabs에 합류
| 1
|
인공지능분야의-최우수-개발자-5명이-cortexlabs에-합류-18e8d74c1e0d
|
2018-04-20
|
2018-04-20 17:56:05
|
https://medium.com/s/story/인공지능분야의-최우수-개발자-5명이-cortexlabs에-합류-18e8d74c1e0d
| false
| 291
|
AI on Blockchain - The Decentralized AI Autonomous System
| null |
CTXCBlockchain
| null |
Cortex Labs
|
support@cortexlabs.ai
|
cortexlabs
|
AI,BLOCKCHAIN,CRYPTOCURRENCY,CTXC,CORTEXLABS
|
CTXCBlockchain
|
한국어
|
한국어
|
한국어
| 84
|
BITZANTIN
|
BITZANTIN believe that blockchain technology is an innovative.
|
c12007643eec
|
bitzantin
| 23
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d0c08339b41f
|
2018-06-19
|
2018-06-19 21:12:22
|
2018-06-11
|
2018-06-11 20:55:30
| 1
| false
|
en
|
2018-06-28
|
2018-06-28 14:29:29
| 18
|
18e8e6ace401
| 1.033962
| 0
| 0
| 0
|
TWiML Talk 149
| 5
|
Problem Formulation for Machine Learning with Romer Rosales
TWiML Talk 149
In this episode, we’re joined by Romer Rosales, Director of AI at LinkedIn.
Subscribe: iTunes / SoundCloud / Google Play / Stitcher/ RSS
We begin with a discussion of graphical models and approximate probability inference, and he helps me make an important connection in the way I think about that topic. We then review some of the applications of machine learning at LinkedIn, and how what Romer calls their ‘holistic approach’ guides the evolution of ML projects at LinkedIn. This leads us into a really interesting discussion about problem formulation and selecting the right objective function for a given problem. We then talk through some of the tools they’ve built to scale their data science efforts, including large-scale constrained optimization solvers, online hyperparameter optimization and more. This was a really fun conversation, that I’m sure you’ll enjoy!
About Romer
Linkedin Engineering on Twitter
Romer on Linkedin
Mentioned in the Interview
The LinkedIn Engineering page
The Economic Graph microsite (for big-picture trends research with the World Bank, etc.)
The Economic Graph Research Project (for academic projects)
Email Volume Optimization at LinkedIn
Join us in celebrating our 2nd Birthday!
TWiML Presents: Series page
TWiML Events Page
TWiML Meetup
TWiML Newsletter
“More On That Later” by Lee Rosevere licensed under CC By 4.0
Originally published at twimlai.com on June 11, 2018.
|
Problem Formulation for Machine Learning with Romer Rosales
| 0
|
problem-formulation-for-machine-learning-with-romer-rosales-18e8e6ace401
|
2018-06-28
|
2018-06-28 14:29:29
|
https://medium.com/s/story/problem-formulation-for-machine-learning-with-romer-rosales-18e8e6ace401
| false
| 221
|
Interesting and important stories from the world of machine learning and artificial intelligence. #machinelearning #deeplearning #artificialintelligence #bots
| null |
twimlai
| null |
This Week in Machine Learning & AI
|
team@twimlai.com
|
this-week-in-machine-learning-ai
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING,PODCAST,TECHNOLOGY
|
twimlai
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
TWiML & AI
|
This Week in #MachineLearning & #AI (podcast) brings you the week’s most interesting and important stories from the world of #ML and artificial intelligence.
|
ca095fd8e66c
|
twimlai
| 292
| 33
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d48546e9a9e3
|
2018-01-25
|
2018-01-25 16:57:23
|
2018-01-25
|
2018-01-25 16:57:21
| 1
| false
|
en
|
2018-01-25
|
2018-01-25 17:10:10
| 4
|
18e9a50433bb
| 2.283019
| 1
| 0
| 0
| null | 5
|
4 New Google Adwords Features You Should Be Using
Did you know that Google Adwords updates with new tools every 3–6 months to improve campaign performance? If you aren’t keeping up, you run the risk of using out-of-date keyword and bidding strategies. In order to help you make sense of it all, here are 4 key tools in Adwords you should take advantage of in Q1:
1. Custom Intent-Targeting
After analyzing your existing campaigns, Adwords can auto-create custom intent audiences. These custom intent audiences are created from the common keywords and URLs gleaned from a user’s search behavior. For example, if someone was searching for information about knee-high boots and browsed related content (i.e. “ankle-boots”), you can then target that user while they browse in real-time. You could technically create custom audiences manually and insert related keywords and URLs — but why not harness the power of machine learning and save time?
2. Using APIs and Scripts
For the new version of Adwords API, a programmer can build a software that manages many Adwords campaigns at the same time — a useful tool for marketers, businesses or agencies that are juggling multiple accounts. The API can also do “almost everything Adwords does but programmatically,” from the customer to the keyword level. APIs are a great way to save time and optimize campaigns on a regular basis. Not only can you control spend, pause non-converting keywords and optimize bids, but you can see your disapproved ads and landing pages. Scripts are a useful tool to use if you want to automate certain procedures based on your campaign goals. For example, adjustment of bids, controlling of maximum CPC, etc.
3. Smart Bidding
In your Adwords account, set key performance targets based on your business goals. Once this is set up, a machine learning algorithm in Adwords can use your data to make predictions about how different bid amounts will impact your conversions and campaign performance. During campaign optimization and post-campaign analysis, you can pull insightful reports about your bidding strategy and performance. You can also simulate bidding and see how a certain amount would pan out for the rest of your campaign.
4. Attribution
Your ads are dispersed across many different publishers, but how do you know which publisher is best? To determine which publisher drives the highest conversions and CTR, you need some sort of tag or marking on the high performing URLs. This is where Google attribution comes in — it’s essentially giving credit where credit is due. This is a necessary tool to use starting now. Instead of focusing on last-click events, you’ll be able to review performance more holistically which will result in better optimization and bidding strategy. You can reach your customers earlier in the purchase funnel, and on different devices. Not to mention, it’s free!
These new features will help optimize your campaign faster and save time for you or your paid media specialists to focus on creative messaging for a more visually enticing ad campaign. Remember: Adwords changes a lot — don’t get left behind. To stay up to date on new features, follow our blog on Medium. For questions and thoughts, tweet us @katana_media.
Interested in what we do at Katana? Visit us to learn more.
|
4 New Google Adwords Features You Should Be Using
| 1
|
4-new-google-adwords-features-you-should-be-using-18e9a50433bb
|
2018-05-14
|
2018-05-14 09:22:49
|
https://medium.com/s/story/4-new-google-adwords-features-you-should-be-using-18e9a50433bb
| false
| 552
|
Thoughts, research, and tips on everything from Machine Learning to Media Buying and other next generation topics for advertisers and marketers.
| null |
katanaconnect
| null |
Katana Media
|
letstalk@katana.media
|
katana-media
|
PROGRAMMATIC,DIGITAL MARKETING,DIGITAL ADVERTISING,SOCIAL MEDIA ADVERTISING,BIG DATA
|
Katana_Media
|
Digital Advertising
|
digital-advertising
|
Digital Advertising
| 848
|
Cem Kuşcu
| null |
99f0e9cec24c
|
cemkuscu00
| 4
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
api_data = []
gid = None
did = None
dre = None
for season in seasons:
for kind in kinds:
print(season,kind,'gathering...')
games = nflgame.games(season, kind=kind)
plays = nflgame.combine_plays(games)
plist = [play for play in plays]
for p in plist:
game_id = p.drive.game.eid
drive_id = p.drive.drive_num
drive_res = p.drive.result
if gid != game_id: gid = game_id
if dre != drive_res: dre = drive_res
if did != drive_id:
did = drive_id
api_data.append({'gid':gid,'did':did,'dre':dre})
pd.DataFrame.from_records()
def mapper(row):
df1 = ext.loc[ext.GameID == row.GameID]
df2 = df1.loc[df1.Drive == row.Drive]
if len(df2) == 1:
return df2.drive_result.values[0]
else:
return None
param_dist = {"max_depth": [3, None],
"max_features": sp_randint(1, 20),
"min_samples_split": sp_randint(2, 111),
"min_samples_leaf": sp_randint(1, 111),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
| 4
| null |
2017-10-02
|
2017-10-02 12:57:58
|
2017-10-02
|
2017-10-02 13:26:59
| 1
| false
|
en
|
2017-11-19
|
2017-11-19 05:11:15
| 6
|
18ea1bc3e9fa
| 6.65283
| 0
| 0
| 0
|
How to make our model better.
| 4
|
NFL Data and Machine Learning… One Play at a Time — Part 3
How to make our model better.
In our last post we got more technical about our data, made a basic model to choose the right play given a specific game scenario, and showcased a web app developed to run the model whenever, wherever you want.
Part one investigated our data-set provided by the NFL APIs since the 2009 season and explored it in different ways. We looked at few different visualizations of the game and dug into how a rule change back in 2015 could be detected in the data.
Here we will take this model one step further and incorporate some more advanced methods to indicate which actions on a play lead to positive or negative outcomes.
What’s Really Important
NFL play predictions are tough to get exactly right. We have already attempted to solve this with a basic model with basic play details. However, the key information anyone wants to know about a play is not only what play makes sense for the current game situation, it is also how positive or negative the impact of doing one type of play vs another is on the outcome of the game. We can formulate this by incorporating our response variable from the first model(play type) into this model, along with the offensive and defensive teams taking part in the given play. We will use the same Random Forest Classifier machine learning method with a new response variable we will compute from the data available. This response variable we need to come up with is whether a play was a “good” or “bad” idea.
Before we go further we need to discuss our scoring metric setup and dig a bit deeper into some challenges we face with this machine learning problem. To generate this new metric we have to look at the impact of a play on the game in as general or specific of a way we like. A statistic that can give a good indication of this is to look at the results of the overall drive of a play and provide it a positive or negative connotation depending on the outcome of the drive. If that drive resulted in scoring points then it would be a positive outcome. The resulting “good” or “bad” nature of a play could also be determined by other factors such as number of points earned for the drive or yards gained on the play. Even severe penalties could be incorporated for fumbling and throwing an interception.
So with all these options why are we going to proceed with a binary response variable? (1 if points were scored on the drive and 0 if no points were scored) Reliability. Machine learning methods have been developed for addressing this ranking classification issue but each have various caveats and pitfalls that can severely hamper our analysis. Our venture with this model will rely on the more tested and understood binary response for supervised classification. If we see promising results then it would be safe to assume further refinement of the success metric and experimentation with these alternative machine learning methods for ranked classification can be done with even stronger predictive power when handled properly. We have to start somewhere though, so lets see what comes of our original idea.
Searching For Answers
Our data set is pulled together using nflscrapR (with an R script described in the previous post). It has information on what drive each play is a part of, however, we want to know the ultimate result for each drive of a game. Did a given play occur on a drive that ultimately gained points or turned it back over to the other team?
To accomplish this we have two options. First we can go through all our existing data and pull out the last play of each drive we have. That last play should indicate if the drive was a success (Touchdown, Field Goal) or a failure (Punt, Downs, Interception). We can also get this data through a python package named nflgame. This packages lets us compile not only play-by-play data, but game and drive specific data via python data structures. We will use the second option here since it really is a toss up between the two and exploring another API and incorporating new data into our existing data set will be a good challenge.
In the above code we are generating play data for every play available in the regular and post seasons games. For each play we check if the drive is different than the last one. If it is we take a snapshot of that play’s drive number, game id, and of course the result of the drive. We store this as a dictionary in a running list. Once we have gone through each of the plays we convert the list of dictionaries to a pandas data frame though the use of the following function:
This will take our list of dictionaries, extract each key to be a new column and populate that column with the corresponding variable from each dictionary. Next we store this as a csv file (why not, it took a long time to run) and get started on integrating this data with our original data set.
The next step is to merge this new drive_result column with our old data set. This process would have been a lot simpler if every play in the original data was labeled with a playID, but of course the more complicated way is required.
This mapping function takes each row from our old dataset and manually joins itself with our new drive information via GameID and Drive. If a match is found we return the corresponding drive result.
Now we have added a column to our original data that states what the outcome of the play’s drive will be. For our purposes this value needs to be converted from its current categorical nature to that of a binary, good or bad, response variable. This is the point in the analysis we had discussed before that could change this problem from a binary classification to a ranked one if we so desired.
Dummies
Now that all our key pieces of data are ready for analysis we should be able to begin fitting our model and tuning results right? Not so fast! Since we are using a scikit-learn’s RandomForestClassifier() we have to convert our categorical input values to a whole lot of dummy ones using a process called one-hot encoding. The basic premise is as follows: You have a categorical variable with three different possible categories, yes, no, and maybe. One-hot encoding means you need to replace that one column with three columns, each of which can only be 0 (cold) or 1(hot). The names of these columns would match that of the different levels in the original categorical variable. This is done to preserve a categorical value’s information when trying to use it’s predictive power on a tool that doesn’t play nicely with categories.
There are drawbacks to doing this though as in our case we need to convert our two NFL team factors and our play type factor. This adds a substantial number of columns to our input for our classifier. Though once complete we can begin fitting and tuning our model.
Performance
After splitting the data for testing and training, we run a RandomizedSearchCV to search the hyperparameter space for the optimum settings on our Random Forest Classifier. Below you can see what was used as an input for the parameter space that should be randomly searched by the cross validation / tuning function.
After 10 K-folds on 10 different randomized parameter sets totaling 100 different fits of the model our best score came back as 0.77964. Not bad at all given the increased complexity of this model vs our previous one. We experimented with removing the current playing teams from the model to see how impactful they were only to find our results would be barely better than randomly guessing at 0.55. Keeping the team information definitely strengthens our model.
Satisfied with our hyper parameter tuning results we fit the resulting best parameters to a new classifier, fit it against our training data and tested it on our test set to get a score of 0.781! It actually has improved on our test data.
Lets take a look at the ROC curve.
The two lines are just mirrors of one another. The key here is to understand that a perfect model would have a curve that looks more like a right angle passing through point (0,1). On the other hand, the closer our curve get’s to passing through (0.5,0.5) the closer it is to just randomly guessing an answer. This is typically reviewed by looking at the AUC or area under the curve which is provided in the legend area of this chart. We have an AUC of 0.76. Much better than random guessing but still failing in certain cases.
You can find this model in the GitHub repo for this project as well as on Monday Morning Quarterback where you can put your own input’s in and see what it thinks about your decision to have the Jets pass the ball when down by 3 in the 4th with 10 yards to go when playing the Patriots.
Hope you enjoyed reading this!
Thanks for reading!
Feel free to reach out to me via the links below.
LinkedIn | Twitter | GitHub | Website
|
NFL Data and Machine Learning… One Play at a Time — Part 3
| 0
|
nfl-data-and-machine-learning-one-play-at-a-time-part-3-18ea1bc3e9fa
|
2018-03-20
|
2018-03-20 07:15:50
|
https://medium.com/s/story/nfl-data-and-machine-learning-one-play-at-a-time-part-3-18ea1bc3e9fa
| false
| 1,710
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Michael Skrzypiec
|
Data Geek
|
3a94ec56a037
|
skrzym
| 20
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-07
|
2018-05-07 21:24:13
|
2018-05-08
|
2018-05-08 12:50:06
| 2
| false
|
en
|
2018-05-08
|
2018-05-08 12:53:18
| 6
|
18ebdbf8a403
| 1.696541
| 4
| 0
| 0
|
One of the confounding things about Deep Learning is that we really don’t grok why it works. Oh, fine, we do, kinda, understand the “how” —…
| 4
|
Deep Learning and Interpretability
One of the confounding things about Deep Learning is that we really don’t grok why it works. Oh, fine, we do, kinda, understand the “how” — Stochastic Gradient Descent, TensorFlow, NVIDIA, whatever — but “why it does what it does” is, well, beyond us.
When you look at a trained neural network, our human tendency to categorize things tends to come to the fore. We look at the model, with all it’s weights, and we look for some sort of order, a pattern that could explain things, and, inevitably, we find what we’re looking for.
Or, well, we kinda find it. In many many cases, we spot neurons that seem to have very specific functions. For example, there is the infamous “Jennifer Aniston neuron” that fires whenever she shows up on the screen. On a more relevant note, we can identify clusters of neurons that are associated with very specific tasks (identifying cats, for example).
The vast majority of the neurons, however, don’t actually seem to do anything that we can identify. They’re all just there as part of a confusing morass of neurons that are clearly necessary, but for what purpose?
/via https://deepmind.com/blog/understanding-deep-learning-through-neuron-deletion/
The folks at DeepMind set out to study this, using tried-and-proven software-engineering techniques (aka: “Start deleting stuff, and see what breaks”). In this case, they started deleting individual neurons to see what impact if would have on performance.
Surprisingly, they found that “networks which generalise well were much more robust to deletions than networks which simply memorised images that were previously seen during training. In other words, networks which generalise better are harder to break”
The thing about the above is that “generalise” is a different way of saying “one of those confusing morass of neurons”. Or, to put all this in other words, “The more generalizable a network, the less interpretable it is”, a quite humbling result, really.
Coda: There may be parallels to our DNA here, where we’ve been discovering that all that “junk DNA” isn’t actually junk after all…
(This article also appears on my blog)
|
Deep Learning and Interpretability
| 34
|
deep-learning-and-interpretability-18ebdbf8a403
|
2018-05-09
|
2018-05-09 14:54:49
|
https://medium.com/s/story/deep-learning-and-interpretability-18ebdbf8a403
| false
| 348
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Mahesh Paolini-Subramanya
|
That Tall Bald Indian Guy…
|
bd8dbcc39636
|
dieswaytoofast
| 110
| 48
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-03
|
2018-06-03 07:48:02
|
2018-06-03
|
2018-06-03 08:02:35
| 0
| false
|
en
|
2018-06-03
|
2018-06-03 08:02:35
| 3
|
18eded2cdbb7
| 0.279245
| 4
| 0
| 0
|
The AWS Summit Benelux took place a few days ago and I had the pleasure to give a talk on deploying Machine Learning at the Edge.
| 5
|
Talk: ML at the Edge @ AWS Summit Benelux
The AWS Summit Benelux took place a few days ago and I had the pleasure to give a talk on deploying Machine Learning at the Edge.
Machine Learning inference at the Edge
If you’d like video versions, here are a couple of recent talks :)
Webinar: Enabling Deep Learning in IoT Applications with Apache MXNet
Talk @ Papis.io: Deep Learning at the Edge with AWS DeepLens
Happy to answer questions here or on Twitter. For more content, please feel free to check out my YouTube channel.
|
Talk: ML at the Edge @ AWS Summit Benelux
| 5
|
talk-ml-at-the-edge-aws-summit-benelux-18eded2cdbb7
|
2018-06-15
|
2018-06-15 20:11:51
|
https://medium.com/s/story/talk-ml-at-the-edge-aws-summit-benelux-18eded2cdbb7
| false
| 74
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Julien Simon
|
Hacker. Headbanger. Harley rider. Hunter. https://aws.amazon.com/evangelists/julien-simon/
|
4ffe14103b7a
|
julsimon
| 3,230
| 31
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-05
|
2018-04-05 12:49:01
|
2018-04-05
|
2018-04-05 17:25:28
| 1
| false
|
en
|
2018-04-05
|
2018-04-05 17:25:28
| 1
|
18ef14fd2a39
| 1.132075
| 3
| 1
| 0
|
AI and Blockchain, two buzzwords in technology industry right now. I will skip the introduction to AI and Blockchain separately, assuming…
| 4
|
Merging of Artificial Intelligence and BlockChain
AI and Blockchain, two buzzwords in technology industry right now. I will skip the introduction to AI and Blockchain separately, assuming that people have the basic idea for the same.
Lets see how can we merge both the technologies and will it be beneficial to the industry and in what ways. What can be the uses of merging both tech ?
The most important thing in Machine Learning, Artificial Intelligence is “data”. Without it, we cannot train the network, basically we need lots of data to successfully develop a model. At the same time, we need to make sure to secure the data especially if it is very critical data, then we cannot take risk for it to be leaked. As we know, recently we had a case where data on popular social networking site was leaked. So we need proper security, a trusted one for this task.
As we are aware about the Bitcoins, a blockchain technology. Its popular digital coin. As its uses cryptography (encryption), so its very reliable and secure. We can use this technology in AI as well so as to secure the data. Blockchain database will have all the data in encrypted form. It will be a great idea to merge AI with blockchain to get the benefits.
Let me know if you have any other application of both the technologies together.
References
https://www.forbes.com/sites/bernardmarr/2018/03/02/artificial-intelligence-and-blockchain-3-major-benefits-of-combining-these-two-mega-trends/#77532d124b44
|
Merging of Artificial Intelligence and BlockChain
| 101
|
merging-of-artificial-intelligence-and-blockchain-18ef14fd2a39
|
2018-05-21
|
2018-05-21 11:18:02
|
https://medium.com/s/story/merging-of-artificial-intelligence-and-blockchain-18ef14fd2a39
| false
| 247
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Disha Patel
| null |
45292b675a60
|
disha.kmh
| 2
| 88
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-29
|
2018-04-29 06:09:03
|
2018-04-29
|
2018-04-29 06:11:43
| 1
| false
|
en
|
2018-04-29
|
2018-04-29 10:13:27
| 1
|
18ef6aa476db
| 3.750943
| 0
| 0
| 0
|
Data runs the modern world. Targeted advertising has made relatively new companies like Google and Facebook corporate behemoths. Online…
| 5
|
Analytics in Education : An Endless Number of Possibilities
Analysis of Student Activities!
Data runs the modern world. Targeted advertising has made relatively new companies like Google and Facebook corporate behemoths. Online retailers like Amazon and Target can predict your next purchase before the though even forms in your mind. Data is everywhere and deeply affects our lives in the 21st century.
Another thing that also deeply affects our (and everyone else’s) lives in the 21st century is Education. It has, however, remained somewhat aloof of the frenetic disruption that data-driven technologies have caused elsewhere. Education certainly isn’t lacking in data; in fact it is one of the most data-intense domains where performance is tracked and evaluated as an end in itself. However, precious little has been done to leverage this veritable gold mine in order to improve the quality of education students receive.
The Start of Analysis
Schools are slowly adopting the latest and greatest in technology. Once a domain only for the ultra-exclusive schools for the rich or the so-called “alternative education” practitioners, opportunities for producing data-driven solutions are increasing day by day. Modern technology allows for tracking fine metrics and accurate data analysis.
Analytics in education can work for the betterment of both teachers and students. Learning from the pitfalls of No Child Left Behind program in the US (viz. connecting teachers’ pay to student performance, among others), we can use the data rich environment afforded by schools to improve both how teachers teach and how students learn.
Window shopping
Because of the sheer scope of data that is tracked by schools, analysis can lead to insights in not only education but also how that affects and is in turn affected by socio-economic factors. Tracked variables include, but are certainly not limited to, location, health records, parental income, parental status, past learning experiences, educational variables like grades and scores, attendance and truancy records, etc. It is paramount that this sensitive data be properly anonymised and randomised in order to prevent biases and data-hacking to influence results favourable to vested interests.
Once the proper precautions are taken, though, even very simple analytical tools can be used to derived valuable insight, from drawing attention to laggard students to drawing connections between academic performance and behavioural patterns such as truancy or late submissions of assignments.
A new stratagem
In order to leverage this analytical muscle, education must move to new age media that supports tracking. This means an increased focus on online learning, education software, and social network integration. The conventional methods of paper books must be abandoned at the institutional level. This will not only reduce overhead in converting data-on-paper into digital but also allow tracking behavioural patterns that otherwise could never have been discerned and analysed.
The way Google tracks login behaviours to detect and predict fraudulent activity, educational analytics can track the time spent on a particular chapter or paragraph to model not only each individual student but also the student body as a whole vis-a-vis learning ability and instruction assimilation. Tracking how students interact with each other on social media and other platforms of discourse can also be effective in evaluating the value of each individual’s contribution and further diversify the data pool from which we can draw insight.
This situation has also spurred further research and development into Learning Management Systems (LMS) that are environments tailor made to facilitate and prioritise learning in every interaction while minimising the possibilities of entertainment or distraction that conventional social networks facilitate at every level.
From personalising courses to individual ability to providing new means of evaluating progress that are not just standardised testing, analytics really hold the key to unlocking the next paradigm shift in education.
There are however, concerns. Privacy is paramount among them, as the invasive data collection can be used to affect discrimination in the wrong hands. Also concerning is the question of ownership of the data generated by these mechanisms; does it belong to the students or the institution? What if a student wishes to exercise the equivalent of The Right to be Forgotten in this context? Who can access the results of the analysis? Should students be able to check their own profiles? There are no easy answers here and significant teething problems are to be expected.
Treading carefully
For all the importance we assign to education, we still have no clear idea about the underlying mechanisms of education. Even after all these years, we do not know which standard educational practices affect favourable outcomes in students, if any. We do not know which practices need to be curtailed. It is mostly flying blind and the Silicon Valley approach of Move Fast and Break Things is as appropriate in this environment as an ice cream cake near a crackling fireplace.
Analytics also has its limits. The variables we track are structurally defined by the medium we observe. As such, there is no way to analyse the impact of an encouraging teacher or a supportive circle of friends and co-learners. We must tread carefully and not lose sight of the goal that is to improve education in learners, not just as statistics.
We know that we need to reform our education system but we cannot predict its effects with any degree of certainty. Analytics can certainly ameliorate this impasse by providing dependable insight on how to best treat the pain points in modern education systems. The blinds must be lifted before we can chart a course to the destination. And only then will we be able to enjoy the view from the windows.
By,
TheoreX Edutech LLP
|
Analytics in Education : An Endless Number of Possibilities
| 0
|
analytics-in-education-an-endless-number-of-possibilities-18ef6aa476db
|
2018-04-29
|
2018-04-29 10:13:28
|
https://medium.com/s/story/analytics-in-education-an-endless-number-of-possibilities-18ef6aa476db
| false
| 941
| null | null | null | null | null | null | null | null | null |
Education
|
education
|
Education
| 211,342
|
Ayan Choudhury
|
Education Entrepreneur, Listener & Learner, Chief Hustler of TheoreX Edutech, Blending Deep Technologies and Education to make awesome future ready products!
|
1220d923de5b
|
ayanchoudhury
| 227
| 209
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-18
|
2018-06-18 19:59:47
|
2018-06-18
|
2018-06-18 20:02:53
| 0
| false
|
en
|
2018-06-18
|
2018-06-18 20:02:53
| 4
|
18ef88f8650
| 0.675472
| 0
| 0
| 0
|
In recent years, an almost countless number of corporate sectors have felt the impact of artificial intelligence (AI) and machine learning…
| 5
|
What does ML/AI mean for digital asset management in 2018?
In recent years, an almost countless number of corporate sectors have felt the impact of artificial intelligence (AI) and machine learning (ML). Wall Street’s adoption of this technology has been no different — especially within digital asset management (DAM) — though some top experts suggest that growth, in this regard, is only just beginning.With this notion in mind, the immediate future appears poised to expand upon a variety of lingering trends and potential developments in AI and ML’s widespread implementation.
These are a few of the most pivotal trends in AI/ML asset management moving into 2018 and beyond.
New possibilities
Perhaps the biggest drawing point of AI/ML is its ability to streamline potentially tedious tasks in creative new ways. In DAM, these benefits have become more prominent as the technology grows increasingly sophisticated. Now, DAM professionals are shifting toward innovative approaches in implementing both AI and ML, exploring the possibilities of image recognition tagging technology, keywording, and search engine optimization (SEO) — among other concepts and services.
Read more
|
What does ML/AI mean for digital asset management in 2018?
| 0
|
what-does-ml-ai-mean-for-digital-asset-management-in-2018-18ef88f8650
|
2018-06-18
|
2018-06-18 20:02:54
|
https://medium.com/s/story/what-does-ml-ai-mean-for-digital-asset-management-in-2018-18ef88f8650
| false
| 179
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Vish Nandlall
|
Vish Nandlall is an accomplished tech executive with significant international experience. Learn more at vishnandlall.net.
|
d84b9abc767c
|
vnandlall
| 88
| 92
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-10
|
2018-07-10 09:10:01
|
2018-07-10
|
2018-07-10 09:14:02
| 1
| false
|
en
|
2018-07-10
|
2018-07-10 09:14:02
| 4
|
18f12da09082
| 1.237736
| 0
| 0
| 0
|
Automated bots can do a variety of tasks from marking a date on the calendar, booking a reservation at a restaurant, collecting and…
| 5
|
The New Generation Chatbots
“A robot named Pepper holding an iPad” by Alex Knight on Unsplash
Automated bots can do a variety of tasks from marking a date on the calendar, booking a reservation at a restaurant, collecting and displaying information to its users, to informing the user about the weather. The most frequently used model of bots is of a chatbot, which is designed to simulate a conversation with a person.
This is the key reason of the widespread acceptance of chatbots in messaging applications. Moreover, these applications are versatile and seamlessly adapt to any environment to help solve different business problems. These applications can help automate tasks that are to be done frequently and at the right time, such as email marketing and responding to frequently asked questions.
As chat is a key part in mobile devices, most chatbot implementations are easy. Most enterprise chatbots come with a hovering box on a webpage offering a unique an experience that turns out to be weak in comparison to the normal customer norms. Though the floating boxes have a significant position, they’re quickly moving out of the bigger picture.
Rather than working around a single messaging channel, these pieces of proprietary AI shift from IM to voice call to Facebook, from placing an order to registering a query and back without interruption — like an independent entity. Just as phone-based personal assistants such as Siri and Alexa use data from various applications and devices to improve their services, connected enterprise chatbots can also use information from different channels and internal databases to become smarter and flexible.
Also Read:
Take Monitoring To The Next Level With Customer Experience
Introducing Consumer Internet Of Things (CIoT) And Its Evolution
|
The New Generation Chatbots
| 0
|
the-new-generation-chatbots-18f12da09082
|
2018-07-10
|
2018-07-10 09:14:03
|
https://medium.com/s/story/the-new-generation-chatbots-18f12da09082
| false
| 275
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Sahana Karnam
| null |
4d14a046af36
|
sahana_63956
| 43
| 174
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
20b7ae10293f
|
2018-04-23
|
2018-04-23 11:05:48
|
2018-04-23
|
2018-04-23 12:54:04
| 8
| false
|
en
|
2018-05-27
|
2018-05-27 09:24:08
| 4
|
18f13870191b
| 5.016352
| 8
| 0
| 0
|
What is Machine Learning?
| 5
|
Benefits of Machine Learning in different industries
What is Machine Learning?
Nowadays, we hear talk almost every day about facial-recognition technology, driverless cars and online recommendation offers. Tech giants, such as Apple, Google and Amazon, have turned these concepts into reality and are making huge progresses in order to rapidly improve the reliability of such services. We know that behind these recent developments there is the advance of Artificial Intelligence — and most specifically, Machine Learning. But do we all have clear in mind what Machine Learning is?
Machine Learning is a method of data analysis that allows computer programs to analyze big data, extract information automatically and learn from them, in order to identify future patterns with the minimum human intervention.
Machine Learning is not a brand new concept, as this term itself was coined back in 1959 by Arthur Samuel, an American pioneer of Artificial Intelligence research, that originally referred to this concept as “the field of study that gives computers the ability to learn without being explicitly programmed”.
However, the ability to mechanically apply complex mathematical calculations to big data is just a recent development, and nowadays Machine Learning plays a key role in industries such as transportation, oil and gas and healthcare.
How do these industries benefit from Machine Learning algorithms?
A part for the development of self-driving cars, the transportation industry is using Machine Learning to collect commuters and travelers’ data in order to predict potential problems and increase the efficiency of the routes, generating more profitability.
Thanks to Machine Learning, the oil and gas sector is now able to detect new energy sources and predict refinery sensor failures, coming up consequently with more cost-effective strategies. Because machines can analyze huge amounts of data, a lot of processes have now been automatized, resulting in a great amount of cost saving.
The healthcare area, from its side, is taking advantage of Machine Learning procedures to improve diagnoses and treatments, with the project of fighting diseases for which, until now, a cure has not been found yet.
In general, the ability of the recent Machine Learning technologies to analyze large amounts of clinical data can give make a great impact in the decision making process. Moreover, the use of predictive analytics with samples of the patients’ population can allow healthcare providers to take preventive actions, reducing health risks and saving superfluous costs.
How are financial services being improved with the help of Machine Learning?
Another industry that, because of Machine Learning, is experiencing major progresses, is the financial one.
Thanks to Natural Language Processing, a subset of Machine Learning that enables systems to understand language, banks can now provide a more efficient and rapid customer service through the use of chat bots and conversational interfaces, that can help clients make payments, manage their accounts and find answers to general questions in real time, without the need of waiting for an agent to make time for them.
The improved Machine Learning technology is now able to provide more targeted and personalized financial services. In fact, budget management apps provided with Machine Learning abilities can now offer highly customized financial advices, by tracking clients’ spending, interests and behaviors in a more precise way. There are certain algorithms in particular, that are called robo-advisors and are built to calibrate a financial portfolio to the goals and to the risk tolerance of a specific client. For this reason, they can provide a customized financial advice to every different customer, according to the target that each of them enters.
Another great improvement that Machine Learning has brought to the society, is the ability to detect and prevent fraud. In the years, the number of transactions that are made every day has increased significantly, and so have the payment methods that people can use.
Today, customers’ data have reached an amount that is practically impossible to be analyzed correctly by human beings. With the help of Machine Learning, systems can now detect any suspicious activity and behavior, and flag them for security teams.
According to a report published by Goldman Sachs in 2016, by 2025 Machine Learning and Artificial Intelligence will enable $34 billion to $43 billion in “cost savings and new revenue opportunities” within the financial sector.
What we want to focus our attention on, however, is how Machine Learning is playing a particularly significant role in the trading business.
How is Machine Learning bringing extra value to traders?
We know how hard it is for traders to collect all the necessary information and analyze them in the correct way to make a profit out of this activity. The needed data are possibly infinite and the study phase is endless. What if technology developed machines capable of doing all the dirty job for us, and we could simply enjoy the return of our investments, without any extra effort needed?
We know already, at this point, that Machine Learning algorithms are able to analyze huge quantities of historical data and to forecast the future trends. This means that ideally they can predict future stock prices better and more rapidly that even the smartest human being on Earth could do.
Moreover, Machine Learning can monitor a very large number of markets at the same time, while a person already encounters difficulties in analyzing one market, having to face the tradeoff between time efficiency and detail of his research.
Most importantly, the application of Machine Learning in the trade market would allow traders to finally say goodbye to what is probably considered as their worst enemy and that we call the “human factor”.
Most of the transactions and trading decisions are influenced by human feelings and anxiousness, and when a person gets nervous or excited about how its target company is going, the trading process in the relative stock market can flip in the opposite direction, and all the hours spent in researches and industry analysis can result just as useless in a blink of an eye.
Can you imagine your trading future with no more avoidable mistakes and losses due to your lack of attention and your incomplete researches?
Follow AITrading and see this dream taking shape with us.
|
Benefits of Machine Learning in different industries
| 351
|
benefits-of-machine-learning-in-different-industries-18f13870191b
|
2018-06-20
|
2018-06-20 19:26:16
|
https://medium.com/s/story/benefits-of-machine-learning-in-different-industries-18f13870191b
| false
| 1,029
|
AI-powered trading ecosystem. We help traders to earn more. We help people to trade easily.
| null |
aitrading.official
| null |
AITrading
|
info@aitrading.com
|
aitrading
|
MACHINE LEARNING,TRADING,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,DATA SCIENCE
|
aitrading_com
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
AITrading
|
AI-powered trading ecosystem. 🤖 Wealth management for everyone.
|
24a6f92ac4fa
|
aitrading
| 69
| 83
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-16
|
2018-09-16 16:09:26
|
2018-09-16
|
2018-09-16 17:22:41
| 6
| false
|
en
|
2018-09-16
|
2018-09-16 17:22:41
| 2
|
18f28ab6d1af
| 4.474528
| 1
| 0
| 0
|
NOW in the 'golden age' of travel, most of us have been to at least one other country. However, there’s still one frontier that’s yet to be…
| 5
|
Be Ready for space journey:
NOW in the 'golden age' of travel, most of us have been to at least one other country. However, there’s still one frontier that’s yet to be opened up to commercial exploration: Space.
But commercial space travel will be here in the blink of an eye. And when it arrives, we will be ready for lift-off. Below is an imagined (abbreviated) cheat sheet for the intrepid and intergalactic traveller's first foray into outer space.
Things necessary for space travel:
1. Breaking Through The Atmosphere
Where does space begin anyway? About 100 kilometres above sea level (it ends at the upper limit of low Earth orbit). Close enough to phone home and full of photo-ops.
There are no black holes, supernovas or gamma ray bursts here — for that, you'll have to go to deep space. For the occasional astronaut, space is less about comet-hopping and more about stargazing while travelling 7.7 kilometres a second.
2. Orientation
There are no road signs in space (yet). Get your bearings from the ground up: Continents and clouds occupy the troposphere. Passenger jets fly under the ozone layer in the stratosphere. Shooting stars burn briefly in the mesosphere. Auroras dance and shuttles glide through the thermosphere. Satellites, the International Space Station, and future space resorts, like the Galactic Suite, orbit in the outermost exosphere.
With physics at the wheel, you're basically on a carousel ride around Earth, and you're bound to be disoriented doing 16 laps around it every 24 hours. Up is down in zero gravity. Figure out your position using any three fixed celestial points of reference and some friendly impossible math (there's probably a button on the dashboard for this).
3. Preparation
.........................................................................................................................................................
Exiting the atmosphere isn't easy. You have to do your stretches, eat your veggies, and brace yourself for a complete change of scene.
Visit a doctor to ensure you're fit for flight. Train your body for weightlessness 12 metres underwater or in microgravity on a zero-g flight. In two years' time, you'll be ready for takeoff.
4. Weightlessness
.........................................................................................................................................................
At some point, pouring a glass of celebratory champagne is going to be darn near impossible. On the upside, your decompressed spine will make you look longer and leaner than ever before.
A daily space exercise regime (stationary bikes, squats and cable-machines) is a must for conditioning your body for re-entry.
Here’s What Astronauts Pack When They Go To Space:
There is no washing machine in space. Water is heavy and therefore expensive to take on a 250-mile journey above Earth. That pretty much means what you think it does - astronauts have to wear the same clothes, including underwear, for several days in a row before they can put on a clean outfit.
So how does one pack for space?
Here's the packing list:
1 pair of shoes for the treadmill 1 pair of shoes for the bike 1 pair of exercise shorts for every 3 days of exercise 1 T-shirt for every 3 days of exercise 1 work shirt for every 10 days 1 T-shirt for under the workshirt for every 10 days 1 pair work pants/shorts for every 10 days 1 pair underwear for every 2 days 1 pair socks for every 2 days 2 sweaters 2 pairs Russian overalls (optional)
This doesn't include the spacesuit that astronauts wear during launch and landing, and when working outside.
NASA points out that because astronauts live in temperature-controlled environments and don't go outside except when wearing their spacesuit, clothes don't get as dirty as they do on Earth.
Most of the clothing brought to the space station doesn't come back home. Dirty clothes are placed in a bag and most of it is put on a resupply vehicle that makes a one-way trip to the space station several times a year. The Progress cargo capsules then burn up in the Earth's atmosphere.
SPACE JOURNEY:-
“Out of this world” flights, offered by XCOR Space Expeditions, are now being sold on the Kayak website, departing from either the spaceport in California’s Mojave Desert or the Dutch Caribbean island of Curacao.
Available from 2016, flight prices vary depending on the selected travel date as well as the currency exchange rate from US dollars to the British pound. On August 31, a flight was available for £68,272 during a search done on Kayak.co.uk today. Other search results included £81,402 for travel on January 29 and £74,837 for travel on December 5 next year. While travel dates and 2016 are available, users of the Kayak website can only search for flights up to a year in advance.
The flights can be found in the search engine by entering Mojave, for the Mojave Air and Spaceport (MHV), or Curacao for Curacao International Airport (CUR) in the departure slot, and Thermosphere in the destination slot as well as ticking the first class passenger box.
Accompanied only by one other person - the pilot - the one hour voyage takes each passenger either 60 or 100 kilometres above Earth, depending on which of the two Lynx Mark spacecraft you are on. It is yet to be confirmed which will be used for the flights, but the flight path for both are said to be "similar and equally exciting” according to XCOR.
Here I provide u a link for online purchase of space tickets :
https://www.viagogo.com/ww/Concert-Tickets/Alternative-and-Indie/Space-Tickets?AffiliateID=49&adposition=1t1&PCID=PSROWGOOCONSPACEDA9A001D3-000000&AdID=278572533807&MetroRegionID=&psc=&psc=&ps=&ps=&ps_p=0&ps_c=1455571613&ps_ag=56684710535&ps_tg=kwd-335576786393&ps_ad=278572533807&ps_adp=1t1&ps_fi=&ps_fi=&ps_li=&ps_li=&ps_lp=9040177&ps_n=g&ps_d=m&gclid=Cj0KCQjw_vfcBRDJARIsAJafEnFnvK8yAYYk2x0eryuOmnIBZIjdypGKEPDCqfvxHmAgMr0TgItMnyoaAiBxEALw_wcB
So guyz hold breathe. Tighten ur seat belts and get ready for space journey.
ALL THE VERY BEST
Lots of love from Ashutosh Raina.
<………………………………….. >
|
Be Ready for space journey:
| 40
|
be-ready-for-space-journey-18f28ab6d1af
|
2018-09-18
|
2018-09-18 14:59:19
|
https://medium.com/s/story/be-ready-for-space-journey-18f28ab6d1af
| false
| 934
| null | null | null | null | null | null | null | null | null |
Space
|
space
|
Space
| 12,832
|
Ashutosh Raina
|
New here, Tech lover, programming, chess, foodie, music, NASA💟, student,animal lover, future engineer.
|
65448d47e87a
|
ashutoshraina766
| 8
| 128
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
1f35b6f451e8
|
2018-07-11
|
2018-07-11 07:50:56
|
2018-07-11
|
2018-07-11 10:11:55
| 1
| false
|
en
|
2018-07-14
|
2018-07-14 21:12:49
| 13
|
18f29e439795
| 1.622642
| 0
| 0
| 0
| null | 5
|
Webography for 4 dummies to make it in machine learning — Chapter 20, Scene 3
Sélection de données
MySQL est un système de gestion de base de données (SGBD). Son rôle est d'enregistrer des informations dans de…openclassrooms.com
Initiez-vous à l'algèbre relationnelle avec le langage SQL
Dans ce cours, vous apprendrez à manipuler des relations à l'aide des opérateurs de l'algèbre relationnelle. Et vous…openclassrooms.com
List of SQL Commands
SQL, Structured Query L anguage, is a programming language designed to manage data stored in relational databases. SQL…www.codecademy.com
MySQL :: Sakila Sample Database
This document describes the Sakila sample database-its history, installation, structure and usage.dev.mysql.com
Importer et exporter des données sous MySQL
Les Système de Gestion de Bases de Données tels que MySQLpermettent de manipuler facilement et avec beaucoup de…www.commentcamarche.com
twintproject/twint
twint - An advanced Twitter scraping & OSINT tool written in Python that doesn't use Twitter's API, allowing you to…github.com
Turning web forms into conversations
Conversational Form is an open-source concept by SPACE10 to easily turn any form element on a web page into a…space10-community.github.io
- HTML (HyperText Markup Language) | MDN
Les éléments input dont l'attribut type vaut "text" permettent de créer des champs de saisie avec du texte sur une…developer.mozilla.org
numpy.random.multivariate_normal - NumPy v1.14 Manual
The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal…docs.scipy.org
Flask app "Restarting with stat"
I've built a few Flask apps, but on my latest project I noticed something a little strange in development mode. The…stackoverflow.com
Equivalent of Linux `touch` to create an empty file with PowerShell?
In PowerShell is there an equivalent of touch? For instance in Linux I can create a new empty file by invoking: touch…superuser.com
Turning web forms into conversations
Conversational Form is an open-source concept by SPACE10 to easily turn any form element on a web page into a…space10-community.github.io
SQL WHERE IS NULL | SELECT WHERE IS NOT NULL | NULL or Value | NULL or Empty | Examples
SQL WHERE IS NULL | SELECT WHERE IS NOT NULL | NULL or Value | NULL or Emptywww.dofactory.com
Configure Flask dev server to be visible across the network
I'm not sure if this is Flask specific, but when I run an app in dev mode (http://localhost:5000), I cannot access it…stackoverflow.com
|
Webography for 4 dummies to make it in machine learning — Chapter 20, Scene 3
| 0
|
webography-for-4-dummies-to-make-it-in-machine-learning-chapter-20-scene-3-18f29e439795
|
2018-07-14
|
2018-07-14 21:12:49
|
https://medium.com/s/story/webography-for-4-dummies-to-make-it-in-machine-learning-chapter-20-scene-3-18f29e439795
| false
| 377
|
We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE.
| null |
ethercourt
| null |
Ethercourt Machine Learning
|
adoucoure@dr.com
|
ethercourt
|
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
|
ethercourt
|
Sql
|
sql
|
Sql
| 3,343
|
WELTARE Strategies
|
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
|
9fad63202573
|
WELTAREStrategies
| 196
| 209
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
634d4b270054
|
2018-04-21
|
2018-04-21 12:37:05
|
2018-04-21
|
2018-04-21 12:42:01
| 1
| false
|
en
|
2018-06-05
|
2018-06-05 08:44:01
| 3
|
18f2d808e5ed
| 1.256604
| 2
| 0
| 0
|
Recently, a four-year-old, nearly-blind German shepherd went missing in darkness of the open Dubai desert. Michael Rudolph and his family…
| 5
|
Thermal Drone Rescued A Dog Lost In The Dubai Desert
Recently, a four-year-old, nearly-blind German shepherd went missing in darkness of the open Dubai desert. Michael Rudolph and his family feared on the missing of their dog, named Stevie, who is 80% blind would already be in a state of panic.
“You can imagine in this dark environment, it was a crisis situation for all of us,” said Rudolph, who works as head of the Airspace Safety Section at Dubai Civil Aviation Authority.
The search became complicated and after a while, Rudolph took the help from his friend Eduardo Dorado, a drone pilot at Exponent. “They were great, I asked them for a drone with a thermal camera so that we could spot Stevie’s heat source in the middle of the open desert,” said Rudolph.
After a few hours, a big blotch appeared on the screen glimpsing the dog. “When we found him, he was utterly exhausted,” said Rudolph. He also mentioned that “if it had not been for the thermal drone, I believe it would have been a lot danger, if not forever.”
Source: https://bit.ly/2qP6tkN
About DEEPAERO
DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain.
DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain.
DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be one stop shop for all products and services for drones.
These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
|
Thermal Drone Rescued A Dog Lost In The Dubai Desert
| 2
|
thermal-drone-rescued-a-dog-lost-in-the-dubai-desert-18f2d808e5ed
|
2018-06-05
|
2018-06-05 08:44:02
|
https://medium.com/s/story/thermal-drone-rescued-a-dog-lost-in-the-dubai-desert-18f2d808e5ed
| false
| 280
|
AI Driven Drone Economy on the Blockchain
| null |
DeepAeroDrones
| null |
DEEPAERODRONES
| null |
deepaerodrones
|
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
|
DeepAeroDrones
|
Deepaero
|
deepaeros
|
Deepaero
| 0
|
DEEP AERO DRONES
| null |
dcef5da6c7fa
|
deepaerodrones
| 277
| 0
| 20,181,104
| null | null | null | null | null | null |
0
|
$ git clone https://github.com/tolstikhin/wae.git
Cloning into 'wae'...
remote: Counting objects: 170, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 170 (delta 15), reused 22 (delta 14), pack-reused 144
Receiving objects: 100% (170/170), 1.32 MiB | 653.00 KiB/s, done.
Resolving deltas: 100% (105/105), done.
$ cd wae
$ spell run -t v100 --framework tensorflow --pip matplotlib --pip pillow --mount public/image/mnist “python2 run.py --exp mnist”
Run created -- waiting for a V100 machine.
✨ Run is building
✨ Run is mounting
Successfully added mount: public/mnist:/spell/mnist
✨ Run is running
2018-04-30 21:56:11,018 - Loading MNIST
2018-04-30 21:56:19,307 - Loading Done.
2018-04-30 21:56:19,308 - Building the Tensorflow Graph
...
2018-04-30 21:56:24,427 - Training WAE
2018-04-30 21:56:25,014 - Pretraining the encoder
2018-04-30 21:56:33,673 - Step 0/200, loss=0.183633
2018-04-30 21:56:33,791 - Step 1/200, loss=41.173756
...
2018-04-30 21:56:37,055 - Step 28/200, loss=0.094692
2018-04-30 21:56:37,055 - Pretraining the encoder done.
2018-04-30 21:56:37,083 - Real pictures sharpness = 0.06711
2018-04-30 21:56:38,287 - Matching penalty after 0 steps: 0.566213
2018-04-30 21:56:38,365 - Matching penalty after 1 steps: 2.016228
2018-04-30 21:56:38,440 - Matching penalty after 2 steps: 1.100368
...
2018-05-01 01:59:46,120 - Matching penalty after 59998 steps: -0.014440
2018-05-01 01:59:46,194 - Matching penalty after 59999 steps: -0.010323
2018-05-01 01:59:46,494 - EPOCH: 100/100, BATCH:600/600, BATCH/SEC:4.11 (WAE_LOSS=0.20427, RECON_LOSS=0.30750, MATCH_LOSS=-0.01032, RECON_LOSS_TEST=0.52220, SHARPNESS=0.04923)
✨ Run is saving
Retrieving modified or new files from the run
Saving 'configs.pyc'
Saving 'datahandler.pyc'
Saving 'improved_wae.pyc'
Saving 'models.pyc'
Saving 'ops.pyc'
Saving 'results_mnist'
Saving 'utils.pyc'
Saving 'wae.pyc'
Compressing saved files
✨ Total run time: 4h8m33.150474s
✨ Run 1287 complete
$ spell ls runs/1287/results_mnist
641 Apr 30 19:00 checkpoints
795 Apr 30 14:56 params.txt
1505204 Apr 30 14:56 res_e0000_mb00049.png
1373522 Apr 30 14:56 res_e0000_mb00099.png
1397305 Apr 30 14:57 res_e0000_mb00149.png
1382843 Apr 30 14:57 res_e0000_mb00199.png
1349543 Apr 30 14:57 res_e0000_mb00249.png
...
1275053 Apr 30 18:58 res_e0099_mb00349.png
1266474 Apr 30 18:58 res_e0099_mb00399.png
1276125 Apr 30 18:59 res_e0099_mb00449.png
1274966 Apr 30 18:59 res_e0099_mb00499.png
1273144 Apr 30 18:59 res_e0099_mb00549.png
1283237 Apr 30 18:59 res_e0099_mb00599.png
$ spell cp runs/1287/results_mnist/res_e0001_mb00049.png
Copying file to ./res_e0001_mb00049.png
1.3M
$ open res_e0001_mb00049.png
$ spell cp runs/1287/results_mnist/res_e0001_mb00549.png
Copying file to ./res_e0001_mb00549.png
1.3M
$ open res_e0001_mb00549.png
| 5
| null |
2018-05-01
|
2018-05-01 20:14:11
|
2018-05-01
|
2018-05-01 22:39:33
| 2
| false
|
en
|
2018-05-01
|
2018-05-01 22:39:33
| 6
|
18f499914e47
| 2.613522
| 6
| 0
| 0
|
Running ICLR is a series of quick-start guides for running the code presented here at ICLR in Vancouver, BC.
| 3
|
Reproducing ICLR: Wasserstein Auto-Encoders
Running ICLR is a series of quick-start guides for running the code presented here at ICLR in Vancouver, BC.
Wasserstein Auto-Encoders. Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf. arXiv github OpenReview
Wasserstein Auto-Encoders are generative models/auto-encoders based on minimizing Wasserstein distance between the input and target distributions. The authors explain: “Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality.”
Let’s try it on MNIST. First off, we clone the repository:
Then using the public MNIST dataset available on Spell we’ll run run.py on a Spell V100.
Even on a V100 the autoencoder took more than 4 hours to train!
In the last few lines of the logs we can see that Spell saved all the files created during the run into runs/1287. Let’s take a look at the results_mnist directory.
Looks like the process writes out one of these png files after every 50 steps. Let’s look at the first image and the last one.
WAE results after 550 steps.
Success! The autoencoder is reproducing the data quite well. The samples certainly look a lot better on the last image than the first one.
If you want to try — sign up for a Spell account and install the cli using pip install spell
Originally published at medium.com.
|
Reproducing ICLR: Wasserstein Auto-Encoders
| 10
|
reproducing-iclr-wasserstein-auto-encoders-18f499914e47
|
2018-06-01
|
2018-06-01 20:48:15
|
https://medium.com/s/story/reproducing-iclr-wasserstein-auto-encoders-18f499914e47
| false
| 591
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Spell
|
Making AI easier
|
67d6dc2aea98
|
spellrun
| 20
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f60682517af3
|
2018-01-26
|
2018-01-26 14:42:26
|
2018-01-26
|
2018-01-26 14:35:18
| 1
| false
|
en
|
2018-01-26
|
2018-01-26 14:43:32
| 6
|
18f5c4228ef2
| 3.120755
| 1
| 0
| 0
|
Written by Russell Haworth, CEO Nominet
| 5
|
Are ethics ruining the AI party?
Written by Russell Haworth, CEO Nominet
Artificial intelligence (AI) is one of the most incredible advancements of our time and has now reached a point at which our society needs to prepare for application on a wide scale. This involves asking the big ethical questions, clipping the wings of AI in the short-term to ensure it stays within moral frameworks we have yet to design.
This is most pertinent for AI that is equipped with machine learning; algorithms are employed to allow a computer to adapt over time in response to stimuli — or ‘learn’ from its interactions. There are supervised and unsupervised approaches to machine learning, with the latter presenting some potential complications. If we can’t supervise we are unable to understand how decisions are reached nor the ‘thought’ process behind them. How do we ensure the route and choices made by AI are ethical?
Ethical decisions for AI
Human beings make decision based on the context, their past and the cultural norms of the society in which they live. AI has no such resources to draw upon. A machine must be programmed not to always make decisions solely based on a mathematical logic but follow an ethical, moral code that human beings have hardwired. A robot needs to know that if a person has run out of meat for dinner, it is unacceptable to cook the cat.
Hard-wiring a complicated ethical code into a machine is a serious challenge for the software developers of today, especially as this decision-making process could make them liable for the consequences. The issue has been brought up often in discussions around autonomous vehicles; the trolley problem of today. What will, and should, a car do in a situation when only one of two lives can be saved — pedestrian or driver? Who makes that decision and who is responsible for the consequences?
Experts have suggested that to remain ethical AI needs to be transparent and trustworthy, working with humans and not as a replacement. AI that takes on cognitive work need to be robust against manipulation, argue researchers from the Machine Intelligence Research Institute. There needs to be a clear proof of the systems and workings of the AI to facilitate an investigation when mistakes are made. If we can’t identify why AI did something, we can’t make sure it doesn’t repeat it.
Cooperation and context
Equally important is cooperation between the parties involved at every step of an AI machine’s design, creation and application. Ethics needs to be considered at the point of creation, entwined in the workings rather than applied in retrospect. It would be easy to imagine the polarisation of software developers or AI manufacturers and ethics committees or risk management experts. As John C Havens, author of Heartifical Intelligence stresses, we “need to inform the AI manufacturing process with programming based on the codification of our deeply held beliefs”. This will be complicated by the commercial nature of AI development and the swift advancements in technology, not to mention the challenge of ‘codifying’ a set of beliefs that all involved can agree on, free from prejudices and bias. Would this vary by country? By industry?
There are also ethical issues to consider beyond the workings of AI and into a wider context: the impact on society and the individual. Unemployment could rise, which psychologists warn could impact mental health, and decisions would need to be made over who benefits from the work of AI and revenue produced. Who would pay the tax required to support a non-working human population? It is likely that a reliance on AI will change human behaviours and interactions — what consequences could there be? We also need to tackle security issues, bias, and potentially even the right of the robots with ‘cognition’.
See opportunities, not limitations
For those who are forging ahead in technology and evolving the capabilities of AI at an extraordinary rate, ethical considerations could be viewed as inhibitors. Ultimately, they are enablers. Technology is only useful to our society if it works with us and our existing systems. Without trust and liability, robust regulations and checks, AI could veer from maliciously lethal to unproductive and ineffective, neither of which are helpful.
Ethicists will come centre stage in the coming years to facilitate the move towards wide scale adoption of AI and ensure automation works for humans and not against them, boosting productivity alongside people. It will be interesting to see how they progress together to facilitate a safe society-wide roll-out of life-changing tech.
Originally published at digileaders.com on January 26, 2018.
|
Are ethics ruining the AI party?
| 1
|
are-ethics-ruining-the-ai-party-18f5c4228ef2
|
2018-01-26
|
2018-01-26 14:49:02
|
https://medium.com/s/story/are-ethics-ruining-the-ai-party-18f5c4228ef2
| false
| 774
|
Thoughts on leadership, strategy and digital transformation across all sectors. Articles first published on the Digital Leaders blog at digileaders.com
| null |
digitalleadersprogramme
| null |
Digital Leaders
|
louise.stokes@digileaders.com
|
digital-leaders-uk
|
DIGITAL LEADERSHIP,DIGITAL TRANSFORMATION,DIGITAL STRATEGY,DIGITAL GOVERNMENT,INNOVATION
|
digileaders
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Digital Leaders
|
Informing and inspiring innovative digital transformation digileaders.com
|
c0cad3f73a0
|
DigiLeaders
| 2,783
| 2,148
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-10
|
2018-09-10 14:12:33
|
2018-09-10
|
2018-09-10 14:16:27
| 7
| false
|
zh
|
2018-09-10
|
2018-09-10 14:16:27
| 1
|
18f8dd3081dd
| 6.792
| 1
| 0
| 0
|
本文来自《Aurora’s Approach to Developement》感想,所有图片引用来自原文。
| 3
|
Aurora的无人车开发准则
本文来自《Aurora’s Approach to Developement》感想,所有图片引用来自原文。
上周阅读了无人车公司Aurora的一篇开发方法的文章,写得很有道理,很意外没有看到中文的分析和翻译,于是今天自己来讲一讲。
对于不了解Aurora的人,该公司由Waymo前CTO Chris Urmson创办。他们认为自动驾驶技术能改善交通安全性,可达性和便利性。他们的使命是让这个技术安全,快速和广泛地应用起来。为了完成这项任务,Aurora分享这篇关于他们技术和开发的战术文章。
自动驾驶是应用科学(applied science)问题,不是传统的产品开发。Aurora通过优化组织使他们的员工更好的去解决难题,并与战略伙伴将技术转化为产品。Aurora不相信纯研究计划和与工程师分离的团队能去的成功,他们相信工作中最需要好的工程师与好的领域专家相互协作。他们不相信普通的管理方法,或者工程师是可替代的。Aurora偏向在组织中培养对工作有深刻理解的领导者。
渐进主义的两面性。(笔者言:我这些天和很多无人车工作者聊天,有的公司很相信渐进主义,不断地找connor case,然后找数据训练模型解决问题,我觉得Aurora讲的两面性很有道理。)Aurora认为系统在L2,L3到L4的领域逐渐提高是不太可行的。在逐步提高的情况下,司机并不能无所顾忌的使用系统,我们也不会发布一个误导驾驶员能力的系统,提高驾驶风险。
相反,Aurora支持的渐进主义是先从有应用价值的关键领域入手,首先确保完全自我驾驶能力,再向外扩展。
优先模拟测试,离线测试。在无人车领域一个常见的想法是自动驾驶系统“只是关于数据”,隐含的假设是拥有最多数据的团队将获胜。追求这种观点会导致生成大量的低价值里程。自动驾驶汽车每小时可以产生数TB的数据,远远超过了处理能力。如果没有好的数据处理系统,能从数据中提取价值的团队将淹没在数据中。
Aurora优先数据处理,保证我们能用数据进行测试:
Aurora不去测试不能模拟的工作。代码在模拟器里应该最终能成功的,但如果我们知道代码在模拟器里不工作,它肯定不会在现实世界中工作。
Aurora强调单元测试,模块测试和完整模拟测试,在代码都通过后才会在路上进行测试。
测试是强化学习的第一步。失败的测试成为以后系统必须解决的问题,用失败指导工作。
火箭vs梯子。有一句短语说:“不要试图建造一个通往月球的梯子”。这个短语来自Bell Labs,当时人们首次使用卷积神经网络读取商业支票,如今类似的技术被运用到计算机视觉的很多地方。对工程师来说,这个梯子每天都会变得更有用一点。但问题是它可能永远不会达到实际目标。实际上我们可能需要一个火箭,最初看起来几乎没有可见的进展,火箭停在架子上。但经过精心打造和测试,这枚火箭将在几天内飞越25万英里。
每一天,Aurora都需要平衡员工和合作伙伴关于造火箭,和造梯子的需求。
我们的目标是安全快速地推出自动驾驶车辆。这意味着我们需要快速设计模型并学习参数。我们更需要快速实验。如果我们可以使用Python创建原型,我们需要快速尝试,做出判断:新的学习系统能否更好地实现目标?如果是,我们就执行,并不在意他是短期的梯子还是花时间的火箭,我们需要执行去给火箭补充燃料。
Aurora的学习方法:
高质量数据:较高质量的数据比大量的低质量的数据更重要。
经验中学习:我们可以从驾驶员演示中获得非常有价值的数据。驾驶员干预措施表明 (a)我们进入这种情况时犯了一个大错误 (b)如何从这种情况中恢复过来。
现实模拟测试中学习:用现实场景作为系统的约束,我们希望得到平滑,可预测,类似人类的驾驶行为。
机器学习的大部分工作都在于infrastructure的设计和支持,这也是Aurora早期工作中的重点。
为了学习性设计。Aurora在软件设计中会仔细考虑如果在框架中进行机器学习,并且进行可重用的结构设计。我们避免构建自定义学习框架,因为这会增加系统复杂性。当需要时,我们应该有一个更好的不在现在标准框架中实现和训练的理由。
事后再学习和设计非常困难。每项设计工作都应考虑我们是不是采用机器学习,如果是,我们需要如何获得有效的训练数据。如果我们不考虑如何获取数据或设计损失函数,则设计的系统将很难会获得有效地标注以及进行扩展。
Perception是一种统计游戏。我们相信有可能造出一辆完全依靠相机的自动驾驶汽车。但是,为了尽快进行试验,我们需要快速的减少错误。从理论上说,如果我们有三个独立的信号源和ε检测错误,将它组合起来就能达到ε三次方检测错误。在实践中则完全达不到理论极限,然而,每百万英里的错误也还可以被提升到每十亿英里的错误,仅使用相机则很难达到这种精确度。
不同的传感器形有不同的优点和缺点;因此,结合多种传感器可以提高系统可靠性。照相机在低光和高速情况下都有识别困难;由于多路径和多普勒模糊,雷达受到有限的分辨率和伪像的影响;激光雷达可能“看到”假象。下面的图片显示了Aurora系统在自动驾驶时的真实情况。
黑暗中的图像
通道里的雷达
雪天里的lidar
地图让一切变得更好。正如人们在去过的地方驾驶会更好一样,Aurora的系统会很好的利用以前驾驶获得的知识。Aurora会制定一系列安全措施,确保车辆在地图更新前能收集数据,计算更新,保证安全驾驶。
为什么不完全放弃地图?这是一个简单的道理,如果一个系统只需要在1%的时间内处理相对于地图的变化,那么无人车可以比实时感知世界的安全性高出两个数量级。
追求工程卓越。Aurora最自豪的地方就是团队能够严谨的进行设计和开发。当我们遇到问题时,我们会尽力寻找可靠的,最佳的解决方案。我们不相信简单的端到端的学习能解决所有问题。相反,我们相信自动驾驶问题需要严谨的方法,将现代机器学习的精华与工程编织在一起,包括实real-time systems,几何学,状态估计,高性能计算,以及decision making和control。
Aurora的目标是结合这些方法中的最佳方法,努力实现高性能的感知和规划系统,安全的像人类一样的驾驶。
自行车在周边旋转
在变道时被干扰
在行人过街时减速
识别在两车之间的摩托
Aurora关注code review,关注代码的高质量,自动化的测试,并且努力构建可扩展,可重复使用的分析工具。通过快速进行实验,将他们变成可靠的代码。
这些谨慎的过程不一定是新颖的,但是非常有用。Aurora的CTO有非常多年经验,公司学到了很多经验教训。关于哪些有效,哪些无效。关于技术的发展方向以及如何避免解决方案不可扩展的问题。
Aurora的经验不仅能在无人车上应用,在不同的机器学习问题上都有帮助。希望这篇文章能让大家有所启示。
注:本文插图来自原文。
Aurora’s Approach to Development
https://medium.com/aurora-blog/auroras-approach-to-development-5e42fec2ee4b
|
Aurora的无人车开发准则
| 1
|
aurora的无人车开发准则-18f8dd3081dd
|
2018-09-10
|
2018-09-10 14:16:27
|
https://medium.com/s/story/aurora的无人车开发准则-18f8dd3081dd
| false
| 56
| null | null | null | null | null | null | null | null | null |
Self Driving Cars
|
self-driving-cars
|
Self Driving Cars
| 13,349
|
Dong Wang
|
Software Engineer, computer vision, machine learning, search, recommendation, algorithm and infrastructure.
|
fe62520df6d7
|
yaoyaowd
| 741
| 260
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-04
|
2017-10-04 14:21:41
|
2017-10-04
|
2017-10-04 14:22:56
| 0
| false
|
en
|
2017-10-04
|
2017-10-04 14:22:56
| 1
|
18f958c6a799
| 0.407547
| 4
| 0
| 0
|
“I still remembered the day, May 28th, 2017 at NEA auditorium, Next to Centre for Cyber Crime Investigation, It was Peerlyst Delhi Chapter…
| 5
|
“I still remembered the day, May 28th, 2017 at NEA auditorium, Next to Centre for Cyber Crime Investigation, It was Peerlyst Delhi Chapter Second MeetUp. I attended Peerlyst Delhi Chapter First MeetUp as attendee and attended Second MeetUp as Volunteer, Great progress, right.” Have you been to a Peerlyst Meetup before?, by Peerlyst Delhi Second Chapter MeetUp
“I still remembered the day, May 28th, 2017 at NEA auditorium, Next to Centre for Cyber Crime Investigation, It was Peerlyst Delhi Chapter Second MeetUp. I attended Peerlyst Delhi Chapter First MeetUp as attendee and attended Second MeetUp as Volunteer, Great progress, right.” Have you been to a Peerlyst Meetup before?
|
“I still remembered the day, May 28th, 2017 at NEA auditorium, Next to Centre for Cyber Crime…
| 10
|
i-still-remembered-the-day-may-28th-2017-at-nea-auditorium-next-to-centre-for-cyber-crime-18f958c6a799
|
2017-10-04
|
2017-10-04 15:50:44
|
https://medium.com/s/story/i-still-remembered-the-day-may-28th-2017-at-nea-auditorium-next-to-centre-for-cyber-crime-18f958c6a799
| false
| 108
| null | null | null | null | null | null | null | null | null |
Telegram
|
telegram
|
Telegram
| 3,592
|
Peerlyst Community
| null |
c8e6ec7b2f87
|
Peerlyst
| 1,480
| 2,309
| 20,181,104
| null | null | null | null | null | null |
0
|
import os
import requests
CENSUS_DATASET = (
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.names",
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
)
def download_data(path='data', urls=CENSUS_DATASET):
if not os.path.exists(path):
os.mkdir(path)
for url in urls:
response = requests.get(url)
name = os.path.basename(url)
with open(os.path.join(path, name), 'w') as f:
f.write(response.content)
download_data()
import pandas as pd
import seaborn as sns
names = [
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income',
]
data = pd.read_csv('data/adult.data', names=names)
data.head()
sns.countplot(y='occupation', hue='income', data=data,)
sns.countplot(y='education', hue='income', data=data,)
import json
meta = {
'target_names': list(data.income.unique()),
'feature_names': list(data.columns),
'categorical_features': {
column: list(data[column].unique())
for column in data.columns
if data[column].dtype == 'object'
},
}
with open('data/meta.json', 'w') as f:
json.dump(meta, f, indent=2)
from sklearn.datasets.base import Bunch
def load_data(root='data'):
# Load the meta data from the file
with open(os.path.join(root, 'meta.json'), 'r') as f:
meta = json.load(f)
names = meta['feature_names']
# Load the readme information
with open(os.path.join(root, 'README.md'), 'r') as f:
readme = f.read()
# Load the training and test data, skipping the bad row in the test data
train = pd.read_csv(os.path.join(root, 'adult.data'), names=names)
test = pd.read_csv(os.path.join(root, 'adult.test'), names=names, skiprows=1)
# Remove the target from the categorical features
meta['categorical_features'].pop('income')
# Return the bunch with the appropriate data chunked apart
return Bunch(
data = train[names[:-1]],
target = train[names[-1]],
data_test = test[names[:-1]],
target_test = test[names[-1]],
target_names = meta['target_names'],
feature_names = meta['feature_names'],
categorical_features = meta['categorical_features'],
DESCR = readme,
)
dataset = load_data()
from sklearn.preprocessing import LabelEncoder
gender = LabelEncoder()
gender.fit(dataset.data.sex)
print(gender.classes_)
print(gender.transform([
' Female', ' Female', ' Male', ' Female', ' Male'
]))
from sklearn.base import BaseEstimator, TransformerMixin
class EncodeCategorical(BaseEstimator, TransformerMixin):
"""
Encodes a specified list of columns or all columns if None.
"""
def __init__(self, columns=None):
self.columns = columns
self.encoders = None
def fit(self, data, target=None):
"""
Expects a data frame with named columns to encode.
"""
# Encode all columns if columns is None
if self.columns is None:
self.columns = data.columns
# Fit a label encoder for each column in the data frame
self.encoders = {
column: LabelEncoder().fit(data[column])
for column in self.columns
}
return self
def transform(self, data):
"""
Uses the encoders to transform a data frame.
"""
output = data.copy()
for column, encoder in self.encoders.items():
output[column] = encoder.transform(data[column])
return output
encoder = EncodeCategorical(dataset.categorical_features.keys())
data = encoder.fit_transform(dataset.data)
imputer = Imputer(missing_values='Nan', strategy='most_frequent')
imputer.fit(dataset.data)
from sklearn.preprocessing import Imputer
class ImputeCategorical(BaseEstimator, TransformerMixin):
"""
Encodes a specified list of columns or all columns if None.
"""
def __init__(self, columns=None):
self.columns = columns
self.imputer = None
def fit(self, data, target=None):
"""
Expects a data frame with named columns to impute.
"""
# Encode all columns if columns is None
if self.columns is None:
self.columns = data.columns
# Fit an imputer for each column in the data frame
self.imputer = Imputer(missing_values=0, strategy='most_frequent')
self.imputer.fit(data[self.columns])
return self
def transform(self, data):
"""
Uses the encoders to transform a data frame.
"""
output = data.copy()
output[self.columns] = self.imputer.transform(output[self.columns])
return output
imputer = ImputeCategorical(['workclass', 'native-country', 'occupation'])
data = imputer.fit_transform(data)
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
# we need to encode our target data as well.
yencode = LabelEncoder().fit(dataset.target)
# construct the pipeline
census = Pipeline([
('encoder', EncodeCategorical(dataset.categorical_features.keys())),
('imputer', ImputeCategorical(['workclass', 'native-country', 'occupation'])),
('classifier', LogisticRegression())
])
# fit the pipeline
census.fit(dataset.data, yencode.transform(dataset.target))
from sklearn.metrics import classification_report
# encode test targets, and strip traililng '.'
y_true = yencode.transform([y.rstrip(".") for y in dataset.target_test])
# use the model to get the predicted value
y_pred = census.predict(dataset.data_test)
# execute classification report
print classification_report(y_true, y_pred, target_names=dataset.target_names)
import pickle
def dump_model(model, path='data', name='classifier.pickle'):
with open(os.path.join(path, name), 'wb') as f:
pickle.dump(model, f)
dump_model(census)
def load_model(path='data/classifier.pickle'):
with open(path, 'rb') as f:
return pickle.load(f)
def predict(model, meta=meta):
data = {} # Store the input from the user
for column in meta['feature_names'][:-1]:
# Get the valid responses
valid = meta['categorical_features'].get(column)
# Prompt the user for an answer until good
while True:
val = " " + raw_input("enter {} >".format(column))
if valid and val not in valid:
print "Not valid, choose one of {}".format(valid)
else:
data[column] = val
break
# Create prediction and label
yhat = model.predict(pd.DataFrame([data]))
return yencode.inverse_transform(yhat)
# Execute the interface
model = load_model()
predict(model)
pd.read_csv('adult.data', sep="\s*,", names=names)
| 60
|
f4f1e49a4f74
|
2017-12-26
|
2017-12-26 15:38:16
|
2017-12-26
|
2017-12-26 15:41:53
| 3
| false
|
en
|
2017-12-26
|
2017-12-26 15:41:53
| 15
|
18f996c4d7cf
| 14.682075
| 1
| 0
| 0
|
An end-to-end machine learning example using Pandas and Scikit-Learn
| 5
|
Building a Classifier from Census Data
An end-to-end machine learning example using Pandas and Scikit-Learn
By Benjamin Bengfort
One of the machine learning workshops given to students in the Georgetown Data Science Certificateis to build a classification, regression, or clustering model using one of the UCI Machine Learning Repository datasets. The idea behind the workshop is to ingest data from a website, perform some initial analyses to get a sense for what’s in the data, then structure the data to fit a Scikit-Learn model and evaluate the results. Although the repository does give advice as to what types of machine learning might be applied, this workshop still poses a challenge, especially in terms of data wrangling.
In this post, I’ll outline how I completed this workshop alongside my students this past weekend. For those new to machine learning or to Scikit-Learn, I hope this is a practical example that may shed light on many challenges that crop up developing predictive models. For more experienced readers, I hope that I can challenge you to try this workshop, and to contribute iPython notebooks with your efforts as tutorials!
Data Ingestion
The first part of the workshop is to use the UCI Machine Learning Repository to find a non-trivial dataset with which to build a model. While the example datasets included with Scikit-Learn are good examples of how to fit models, they do tend to be either trivial or overused. By exploring a novel dataset with several (more than 10) features and many instances (more than 10,000), I was hoping to conduct a predictive exercise that could show a bit more of a challenge.
There are around 350 datasets in the repository, categorized by things like task, attribute type, data type, area, or number of attributes or instances. I ended up choosing a Census Income dataset that had 14 attributes and 48,842 instances. The task listed was a binary classifier to build a model that could determine from census information whether or not the person made more than $50k per year.
Every dataset in the repository comes with a link to the data folder, which I simply clicked and downloaded to my computer. However, in an effort to make it easier for you to follow along, I’ve included a simple download_data function that uses requests.py to fetch the data.
This code also helps us start to think about how we’re going to manage our data on disk. I’ve created a data folder in my current working directory to hold the data as it's downloaded. In the data management section, we'll expand this folder a bit further to be loaded as a Bunch object.
Data Exploration
The very first thing to do is to explore the dataset and see what’s inside. The three files that downloaded do not have a file extension, but they are simply text files. You can change the extension to .txt for easier exploration if that helps. By using the head and wc -l commands on the command line, our files appear to be as follows:
adult.data: A CSV dataset containing 32,562 rows and no header
adult.names: A text file containing meta information about the dataset
adult.test: A CSV dataset containing 16,283 rows with a weird first line
Clearly this dataset is intended to be used for machine learning, and a test and training data set has already been constructed. Similar types of split datasets are used for Kaggle competitions and academic conferences. This will save us a step when it comes to evaluation time.
Since we already have a csv file, let’s explore the dataset using Pandas:
Because the CSV data doesn’t have a header row, I had to supply the names directly to the pd.read_csv function. To get these names, I manually constructed the list by reading the adult.names file. In the future, we'll store these names as a machine readable JSON file so that we don't have to manuually construct it.
By glancing at the first 5 rows of the data, we can see that we have primarily categorical data. Our target, data.income is also currently constructed as a categorical field. Unfortunately, with categorical fields, we don't have a lot of visualization options (quite yet). However, it would be interesting to see the frequencies of each class, relative to the target of our classifier. To do this, we can use Seaborn's countplot function to count the occurrences of each data point. Let's take a look at the counts of data.occupation and data.education — two likely predictors of income in the census data:
The countplot function accepts either an x or a y argument to specify if this is a bar plot or a column plot. I chose to use the y argument so that the labels were readable. The hue argument specifies a column for comparison; in this case we're concerned with the relationship of our categorical variables to the target income. Go ahead and explore other variables in the dataset, for example data.race and data.sex to see if those values are predictive of the level of income or not!
Data Management
Now that we’ve completed some initial investigation and have started to identify the possible feautures available in our dataset, we need to structure our data on disk in a way that we can load into Scikit-Learn in a repeatable fashion for continued analysis. My proposal is to use the sklearn.datasets.base.Bunch object to load the data into data and target attributes respectively, similar to how Scikit-Learn's toy datasets are structured. Using this object to manage our data will mirror the native API and allow us to easily copy and paste code that demonstrates classifiers and technqiues with the built in datasets. Importantly, this API will also allow us to communicate to other developers and our future-selves exactly how to use the data.
In order to organize our data on disk, we’ll need to add the following files:
README.md: a markdown file containing information about the dataset and attribution. Will be exposed by the DESCR attribute.
meta.json: a helper file that contains machine readable information about the dataset like target_names and feature_names.
I constructed a pretty simple README.md in Markdown that gave the title of the dataset, the link to the UCI Machine Learning Repository page that contained the dataset, as well as a citation to the author. I simply wrote this file directly using my own text editor.
The meta.json file, however, we can write using the data frame that we already have. We've already done the manual work of writing the column names into a names variable earlier, there's no point in letting that go to waste!
This code creates a meta.json file by inspecting the data frame that we have constructued. The target_names column, is just the two unique values in the data.income series; by using the pd.Series.unique method - we're guarenteed to spot data errors if there are more or less than two values. The feature_names is simply the names of all the columns.
Then we get tricky — we want to store the possible values of each categorical field for lookup later, but how do we know which columns are categorical and which are not? Luckily, Pandas has already done an analysis for us, and has stored the column data type, data[column].dtype, as either int64or object. Here I am using a dictionary comprehension to create a dictionary whose keys are the categorical columns, determined by checking the object type and comparing with object, and whose values are a list of unique values for that field.
Now that we have everything we need stored on disk, we can create a load_data function, which will allow us to load the training and test datasets appropriately from disk and store them in a Bunch:
The primary work of the load_data function is to locate the appropriate files on disk, given a root directory that's passed in as an argument (if you saved your data in a different directory, you can modify the root to have it look in the right place). The meta data is included with the bunch, and is also used split the train and test datasets into data and target variables appropriately, such that we can pass them correctly to the Scikit-Learn fit and predict estimator methods.
Feature Extraction
Now that our data management workflow is structured a bit more like Scikit-Learn, we can start to use our data to fit models. Unfortunately, the categorical values themselves are not useful for machine learning; we need a single instance table that contains numeric values. In order to extract this from the dataset, we’ll have to use Scikit-Learn transformers to transform our input dataset into something that can be fit to a model. In particular, we’ll have to do the following:
encode the categorical labels as numeric data
impute missing values with data (or remove)
We will explore how to apply these transformations to our dataset, then we will create a feature extraction pipeline that we can use to build a model from the raw input data. This pipeline will apply both the imputer and the label encoders directly in front of our classifier, so that we can ensure that features are extracted appropriately in both the training and test datasets.
Label Encoding
Our first step is to get our data out of the object data type land and into a numeric type, since nearly all operations we’d like to apply to our data are going to rely on numeric types. Luckily, Sckit-Learn does provide a transformer for converting categorical labels into numeric integers: sklearn.preprocessing.LabelEncoder. Unfortunately it can only transform a single vector at a time, so we'll have to adapt it in order to apply it to multiple columns.
Like all Scikit-Learn transformers, the LabelEncoder has fit and transform methods (as well as a special all-in-one, fit_transform method) that can be used for stateful transformation of a dataset. In the case of the LabelEncoder, the fit method discovers all unique elements in the given vector, orders them lexicographically, and assigns them an integer value. These values are actually the indices of the elements inside the LabelEncoder.classes_ attribute, which can also be used to do a reverse lookup of the class name from the integer value.
For example, if we were to encode the gender column of our dataset as follows:
We can then transform a single vector into a numeric vector as follows:
Obviously this is very useful for a single column, and in fact the LabelEncoder really was intended to encode the target variable, not necessarily categorical data expected by the classifiers.
Note: Unfortunately, it was at this point that I realized the values all had a space in front of them. I’ll address what I might have done about this in the conclusion.
In order to create a multicolumn LabelEncoder, we’ll have to extend the TransformerMixin in Scikit-Learn to create a transformer class of our own, then provide fit and transform methods that wrap individual LabelEncoders for our columns. My code, inspired by the StackOverflow post “Label encoding across multiple columns in scikit-learn”, is as follows:
This specialized transformer now has the ability to label encode multiple columns in a data frame, saving information about the state of the encoders. It would be trivial to add an inverse_transformmethod that accepts numeric data and converts it to labels, using the inverse_transform method of each individual LabelEncoder on a per-column basis.
Imputation
According to the adult.names file, unknown values are given via the "?" string. We'll have to either ignore rows that contain a "?" or impute their value to the row. Scikit-Learn provides a transformer for dealing with missing values at either the column level or at the row level in the sklearn.preprocessing library called the Imputer.
The Imputer requires information about what missing values are, either an integer or the string, Nan for np.nan data types, it then requires a strategy for dealing with it. For example, the Imputer can fill in the missing values with the mean, median, or most frequent values for each column. If provided an axis argument of 0 then columns that contain only missing data are discarded; if provided an axis argument of 1, then rows which contain only missing values raise an exception. Basic usage of the Imputer is as follows:
Unfortunately, this would not work for our label encoded data, because 0 is an acceptable label — unless we could guarentee that 0 was always "?", then this would break our numeric columns that already had zeros in them. This is certainly a challenging problem, and unfortunately the best we can do, is to once again create a custom Imputer.
Our custom imputer, like the EncodeCategorical transformer takes a set of columns to perform imputation on. In this case we only wrap a single Imputer as the Imputer is multicolumn — all that's required is to ensure that the correct columns are transformed. I inspected the encoders and found only three columns that had missing values in them, and passed them directly into the customer imputer.
I had chosen to do the label encoding first, assuming that because the Imputer required numeric values, I'd be able to do the parsing in advance. However, after requiring a custom imputer, I'd say that it's probably best to deal with the missing values early, when they're still a specific value, rather than take a chance.
Model Build
Now that we’ve finally acheived our feature extraction, we can continue on to the model build phase. To create our classifier, we’re going to create a Pipeline that uses our feature transformers and ends in an estimator that can do classification. We can then write the entire pipeline object to disk with the pickle, allowing us to load it up and use it to make predictions in the future.
A pipeline is a step-by-step set of transformers that takes input data and transforms it, until finally passing it to an estimator at the end. Pipelines can be constructed using a named declarative syntax so that they’re easy to modify and develop. Our pipeline is as follows:
The pipeline first passes data through our encoder, then to the imputer, and finally to our classifier. In this case, I have chosen a LogisticRegression, a regularized linear model that is used to estimate a categorical dependent variable, much like the binary target we have in this case. We can then evaluate the model on the test data set using the same exact pipeline.
As part of the process in encoding the target for the test data, I discovered that the classes in the test data set had a "." appended to the end of the class name, which I had to strip in order for the encoder to work! However, once done, I could predict the y values using the test dataset, passing the predicted and true values to the classifier report.
The classifier I built does an ok job, with an F1 score of 0.77, nothing to sneer at. However, it is possible that an SVM, a Naive Bayes, or a k nearest neighbor model would do better. It is easy to construct new models using the pipeline approach that we prepared before, and I would encourage you to try it out! Furthermore, a grid search or feature analysis may lead to a higher scoring model than the one we quickly put together. Luckily, now that we’ve sorted out all the pipeline issues, we can get to work on inspecting and improving the model!
The last step is to save our model to disk for reuse later, with the pickle module:
You should also dump meta information about the date and time your model was built, who built the model, etc. But we’ll skip that step here, since this post serves as a guide.
Model Operation
Now it’s time to explore how to use the model. To do this, we’ll create a simple function that gathers input from the user on the command line, and returns a prediction with the classifier model. Moreover, this function will load the pickled model into memory to ensure the latest and greatest saved model is what’s being used.
The hardest part about operationalizing the model is collecting user input. Obviously in a bigger application this could be handled with forms, automatic data gathering, and other advanced techniques. For now, hopefully this is enough to highlight how you might use the model in practice to make predictions on unknown data.
Conclusion
This walkthrough was an end-to-end look at how I performed a classification analysis of a dataset that I downloaded from the Internet. I tried to stay true to my exact workflow so that you could get a sense for how I had to go about doing things with little to no advanced knowledge. As a result, there are definitely some things I might change if I was going to do this over.
One place that I struggled with was trying to decide if I should write out wrangled data back to disk, then load it again, or if I should maintain a feature extraction of the raw data. I kept going back and forth, particularly because of silly things like the spaces in front of the values. This could be fixed by loading the data as follows:
Using a regular expression for the seperator that would automatically strip whitespace. However, I’d already gone too far to make these changes!
I also had problems with the ordering of the label encoding and the imputation. Given another chance, I think I would definitely wrangle and clean both datasets and save them back to disk. Even just little things like the “.” at the end of the class names in the test set were annoyances that could have been easily dealt with.
Now that you’ve had a chance to look at my walkthrough, I hope you’ll try a few on your own and send your workflows and analyses to us so that we can post them as well!
District Data Labs provides data science consulting and corporate training services. We work with companies and teams of all sizes, helping them make their operations more data-driven and enhancing the analytical abilities of their employees. Interested in working with us? Let us know!
|
Building a Classifier from Census Data
| 3
|
building-a-classifier-from-census-data-18f996c4d7cf
|
2018-05-17
|
2018-05-17 03:52:08
|
https://medium.com/s/story/building-a-classifier-from-census-data-18f996c4d7cf
| false
| 3,745
|
Data science tutorials, thought pieces, and other awesome content.
| null |
DistrictDataLabs
| null |
District Data Labs
|
tojeda@districtdatalabs.com
|
district-data-labs
|
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,ANALYTICS,BIG DATA
|
DistrictDataLab
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
District Data Labs
|
Data science consulting firm, research lab, and open source collaborative.
|
96c976e31f28
|
DistrictDataLabs
| 921
| 471
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-07
|
2018-04-07 19:51:21
|
2018-04-09
|
2018-04-09 03:41:31
| 14
| false
|
en
|
2018-04-09
|
2018-04-09 03:56:05
| 4
|
18f9a67d3dab
| 4.680189
| 2
| 0
| 0
|
The only thing that put me into a hard thinking situation in a month long journey on Deep Learning was “WORKING OF BACK-PROPAGATION” and…
| 4
|
Back-propagation — A Simple Explanation
Fig. 1. An Intuition — back-prop (borrowed from https://www.youtube.com/watch?v=tIeHLnjs5U8 )
The only thing that put me into a hard thinking situation in a month long journey on Deep Learning was “WORKING OF BACK-PROPAGATION” and when I went online to get out of that trouble, I experienced a shortage of simple explanation. Presence of ML libraries like TensorFlow, Keras, PyTorch and Theano have made life super easy as they provide all tools to work without knowing much about the mechanism going under the hood. Now, believe me and I’m sorry to break your heart, but you’ll never reach an expert level in this field not even intermediate if you ignore behind the scene working of some operations and Back-prop is one of them as it helps you understand a lot of issues like Vanishing Gradients, Saturation of Neurons and reasons for random initialization of weights.
Now, I’m not going in detail on “why” part, because an awesome explanation is already out there by Andrej Karpathy.
Yes you should understand backprop
When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to…medium.com
After going through a lot of videos and posts, I finally understood it and I’m going to try my best to make you grasp it too.
Notations Used :
Notations used
Fig. 2. A simple Neural Network of 3 layers with a single neuron in each network
I’m considering a simple neural network of 3 layers with a single neuron in each layer as our main aim is to understand the working of back-propagation. Firstly, the input to the model propagates forward through the model and finally reaches the output layer where the loss is calculated. Also, each node in the model receive it’s input from the previous layer and this input is the weighted sum of each of the weights multiplied by previous layer’s output and then the weighted sum is passed to an activation function and the result is then passed as an input to the next layer.
Fig. 3. Forward propagation
Cross Entropy Loss or Error :
Fig. 4. Cross Entropy Loss
OR
Fig. 5. Cross Entropy Loss
After calculating the loss, aim is to minimize the loss and Back-propagation is the tool that is used by Gradient Descent for doing this. In this task of minimizing the loss, derivative or gradient of loss with respect to weights are calculated which are then used to update the weights so that predicted output gets as close as possible to the given label for a particular set of input features.
Now let’s dive into the computation part —
Gradient of loss with respect to the weights of the 3rd layer —
Fig. 6. Gradient of loss with respect to the weight of 3rd layer
Fig. 7. Gradient of loss with respect to the bias of the 3rd layer
The first two terms are same as the terms in the computation of the gradient of loss with respect to the weight and the third term came out to be 1.
Now, let’s calculate the gradient of loss with respect to the weights of the 2nd layer —
Fig. 8. Gradient of loss with respect to the weight of the 2nd layer
Now, the first term in this is not known to us, so let’s find it —
Fig. 9. Gradient of loss with respect to the activation value of 2nd layer
The first two terms of this are already computed(Fig. 6). Now, this value is inserted into the equation above(Fig. 8) and thus the gradient of loss with respect to the weight of second layer is computed. Now, the gradient of loss with respect to the bias of 2nd layer —
Fig. 10. Gradient of loss with respect to the bias of 2nd layer
The first two terms are already computed above(Fig. 8 and Fig. 9). After inserting the values, you will get the required gradient.
Now, let’s calculate the gradient of loss with respect to the weights of the 1st layer —
Fig. 11. Gradient of loss with respect to weight of the 1st layer
Now again, we have to get the first term here and this is how we get it —
Fig. 12. Gradient of loss with respect to the activation of the 1st layer
The first two terms are already computed (Fig. 8 and Fig. 9). And so after inserting all the values we’ll get our gradient of loss with respect to the weight of the 1st layer. Now let’s get the gradient of loss with respect to the bias of 1st layer —
Fig. 13. Gradient of loss with respect to the bias of 1st layer
And again, the first two terms are already computed above(Fig. 11 and Fig. 12). Insert the values and you get the gradient of loss with respect to the bias of the first layer. After getting all these gradients, the weights and biases are updated using an appropriate learning rate after which the prediction is done on new set of features using those weights and biases.
So, this is how we back propagate from the third layer to the first, computing the gradients of the loss or error function with respect to the weights and biases of each layer. And, on adding more units in layers just adds up some more indexes but the basic functionalities remains the same.
NOTE —
Now, for better understanding, I suggest you to write down all equations on paper such that you have eyes on all of them simultaneously and then start going through it.
If you are still unable to grasp this, I suggest you to go through the references below —
Back-Propagation is very simple. Who made it Complicated ?
Learning Outcome: You will be able to build your own Neural Network on a Paper.medium.com
Let me know if this was helpful or not in the comment section and if it is, then let it help others too, Share it. Thank You!!
|
Back-propagation — A Simple Explanation
| 7
|
back-propagation-and-related-issues-a-simple-explanation-18f9a67d3dab
|
2018-04-10
|
2018-04-10 19:05:01
|
https://medium.com/s/story/back-propagation-and-related-issues-a-simple-explanation-18f9a67d3dab
| false
| 856
| null | null | null | null | null | null | null | null | null |
Backpropagation
|
backpropagation
|
Backpropagation
| 125
|
Sharim Pervez
|
AI & ML Enthusiast || Astrophile || Pythonista || Programmer || Teacher
|
bd6802867fe7
|
sharimpervez
| 7
| 58
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-18
|
2018-09-18 23:51:59
|
2018-09-19
|
2018-09-19 00:13:02
| 1
| false
|
en
|
2018-09-19
|
2018-09-19 00:13:02
| 10
|
18f9f780ffee
| 4.056604
| 1
| 0
| 0
|
Why Be Conscious At All?
| 3
|
A Short Potential Solution to the Hard Problem of Consciousness
Why Be Conscious At All?
Dr. David Chalmers writes “A mental state is phenomenally conscious when there is something it is like to be in that state.” He has also coined “The Hard Problem of Consciousness” which is so-called because there is seemingly no reason why a system that processes information should have phenomenally conscious experiences, rather unconscious ones. Regardless of what any system’s or lifeform’s conscious experience of gravity or air-pressure are like, it is thought to be a mystery of why there should be any experience in the first place. After all, we can build satellites and drones that measure and detect gravity and air-pressure automatically, without having any phenomenal sensation. What advantage does consciousness confer?
Indeed in 2018, we find neural networks approximating complex tasks which were previously thought to be in the exclusive domain of conscious human processing, including winning at go, holding (text-based) conversations, labeling images, recommending music, and generating psychedelic art. Neural networks provide a clear example that complex computational tasks can be done without conscious experience. In more traditional accounts of the Hard Problem, this idea is illustrated with reference to “philosophical zombies,” creatures which do just about everything you or I can do, but just in terms of automatic responses to data collected by the zombie’s senses.
I suspect that the reason the Hard Problem is hard is because the referent of the word “consciousness” implicitly contains two separable concepts, rather than just one as many people have assumed. I will do my best to disentangle them, though seeing as how they are linked, please forgive some circularity of definitions while I build to the conclusion.
This image has the quality of capturing Qually The Qualified California Quail’s likeness, which some would call Qualeish. Photo from Wikipedia.
The first part is the “something it is like” ness. Though it goes by many names (including phenomenal or epiphenomenal experience, and qualia), it is relatively straightforward and has been discussed by many. To give a concrete example, the feeling of hunger is one such conscious state with a distinct subjective flavor — we know we are hungry when our conscious experience is colored by a distinct overtone that abates when we eat or are injected with the right fluids.
Crucially, the second part is that conscious experiences cannot be conceived of without something that consciously observes. My point is that you need both words in “conscious observer” to get an idea of what’s going on when you say the words “consciousness” or “phenomenally conscious experience.
To get concrete, certain wavelengths of light, after they are run through a bunch of unconscious neural processing, are at some point consciously perceived to be red by humans. It is absurd to conceive of “redness” without some thing to observe it.
Consider that whenever we imagine if what it would mean for a system (like a lifeform or neural network) to have phenomenal states, either we imagine that the system has an “I” to consciously observe the data it operates with, or we use our “observing I” to perceive our conscious imagining of the system’s “something it is to be like” states. This leads me to state a proposition:
1. It is by definition impossible to conceive of a conscious state without reference to something consciously observing said state.
In other words, a quail on the dark side of the moon only “has” qualia when we read or think about this sentence. Or as neuroscientist, psychiatrist, and creator of “Integrated Information Theory,” Giulio Tononi (who more or less believes that anything dealing with information is, to some infinitesimal degree, conscious) almost put it, consciousness is data, observed. It is the very act of being able to observe some experience related to data in the first place that makes said experience, by definition, conscious.
Assuming that the “something that observes” lies in the system we are imagining, and not in us, this bit must form the base of the “I”-ness of said system. This leads me to a second proposition:
2. It is impossible to conceive of any creature’s or system’s sense of “I” that doesn’t have any phenomenally conscious experiences.
A moment’s thought about philosophical zombies, comatose patients, future neural-network-driven recommendation systems, and so on, reveals that these things are lacking a sense of “observing I,” though comatose patients recover an “observing I” when they become conscious.
Allow me to make two clarifications. First, in humans, ego, identity, and habits all grow around the observing I, and when comatose patients recover they regain some or all of these things. Second, by “observing I” I mean the part of you that’s left over if you get rid of all of:
your entire past,
what you are thinking about now,
what you are planning for in the future,
the content of your working, short-term, and long-term memory,
what you are expecting,
any ideas you have about yourself,
any mental models you’ve formed from being in the world,
and anything that isn’t a sensation that is registering right now.**
We thus have a functional account of the purpose of consciousness: phenomenally conscious experiences are impossible to divorce from a system’s “consciously observing I.” And if you happen to be in the business of self-preservation, having a sense of “I” or “I am”-ness might make feedback, rewards, punishments for various goal-selection and goal-pursuit strategies more effective.
Taken together, both propositions imply that Tononi’s pan-psychism, while a laudable commitment to consistency, is not correct; a light-diode has no more a sense of “I” than the banana I just ate, even though both are capable of responding to changes in their environments (of concentrations of light and Ethylene gas, respectively). Because these things have no “I” that observes the data they are operating with, by definition, there is nothing to generate their conscious experiences.
..
** Various writers, sages, and gurus make this point; it is well known in meditative traditions. Alan Watts writes about it in The Book, Sam Harris writes about it in Waking Up, and for someone a tad less mainstream, the spiritual teacher Mooji also talks about it.
|
A Short Potential Solution to the Hard Problem of Consciousness
| 50
|
a-short-potential-solution-to-the-hard-problem-of-consciousness-18f9f780ffee
|
2018-09-19
|
2018-09-19 00:13:02
|
https://medium.com/s/story/a-short-potential-solution-to-the-hard-problem-of-consciousness-18f9f780ffee
| false
| 1,022
| null | null | null | null | null | null | null | null | null |
Philosophy
|
philosophy
|
Philosophy
| 39,496
|
Evan Warfel
| null |
468f06b04195
|
evanwarfel
| 210
| 148
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-12
|
2018-09-12 06:09:06
|
2018-09-17
|
2018-09-17 06:52:18
| 2
| false
|
en
|
2018-09-17
|
2018-09-17 07:11:29
| 10
|
18fa4f67b95f
| 7.802201
| 1
| 0
| 0
|
Web development is a regularly evolving field. Clients dependably search for rich encounters and customized content. At last, all they need…
| 5
|
How Artificial Intelligence is Transforming the Future of Web Development — Fraction Tech
Web development is a regularly evolving field. Clients dependably search for rich encounters and customized content. At last, all they need is the keen web application which conveys hyper-altered client encounters. Be that as it may, this makes a testing work for web developers in light of the fact that the conventional methodologies in web development are not the response to this. Subsequently, utilizing AI in web development and related apparatuses can better the things.
Via automating the web development process, web developers need to put less consideration regarding essential awkward undertakings and for the most part center around the esteem including viewpoints, for example, arranging super client encounter, defining plan methodology and different exercises. Actually, human-machine cooperation is fit enough to change the web.
Artificial Intelligence in Web Development
Machine Learning, a branch of Artificial Intelligence, offers another favorable position face to face machine cooperations. Without learning abilities, applications will approach an issue similarly on numerous occasions, and commit a similar error without altering or improving the arrangement in light of related knowledge.
Machine Learning is an empowering innovation that permits web applications to adjust after some time by watching and gaining from clients’ propensities, characteristics, and inclinations. Client encounter enhances because of the applications simply being more quick-witted.
Read More Flutter Vs. React Native: Let’s See Who the Winner is and What You Need To Know
With the previously mentioned upper hands, for what reason are AI-empowered sites not sent wherever starting today? One reason is that, in spite of its long history, AI is as yet a developing innovation to the extent standard Information Technology is concerned. The apparatuses that AI utilizes, (for example, neural systems, hereditary calculations, Markov chains, Bayes classifiers) are only babbling to standard web developers. To incorporate artificial intelligence with a web application without any preparation is out of the scope for generally company’s.
The capability of commercializing AI did not get away from the consideration of the best worldwide web innovation players. Google, Facebook, and company’s of that kind have thought of AI toolbox that empower the stopping of instant normal dialect comprehension and machine learning highlights into web applications.
wit.ai and Dialog flow (in the past api.ai) are free administrations claimed by Facebook and Google separately. Conversely, Amazon Lex, IBM Watson, Microsoft LUIS are business paid administrations.
The AI toolbox offered by worldwide industry pioneers have made conceivable the appropriation of AI in big business web applications. You never again need to enlist AI PhDs to engage your sites with normal dialect understanding capacities.
Rather, standard web developers can incorporate AI into chatbots on your current web and versatile innovation stages. developers shouldn’t be retrained to code in obscure AI programming dialects to exploit the innovation. Rather, they work with APIs and devices which they are as of now comfortable with, for example .NET, WPF, WCF, and different technologies..
Conveying AI utilizing the above toolbox isn’t without challenges. In spite of the toolbox’ best push to shroud the complexities of Artificial Intelligence, developers still need to take in another dialect and ideas, for example, specialists, goals, elements, and activities. It is consoling, nonetheless, to realize that online documentation is promptly accessible for updating developers with the toolboxes. Figuring out how to incorporate and alter the innovation is especially practical.
Read More Swift versus Objective-C: Which Is Better For iOS App Development?
A more considerable test for coordinating the toolbox is that the product requires extra customization with the end goal for it to comprehend the particular ideas in your specific application space. These toolboxes are intended to be broadly useful beginning stages for understanding everyday dialect develops, and may not be particular enough to parse the area particular ideas or the regular errands that your web guests may wish achieved.
Subsequently, human coaches must give the product an idea progression that is particular to your application. Likewise, to enhance the precision of sentence parsing for your specific application space, coaches should unequivocally give sentence cases of the ordinary demands that your applications are intended to deal with.
This preparation segment is exceptionally tedious and dreary, yet fundamental keeping in mind the end goal to decrease the possibility of blunders in understanding clients demands.
To conquer the underlying preparing obstacles and to kick off the reception of AI toolbox, toolbox merchants have begun to give pre-assembled area models that objective particular businesses and assignments. For instance, Dialog flow offers pre-manufactured operators that objective ventures, for example, bistros, eateries, inns, carriers, and regular assignments, for example, item bolster, delineate, web look, and so on. Microsoft LUIS highlights pre-constructed spaces for taxis, eatery reservation, motion picture theaters, wellness following, and so forth.
The pattern to give prepackaged spaces will abbreviate an opportunity to convey AI functionalities in web applications.
Read More Enterprise Chatbot solutions to increase productivity | Intelligent Automation for Enterprise
In this blog, we will talk about some ways how artificial intelligence can enhance web development.
1) By reclassifying web coding with self-learning calculations
Artificial intelligence can encourage web programming as well. It can perform different fundamental assignments like including, refreshing the records to the database, anticipating which bits of code are destined to be used to take care of an issue and using those expectations to invite web developers for receiving a specific arrangement.
For a case, Pix2Code is robotizing front-end development. Besides, AI calculations can be utilized to make self-adjusting codes from the scratch being it as a completely practical code with no human inclusion.
Along these lines, developers can assemble more quick-witted applications and bots utilizing Artificial Intelligence innovation at considerably speedier rates than previously.
In any case, they would need to prepare these machines and feed them with great informational indexes, in the first place. The wiser the outline and the more total the preparation, the better the outcomes that these frameworks will deliver. This is the place the aptitudes of the web developers have an essential effect in the web development.
2) By improving web development utilizing AI virtual aides
For all web developers, it is a dull activity to outline essential designs and formats of the site pages. To facilitate these assignments, AI instruments, for example, virtual colleagues can enable you to out in the wake of disentangling the entire plan and web development process.
By making inquiries of the client (for this situation, the proprietor of the site or even the engineer) and concentrate the substance of their answers, the AI virtual aides will make customized content with the correct mix of brand, outline, format, and substance required by that client.
Give us a chance to take a case of Adobe Sensei with which you can consequently investigate the passages and prescribe outline components to the client.
These extents from the robotization of fundamental photograph altering aptitudes, for example, trimming utilizing picture acknowledgment systems to the making of components in pictures that did not already exist by concentrate the neighboring pixels.
Subsequently, web developers now simply need to center around preparing a machine to think and act like an originator.
3) By tackling web development for SEO with AI
Search engine optimization procedures depend to an expansive degree on the numerical crunch. Many web analysis devices are great, in any case, their potential is presently constrained by the handling capacities of people who translate that information into their sites.
With the development and mining of information upheld by AI, you can amplify the utility of a site’s metadata and other information and metadata created by the client.
Prescient motors fabricated utilizing Artificial Intelligence technologies can produce thoughts that can point developers at inconsistencies in the design of their site or feature the terrible substance from an SEO perspective.
Utilizing such thoughts, AI can list better approaches to outline sites and create web content that associates with the intended interest group.
Market Brew is a falsely savvy SEO stage that utilizations artificial intelligence to enable developers to respond and plan the substance of their sites with the goal that web search tools can see them..
Read More How to Develop The Top Notch Web and Mobile Education App
4) By mechanizing complex prerequisite social occasion and investigation
You can robotize the way toward gathering your customer’s prerequisites and end-client stories with no human intercession by utilizing a voice right hand or an AI-fueled chatbot.
You can likewise set up a nitty-gritty portrayal of the gathered information and can utilize different information extraction devices with a specific end goal to produce thoughts which at that point drive the website design and development technique.
This has turned out to be conceivable through a deliberately built framework that utilizations counterfeit vision, NLP, machine learning, and calculations and picture acknowledgment apparatuses among others.
For example, Kore.ai is one of those stages that enable chiefs with the learning they have to drive business results inside information-driven examines.
5) By furnishing prevalent end-client involvement with AI web development
Chatbots with AI feed can take client care and collaboration to the following level. A straightforward standards-based chatbot reacts just to the particular predefined summons. A chatbot imbued with AI, then again, can mimic a genuine discussion by discovering some new information from every discussion and adjusting the reactions and activities appropriately. They can computerize routine assignments and give applicable data and administrations.
You can envision the different potential outcomes here. These chatbots can enhance guest interest by reacting to inquiries, remarking on blog entries and giving continuous help and customization.
The ShopBot controlled by eBay’s AI is one of that chatbot based on Facebook Messenger that can enable shoppers to diminish the best eBay offers from their entire postings and react to client-driven inquiries.
Read More 10 Website Development Trends That Will Shape The Future of The IT Industry in 2018
6) By putting testing and quality affirmation on auto-pilot
By utilizing AI calculations, one can test the application itself with a minor human information. In light of chronicled information, they can anticipate the real parameters of programming testing forms. They can likewise discover blame examples and enhance blame forecasts with considerably higher proficiency than conventional quality control approaches.
Consequently, the distinguishing proof and repair of blunders/bugs will never again be a long and moderate process. At this very moment, Microsoft is planning to dispatch an Artificial Intelligence bug web crawler for developers, Microsoft Security Risk Detection.
In this new QA condition driven by artificial intelligence, developers can find more viable approaches to test, distinguish exceptions all the more rapidly and work on powerful code scope systems, all without fundamental test understanding. In straightforward terms, developers can center around consummating the assemblage while the AI handles complex experiments and coming about mistakes consequently.
Ending point:
So far we have seen the real routes by which artificial intelligence is enhancing the web development. This has changed web development to smart application development, computerized website design, exceptionally capable proposal motors and substantially more. It is simply the AI which will make so crucial to the web development biological system by presenting the shrewd web.
The following point of reference for AI in web development is to create self-enhancing applications that can think past the breaking points of human reasoning. So there is an extraordinary seek after a progressive AI that will affect companies for more prominent’s benefit.
In this way, in the event that you are getting ready for enhanced web development, at that point, you may search for a reliable web development company like Fraction Tech. We give custom web development benefits in .NET, WPF, WCF, and different technologies.
Tag: hire web developer, responsive web design
|
How Artificial Intelligence is Transforming the Future of Web Development — Fraction Tech
| 5
|
how-artificial-intelligence-is-transforming-the-future-of-web-development-fraction-tech-18fa4f67b95f
|
2018-09-17
|
2018-09-17 07:11:29
|
https://medium.com/s/story/how-artificial-intelligence-is-transforming-the-future-of-web-development-fraction-tech-18fa4f67b95f
| false
| 1,966
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Fraction Tech PVT LTD
|
Fraction Tech is an India based fast growing offshore website, software, mobile, cloud-based application design, and development company in India.
|
70fcbfe41664
|
fraction.tech
| 15
| 3
| 20,181,104
| null | null | null | null | null | null |
0
|
l = [5, -3, 100, 70, 2]
larger_than_50 = [e for e in l if e > 50]
print(larger_than_50)
#output
[100, 70]
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch. Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!
docker pull leemeng/cat
docker run -dp 1234:5000 leemeng/cat
| 4
| null |
2018-08-10
|
2018-08-10 12:41:38
|
2018-08-10
|
2018-08-10 12:53:29
| 7
| false
|
zh-Hant
|
2018-08-10
|
2018-08-10 12:53:29
| 27
|
18fa82d6767f
| 2.846226
| 0
| 0
| 0
|
Pandas、Docker 以及數據時代的反思
| 4
|
資料科學文摘 Vol.3
Pandas、Docker 以及數據時代的反思
不同於上週的文摘 Vol.2 產品理解以及 DS / DE 之路,這週的選文比較技術以及實作導向。本週將導讀 3 篇使用 Python 以及 Pandas 的文章,並鼓勵讀者實際動手學習。
我們也會看到如何使用 Docker 來讓資料科學變得更簡單,並提供一個有趣的貓咪圖片辨識 App 給有興趣的讀者參考。
最後,讓我們分別看看哈佛商業評論以及美國前首席資料科學家 DJ Patil 談談如何讓資料科學在企業內普及,以及數據時代我們面臨的各種道德議題。
How to Master Your Skills for Pandas?
Python 裡頭最著名的資料處理 library 非 Pandas 莫屬了。這篇文章使用互動式的環境,列出挺完整的 Pandas 指令讓讀者可以邊參考 sample code 邊自己動手玩玩看。
其中包含各種利用 Series 以及 Dataframe 兩種 Pandas 常見的資料格式來對數據進行各種操作,適合沒碰過 Pandas 的新手以及想要重新 refresh 語法的人。
How to rewrite your SQL queries in Pandas, and more
提供常見的 SQL 查詢以及其對應的 Pandas 寫法。一個有效率的資料科學家通常需要 SQL 及 pandas 兼具。雖然這篇一開始的目標讀者是那些已經熟悉 SQL 並打算使用 Pandas 的讀者,我認為熟悉 Pandas 但還不了解 SQL 的同學們也能從這篇學到點東西。
這篇適合至少懂 Python 或是 SQL 並想學習另外一個語言的讀者。如果你想要深入了解 SQL 或是其與 Python 之間的差異,你可以看看我之前寫的為何資料科學家需要學習 SQL。
這篇 Hackernon 文章則簡單介紹 Functional Programming 在 Python 可以如何被實作,函式(function)是怎麼被視為 Python 的第一公民以及我們能如何活用函式如 Map、Filter 函式。
如果你剛起步,想要有效率地學習 Python 的話,我建議可以從 List comprehension 開始學起。
一個簡單的例子是假設我們想從一個 List 中取得大於 50 的數字:
文章的後半段則透過 The Zen of Python (Python 的禪學)來說明為何使用 List comprehension 會比使用傳統 Functional Programming 中的 Map、Filter 函式來得簡單。
Python 有一個著名的彩蛋,你可以利用 import this 來顯示 The Zen of Python,它提供使用 Python 的人一個簡單的開發準則,具體如下:
Docker for Data Scientists
很簡單地說明常見的 Docker 術語以及使用 Docker 可以為資料科學家帶來的好處:
節省建置開發 / 分析環境所需的時間
增加可重現性(Reproducibility)
抽象化作業系統(OS)的概念,再也沒有只能在 Mac 跑而不能在 Windows 跑的問題
這篇提供非常初級的指令來開始在本機環境使用 Docker,可以嘗試看看。
在 Smartnews 我則是使用 Amazon Elastic Container Service 來開發一些資料科學家們會用到的分析工具,如大家的好朋友 Jupyter Hub、Airbnb 開發的 BI 工具 Superset。之後有機會會另外撰文分享經驗。
Try It Yourself
Docker 讓我們可以快速重現其他人的分析環境或者是有趣的 application。如果你想馬上感受 Docker 的威力,可以看看我之前利用 Tensorflow 以及 Flask 實作的一個貓咪圖片辨識的 Github repo(feat. CNC):
Cat Recognizer :利用 Tensorflow, Flask 實作 App 並使用 Docker 快速與他人分享成果
雖然 Github repo 上也有教學指南,想要最快速地在你的電腦上使用這個 App 的話,下載 Docker 並開啟 Daemon 後,使用命令列輸入以下指令:
接著在瀏覽器輸入 localhost:1234 應該就能開始使用了。如果你有興趣了解如何一步步打造這樣的 ML 應用,可以留言跟我說。
不過現在讓我們繼續看剩下的 2 篇好文章:)
The Democratization of Data Science
哈佛商業評論(Harvard Business Review, HBR)在這篇文章裏頭敘述為何不只是針對資料科學家,提升所有人的「資料素養」對一個企業來說是一件非常重要的事情。
最明顯的優點是可以讓數據團隊專注在:
解決更高層次的企業問題
建立分析工具以加速所有部門的數據分析
而不是處理每個部門的「資料瑣事」。
這個議題並非只跟企業的管理階層相關。對一個資料科學家來說,想辦法利用資料工程(Data Engineering)等方式來自動化如「建立簡單儀表板」的工作,並教導各個部門實際的使用方式,可以讓你一勞永逸,避免永遠在處理非常瑣碎的「資料瑣事」,專著在更大的目標。
你不會因為自己不是會計師就不遵守專案預算;
你也不會因為不是資料科學家就不提升資料素養。
Data’s day of reckoning
生活在數據驅動時代的我們或許都能感受到世界變化的快速。
美國前首席資料科學家 DJ Patil 認為不管是資料科學、機器學習還是人工智慧領域,「道德倫理」以及「安全隱私」議題都應該越來越被重視。
電腦科學(Computer Science)時代最著名的安全議題非 SQL 注入(SQL Injection)莫屬了。如同這個議題,在數據驅動時代,我們也會面臨類似道德以及數據保護的議題,像是人工智慧模型產生具有偏見的預測、以及最近的 GDPR 等等。
在教育方面,DJ Patil 認為我們應該教育下一代在數據處理時,應該遵守的準則並將其被納入課綱;以數據驅動的公司則需要將這些想法都納入企業文化,在招聘資料科學家的時候,除了考慮他 / 她的分析能力以外,也要評估道德倫理的部分。
身為一個資料科學家,除了技術層面的提升,也應該稍微了解這些議題。
We can build a future we want to live in, or we can build a nightmare. The choice is up to us
結語
Pandas、SQL、Docker、資料素養的培養以及數據時代的道德倫理問題等等,這週我們也看了不少資料科學相關的文章,希望你有從這篇文章裡頭學到點東西。
雖然因為篇幅關係沒辦法把所有實際的 Python 指令列在這邊,我希望透過摘要的方式能讓沒時間的你也能學習、初步地了解資料科學並進一步發現自己有興趣的地方鑽研。
有時間的話我推薦實際閱讀這些文章(當然也可以閱讀其他你自己收藏的文章,也歡迎分享),也可以試試看我寫的 Cat Recognizer 並留言跟我說說你的想法。
之後一樣會定期更新,希望收到第一手消息的話可以點擊原部落格的訂閱。另外如果你有其他會對這篇文章有興趣的朋友,也請幫忙分享給他 / 她:)
That’s it for this week, stay tuned and happy data science!
Post Tags 文摘 Pandas SQL Docker 資料科學
Originally published at leemengtaiwan.github.io.
|
資料科學文摘 Vol.3
| 0
|
資料科學文摘-vol-3-18fa82d6767f
|
2018-08-10
|
2018-08-10 12:53:30
|
https://medium.com/s/story/資料科學文摘-vol-3-18fa82d6767f
| false
| 476
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
李孟(Lee Meng)
|
資料科學家,現居東京。軟體工程出身,最近努力學習資料科學並加以應用。 假日在東京市區騎著腳踏車冒險、尋找美食。懶得動的時候就待在咖啡廳寫寫程式
|
b7c3862d5333
|
leemeng
| 36
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
a42f6d7c7c06
|
2018-08-25
|
2018-08-25 07:30:00
|
2018-08-28
|
2018-08-28 17:35:45
| 10
| false
|
ko
|
2018-10-16
|
2018-10-16 09:28:06
| 0
|
18fcc8461087
| 5.656
| 1
| 0
| 0
|
최근 AI + 블록체인 프로젝트가 많이 나오는걸로 알고있습니다 그중에 최근 빗썸에 상장한 Cortex 프로젝트가 괜찮게 보여서 간단한 리뷰를 작성했습니다.
| 5
|
코인 리서치- Cortex
최근 AI + 블록체인 프로젝트가 많이 나오는걸로 알고있습니다 그중에 최근 빗썸에 상장한 Cortex 프로젝트가 괜찮게 보여서 간단한 리뷰를 작성했습니다.
Cortex
등급:B
프로젝트 소개:
Cortex의 주요 임무는 사용자가 Cortex 블록 체인에서 스마트 계약을 사용하여 추론 할 수 있는 블록 체인 최첨단 기계 학습 모델을 제공하는 것입니다. Cortex의 목표 중 하나는 사용자가 플랫폼에서 작업을 게시하고 AI Dapps (인공 지능 분산 응용 프로그램)을 제출할 수 있는 기계 학습 플랫폼을 구현하는 것입니다.
블록 체인은 인공지능 컨센서스를 계약에 사용할 수 있게 하려고 합니다. 전통적인 EVM은 전체 노드에 대해 CPU에서 계산 부하가 유발 되므로 복잡한 추론 계산을 통해 전체 노드에서 스마트계약을 실행할 수 없습니다. 인공지능으로 컴퓨팅하는 응용 프로그램은 기존의 스마트계약에서 개발할 수 없습니다. Cortex Virtual Machine (CVM)은 GPU / FPGA에서 실행되는 추론 명령어를 가능하게 하며, 이러한 추론 명령어 세트를 사용하는 계약 수수료는 GPU / FPGA 용으로 설계되었습니다.
채굴 합의 알고리즘: Cuckoo Cycle POW
CPU 채굴 연산력은 GPU의 비해 많이 안 좋습니다 추가적으로 AI + 스마트 계약과 연관 되어있기 때문에 무조건 연산력이 높은 GPU 채굴을 추천하는데. 제단쪽은 이미 CPU/GPU 연산력 성능 문제 때문에 Cuckoo Cycle 방식의 POW를 사용합니다.따라서 CPU/GPU의 성능격차를 줄일수 있습니다. (마치 지금 모네로 채굴방식 크립토 나이트 알고리즘와 같이 각 채굴기 성능 격차를 줄일수 있습니다)추후 모든 모바일(스마트폰)의 GPU로 채굴할수있게끔 만든 알고리즘입니다. Cortex는1채굴기 1표를 위해 모든 채굴자에게 동등한 위치를 추구합니다. 이런부분은 어느 정도의 거버넌스를 많이 생각한것같습니다.
코인 인플레이션:
채굴 알고리즘이면 코인 인플레이션은 무조건 확인해야 하는 목록입니다. Cortex도 비트코인과 마찬가지 4년마다 블록 반감기를 통해서 물량 조절할 수 있고 추후 코인 채굴 끝나면 엔도르핀으로 채굴 보상 예정인 것 같습니다. 그중에 토큰 발행량 절반이 채굴 보상으로 배포됩니다 (1억 5천개).
토큰 매트릭스:
토큰 심볼:CTXC
토큰 타입: ERC-20
총 발행량: 299,792,458
세일된 토큰량: 149,896,229 (50%)
퍼블릭 ICO 토큰 가격:0.58USD/0.00066667ETH
ICO 수익률:
현재 시장 상황에서 타 유망 프로젝트들 거의 수익률이 0.5x 이지만,Cortex 이정도 가격방어면 꽤나 훌륭한 프로젝트 입니다.
토큰 이코노미:
엔도르핀은 이더리움 gas와 비슷한 알고리즘 이라고 생각하시면 됩니다 예를 들면 스마트 계약 이용 비용 혹은 CTXC 전송할 때 수수료등 이런비용 입니다.추가적으로 Cortex에서케글( Kaggle) 개발 및 분석 대회 플랫폼을 도입하여 개발자에게 인센티브 장려를 CTXC로 지급됩니다.
Cortex 생태계
Cortex핵심 팀원 :
CEO Ziqi Chen, CTO Weiyang Wang 및 수석과학자 Jia Tian 는 중국명문대학 Tsinghua University에서 졸업했습니다.특히 CTO는 졸업 후 석사 학위를 받기 위해 미국에 갔으며 기계 학습 및 인공 지능 이론에 능하며 Kaggle 대회에서 은메달을 수상했을뿐만 아니라 IBM Hyperledger 블록 체인 프레임 워크 디자인 을 기반으로 자산 증권화를 개발했습니다. 그리고 2017 년 Dorahack hackathon 대회에서 우승했습니다.
Cortex 투자부문:
Investor
투자 부문 보시면 비트메인﹑FBG﹑젠펀드﹑노드﹑OK﹑후오비등 많은 유명 VC들이 참여했고, 그중에 비트메인이 추후 Cortex를 위해 어떤 채굴기를 제공해줄지 궁금합니다.
상장된 거래소:
Exchanges
중국팀이다보니 거의다 중국계 거래소 (OKEX﹑Huobi﹑CoinEx등)가 상장되었습니다. 한국 거래소 유일한 상장한 곳은 빗썸입니다.
Cortex 거래 볼륨 탑10거래소
지금 거래량이 제일 많은 거래소가 빗썸입니다 그리고 현재까지 원화 투자 가능한 곳은 빗썸 밖에 없습니다 탑10 거래 볼륨을 보자면 골고루 흩어져 있습니다.
Cortex 토큰 소유 비례:
Cortex Top 100 Token Holders
1등 토큰 보유분은 추후 채굴 보상으로 락업된상태이고 100등까지 토큰 홀더를 보면 거의다 거래소/VC기관들 지갑 입니다.
깃허브 업데이트 횟수:
타 플랫폼 코인에 비해 커밋수가 엄청 적고 업데이트도 자주 안합니다. 다들 깃허브를 자주 안보시는데 깃허브 업데이트 횟수는 무조건 보셔야 하는 항목입니다. 깃허브는 곧 프로젝트의 개발지표 입니다.
결론:
여태 나온 AI 프로젝트 중 Cortex가 제일 유망하다고 생각합니다. AI 계의 이더리움이라고 생각하시면 됩니다 .기술 개발 부분은 칭화 대학교에서 나온 CTO 그리고 어드바이저가 AI 방면의 다재다능한 지식이 있고 하이프도 엄청 탄탄합니다.하지만 아직 채굴에 대한 테스트넷이 발표가 안되서 채굴 알고리즘을 통해 CPU/GPU 성능차이 추가적으로 EVM(이더리움 가상 머신)의 한계점을 극복할 수 있는지가 관건입니다. 메인넷은 2019년 2분기 출시되서 이 프로젝트는 장기적으로 봐야 될 것 같습니다.
종합적으로 등급을 B로 평가하겠습니다.
추후 점수 부분은 테스트넷이 발표되고 결과 나오면 조정할 예졍입니다.
위험감수:프로젝트가 나아가야할 영역은 아직 미개발영역이고, 실생활의 응용하기 까지 아직 한참 멀었으며, 프로젝트의 진행도가 느릴 위험이 있습니다.
|
코인 리서치- Cortex
| 1
|
프로젝트-리뷰를-하는-남자-cortex-18fcc8461087
|
2018-10-16
|
2018-10-16 09:28:06
|
https://medium.com/s/story/프로젝트-리뷰를-하는-남자-cortex-18fcc8461087
| false
| 577
|
We are two blockchain enthusiasts, striving to understand and educate deeply what it is all about to anyone out there. We also do consultancy services for projects and people thinking about starting projects
| null |
LandingBlock
| null |
Landing Block
|
hello@landingblock.com
|
landing-block
|
CRYPTO,TOKEN ECONOMY,CRYPTOCURRENCY,EDUCATION TECHNOLOGY,ASIA
|
LandingBlock
|
Cortex
|
cortex
|
Cortex
| 38
|
Mike Mu
|
Landing Block Co-Foudenr Reserch tokeneconomic
|
653afc5ba0b
|
whdqkf88
| 25
| 67
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
1a6df3dd4b9b
|
2018-02-26
|
2018-02-26 18:56:32
|
2018-02-26
|
2018-02-26 18:56:57
| 1
| false
|
en
|
2018-02-26
|
2018-02-26 23:30:20
| 4
|
18ffb55e7156
| 2.384906
| 2
| 0
| 0
|
A doctor who used big data to not only detect but beat cancer — her own.
| 5
|
How to make Health.AI mainstream?
A doctor who used big data to not only detect but beat cancer — her own.
FDA approving a company that detects whether you are having a stroke.
Predicting heart disease by looking at your eye health.
It seems like everyday we hear about news articles like this. But while there has been explosive progress in using AI to solve big health problems, we are still very far from mainstream adoption. Here are the three key things that need to happen.
1) Data Collectability
Arguably the most obvious tenet — you can’t generate meaningful insights if you don’t have enough data. Ubiquitous phones with biometric sensors can provide some data, but it’s limited essentially to heart rate, steps and maybe temperature and sleep. How do you know really important biometrics encoded in your breathing or blood? To what extent are you willing to go to have this data to be measured and analyzed? Are chips embedded in your body the solution? We are moving progressively to a world where everything can be measured and monitored but we will have to balance them against privacy, safety, and ultimately even address the question of what makes us human.
2) Retrospective Repeatability
Many studies the media is quoting now are retrospective — you have diabetes, well let’s see now if we can use your data to predict whether someone else today will have diabetes. There could have been many confounding factors, say people with diabetes were taking medications causing the body to have a certain response, so what you are seeing is correlation rather than causation. For a true prediction you need to do double-blind prospective studies ie here are two sets of people without diabetes, I predict the first set will get diabetes and another won’t. And these studies need to be longitudinal ie follow someone for years. Not to mention that an algorithm will probably be judged much harsher than a human for false positives and false negatives. And finally these study results need to be repeated many times by different groups around the world to gain acceptance in the scientific / medical community.
3) Explainable Predictability
Medicine has to make very sure about the risks and that’s not a bad thing. Take genetic therapy just a decade ago, there were deaths that could have been prevented and you can argue it actually set back the whole industry.
I find the biggest challenge of big data is it is like a black box we know there is prediction but we can’t explain it. Getting doctors to accept advice without understanding that explanation is upending what modern medicine and in fact the scientific method has been preaching in the last 500 years. Until (If?) we get there, the very likely way that Health.AI will develop is to highlight issues — if it detects an anomaly or a risk, it will encourage you to seek a doctor. Which is worth in itself, as any provider and payor will tell you that can save individuals their life and society billions of dollars.
These are purposely short articles focused on practical insights (I call it gl;dr — good length; did read). I would be stoked if they get people interested enough in a topic to explore in further depth. I work for Samsung’s innovation unit called NEXT, focused on early-stage venture investments in software and services in deep tech, and all opinions expressed here are my own.
|
How to make Health.AI mainstream?
| 2
|
a-doctor-who-used-big-data-to-not-only-detect-but-beat-cancer-her-own-18ffb55e7156
|
2018-03-01
|
2018-03-01 01:13:24
|
https://medium.com/s/story/a-doctor-who-used-big-data-to-not-only-detect-but-beat-cancer-her-own-18ffb55e7156
| false
| 579
|
The Life and Tactics of Founders in Israel.
| null |
F2VCapital
| null |
Power to Founders
|
matan@f2vc.com
|
f2-capital
|
VENTURE CAPITAL,ISRAELI STARTUPS,ENTREPRENEURSHIP,STARTUP,STARTUP LESSONS
|
TheJunction32
|
Health
|
health
|
Health
| 212,280
|
Amit Garg
|
Venture Capitalist; based in Silicon Valley since 1999
|
2277fc030751
|
amitgarg
| 987
| 540
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-22
|
2018-06-22 20:03:41
|
2018-06-21
|
2018-06-21 09:51:07
| 1
| false
|
en
|
2018-06-22
|
2018-06-22 20:04:46
| 3
|
190086fdb5ff
| 2.577358
| 0
| 0
| 0
|
IT and business leaders are doing more than just talking about digital transformation. Many are making huge investments in related…
| 5
|
Worldwide Spending on Digital Transformation Soars in 2018
IT and business leaders are doing more than just talking about digital transformation. Many are making huge investments in related technologies this year, according to a new report from International Data Corp. (IDC).
The research firm predicts that worldwide spending on the technologies and services that enable the digital transformation of business practices, products and organizations will total more than $1.1 trillion in 2018, an increase of 17% over the $958 billion spent in 2017. The report analyzes current and future spending levels for more than 130 transformation use cases across 19 industries in eight geographic regions.
Digital transformation involves revamping business strategies, models, operations and processes. Increasingly, this is being done by adopting digital technologies such as cloud services, mobile devices and apps, big data/analytics, artificial intelligence, social media, enterprise applications, the Internet of Things (IoT) and other digital tools.
Among the key goals of transformation are to increase agility, efficiency and scalability — all of which can make companies much more competitive. By moving toward more digital operations, companies can more easily interact with their supply chain partners that are undergoing similar shifts.
Of the various industries IDC examined, discrete and process manufacturing lead the way in digital transformation spending. Companies in these sectors are forecasted to spend $333 billion combined on related products and services in 2018, accounting for nearly 30% of all digital transformation spending worldwide this year. These businesses will also set the agenda for many transformation priorities, programs and use cases, the firm says.
In terms of specific types of technology solutions, the biggest categories of spending will be applications, connectivity services and IT services. It’s somewhat surprising that among the strategic priority areas with lower levels of spending this year are building cognitive capabilities, data-driven services and benefits, operationalizing data and digital trust and stewardship.
These are all areas that have garnered lots of attention among IT leaders, who are looking to leverage the latest data management and predictive analytics tools to help their organizations gain a competitive edge. IDC says the findings suggest that many organizations are still in the early stages of their digital transformation journey.
As they move into more mature stages of development, IDC expects to see these priorities and spending, shift toward the use of digital information to further improve operations and to create new products and services.
To achieve its digital transformation strategic priorities, every organization will develop programs that represent a long-term plan of action toward these goals, the report says. The transformation programs that will receive the most funding in 2018 are digital supply chain and logistics automation ($93 billion) and smart asset management ($91 billion), followed by predictive grid and manufacturing operations (each more than $40 billion).
Each strategic priority includes a number of programs that are comprised of use cases, IDC says, and these are discretely funded efforts that support a program objective and the overall strategic goals of an organization. Use cases can be thought of as specific projects that use line-of-business and IT resources, including hardware, software and IT services.
The use cases that will receive the most funding this year include freight management, robotic manufacturing, asset instrumentation and autonomic operations.
Other recent research indicates that digital transformation is a top priority for organizations. Gartner Inc.’s 2018 CIO Agenda Industry Insights report released earlier this year shows that all industries in the study rank digital business as one of their top 10 business objectives.
The Gartner report is based on survey of 3,160 CIOs from 98 countries and 15 main industries. In 11 of the 15 industries participating, CIOs ranked digital business/digital transformation among the top three business priorities for 2018.
Originally published at www.jamasoftware.com on June 21, 2018.
|
Worldwide Spending on Digital Transformation Soars in 2018
| 0
|
worldwide-spending-on-digital-transformation-soars-in-2018-190086fdb5ff
|
2018-06-22
|
2018-06-22 20:04:46
|
https://medium.com/s/story/worldwide-spending-on-digital-transformation-soars-in-2018-190086fdb5ff
| false
| 630
| null | null | null | null | null | null | null | null | null |
Digital Transformation
|
digital-transformation
|
Digital Transformation
| 13,217
|
Jama Software
|
Jama Software transforms product development for companies creating complex products and mission-critical software systems. Better. Faster. Breakthrough.
|
b7f146a222e0
|
jamasoftware
| 348
| 469
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
81d081925a61
|
2018-02-10
|
2018-02-10 09:48:55
|
2018-02-10
|
2018-02-10 10:26:51
| 1
| false
|
en
|
2018-10-17
|
2018-10-17 11:43:12
| 19
|
1901c07a0055
| 3.981132
| 1
| 0
| 0
|
Roland Pihlakas, January 2018 — March 2018
| 5
|
Nomenclature of AI control problem. Reasonable AI.
Roland Pihlakas, January 2018 — March 2018
Pembrokeshire Coast Path, Newport, United Kingdom.
Steve Bittinger, https://unsplash.com/photos/Q4nToRISnIY
Publicly editable Google Doc with this text is available here for cases where you want to easily see the updates (using history), or ask questions, to comment, or to add suggestions.
The Wright brothers were first to fly because they developed a system of control that depended on feedback.
Everyone else was trying to build stable planes.
The Wright brothers built an unstable plane but developed a control system [that stabilised the plane].
(YouTube: Norbert Wiener — Wiener Today (1981))
There have been various terms in use for referring to safe AI:
AI safety
AI control
AI value alignment
Corrigible AI
Friendly AI
Benevolent AI
AI security
Accountability, transparency, and responsibility of AI
Probably any other names? Please leave a comment!
Analysis and a proposal.
I would like to analyse some of the names in use and propose one additional name due to the associated meanings.
Ponder about this: In case of humans, we may say “a reasonable human being” — which probably indicates, among other properties, certain social competence, and openness for honest feedback or perhaps even actively seeking it out. That is, someone who can be reasoned with, and who seeks out or at least cares about the reasoning of others. Someone who expects to be mistaken or underinformed, and even expects to be (at least partially) unwillingly evil, from time to time.
It is remarkable that this is also what cybernetics is about — constant feedback loops and social construction of evaluations.
This concept of “reasonable AI” looks related to the concept of “corrigibility”, but I think there is more to the former.
We are less frequently using terms “aligned human being”, or “safe human being”. Also neither of these two terms hint at bidirectional feedback.
“Friendly human being” seems to be a more vague term — one can be outwardly friendly and actually evil at the same time, often even unwillingly and without actively knowing it, for various kinds of systematically occurring reasons. See addendum for some of the explanations.
I would like to read possible explanations to the question — why is there such a difference of term usage in case of AI, as compared to describing humans, and how might it change in the future?
Addendum.
See also couple of my other essays which explain additional background about why I think it is important for the AI to be modest and reasonable according to the definition above:
Essay about a phenomenon I called self-deception, which arises from a fundamental computational limitation of both biological and artificial minds due to fundamental limits to attention-like processes and which can be observed on any capability level.
Essay about why the frameworks of AI goal structures should try to avoid maximising the utility and what should they aim for instead — Making AI less dangerous: Using homeostasis-based goal structures.
More detailed formula and analysis developed based on the above linked post: Diminishing returns and conjunctive goals: Mitigating Goodhart’s law. Towards corrigibility and interruptibility.
See also:
AI “safety” vs “control” vs “alignment”, Paul Christiano
The Orthogonality Thesis, Robert Miles
Paul Pangaro — Cybernetics
A toy model of the treacherous turn — LessWrong.
Excerpt from the text:
“notice something interesting: The more precautions are taken, the harder it is for [a reinforcement-based agent] to misbehave, but the worse the consequences of misbehaving are.”
And:
“while weak, an AI behaves cooperatively. When the AI is strong enough to be unstoppable it pursues its own values.”
— For me it means that instead of only looking for many precautions we should also strive for a cybernetic / conversational / feedback-based approach.
If I understand correctly, the most depressing thing about the Adversarial Goodhart case is that unlike the name says, the agents who turn bad are not necessarily “adversarial” or malignant to begin with. But because of the law, they still become dangerous when they are put under too much precautionary control.
A funny case of Goodhart’s Law / Adversarial Goodhart in action with dolphins:
https://www.theguardian.com/science/2003/jul/03/research.science
“dolphins at the institute are trained to hold onto any litter that falls into their pools until they see a trainer, when they can trade the litter for fish. In this way, the dolphins help to keep their pools clean.
Kelly has taken this task one step further. When people drop paper into the water she hides it under a rock at the bottom of the pool. The next time a trainer passes, she goes down to the rock and tears off a piece of paper to give to the trainer. After a fish reward, she goes back down, tears off another piece of paper, gets another fish, and so on. This behaviour is interesting because it shows that Kelly has a sense of the future and delays gratification. She has realised that a big piece of paper gets the same reward as a small piece and so delivers only small pieces to keep the extra food coming. She has, in effect, trained the humans.
Her cunning has not stopped there. One day, when a gull flew into her pool, she grabbed it, waited for the trainers and then gave it to them. It was a large bird and so the trainers gave her lots of fish. This seemed to give Kelly a new idea. The next time she was fed, instead of eating the last fish, she took it to the bottom of the pool and hid it under the rock where she had been hiding the paper. When no trainers were present, she brought the fish to the surface and used it to lure the gulls, which she would catch to get even more fish.”
Thanks for reading! If you liked this post, clap to your heart’s content and follow me on Medium. Do leave a response and please tell me how I can improve.
Connect with me —
Skype | Facebook | LinkedIn | E-mail
|
Nomenclature of AI control problem. Reasonable AI.
| 28
|
nomenclature-of-ai-control-problem-1901c07a0055
|
2018-10-17
|
2018-10-17 11:43:12
|
https://medium.com/s/story/nomenclature-of-ai-control-problem-1901c07a0055
| false
| 1,002
|
Topics about the "Three Laws of Robotics", AI related problems, and proposed solutions.
| null | null | null |
Three Laws
|
roland@simplify.ee
|
threelaws
|
ARTIFICIAL INTELLIGENCE,SAFETY,IGNORANCE,AUGMENTED INTELLIGENCE,SIDE EFFECTS
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Roland Pihlakas
|
I studied psychology, have 14 years of experience in modelling natural intelligence and in designing various AI algorithms. My CV: http://bit.ly/rppro25028
|
bc6310622bcf
|
levitation
| 25
| 80
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
c4e335427ad5
|
2018-04-20
|
2018-04-20 03:53:35
|
2018-04-20
|
2018-04-20 03:55:13
| 1
| false
|
en
|
2018-04-20
|
2018-04-20 05:57:16
| 3
|
19049fba591a
| 1.664151
| 1
| 0
| 0
|
What got us here today, won’t get us there tomorrow…
| 4
|
The Future is Digital and Regenerative
Image: UChicago New with copyright Getty Images: “Higher-order interactions better replicate natural biodiversity and resilience”
What got us here today, won’t get us there tomorrow…
“Within an incredibly short time and space, the Internet is becoming dramatically mobile, social, video and artificially intelligent — encompassing everyone and everything, becoming ubiquitous as water, air or electricity. In this context, it is crucial to embrace yet to humanise and transcend technology; to hone in on what makes us truly human rather than a ‘better machine’: things such as imagination and intuition, creativity, problem-solving, complex understanding, improvisation, emotions and plasticity. We must therefore let go of the ‘machine-work’ — and most its traditional metrics, KPIs and success measurements — to truly reinvent what our jobs, work and ‘professional services’ can mean in the future.” Gerd Leonhard
Deep technologies are developing fast while our understanding of human potential is not evolving at the same pace as we race towards exponential change.
As blockchain, smart contracts, robotics, artificial intelligence, machine learning, FinTech, VR/AR, nanotechnology, genetic engineering and the IoT revolutionize our world, we need to revolutionize our human practices to co-evolve with the digital transformation towards a future of sustained vitality.
Legacy human resources, leadership and organizational development models and practices were created for a mechanistic, silo’d world: centralized, controlled from the top, layered with many (sometimes) conflicting and 20th century philosophies and ideas about human psychology, motivation and potential. In the VUCA world these are frequently expensive and difficult to implement, and often require dependency on a few experts who come with industrial strengths best practice accreditations, making it almost impossible to be agile, creative, scale or change.
Merely focusing in disrupting people practices using new technology such as cognitive systems and decentralized technologies, will simply extend, accelerate and embed industrial operating systems, obsolete paradigms and extractive practices that currently harm other life forms in our quest for efficiency, productivity and resources.
This will re-create the problems that brought us here- just much faster and more efficiently — imprinting their continuum further into the future.
Despite the promise of singularity, it could prevent humans from realizing their potential to co-create a more positive and hope inspiring future.
We need to rewrite the technologies of human development if we want a future that is not just digital, but also regenerative.
#blockchainleadership #leadership-web3.0 . #token-economy #hackHR
|
The Future is Digital and Regenerative
| 1
|
the-future-is-digital-and-regenerative-19049fba591a
|
2018-05-09
|
2018-05-09 13:30:53
|
https://medium.com/s/story/the-future-is-digital-and-regenerative-19049fba591a
| false
| 388
|
Unlocking the Human Power of the Token and Blockchain Economy
| null |
leadershipfutures
| null |
Leadership Web 3.0 Best Practices
|
mari@digitalfutures.co
|
distributed-economy
|
REGENERATIVE ECONOMY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,CRYPTOECONOMICS,WEB3
|
MaRiEagar
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
MaRi Eagar
|
Web 3.o Leadership and Organizational Development. Personal & Cultural Transformations. Cultural Creative. Strategic Ecosystems Builder. Positive Maverick.
|
da98760ebfa9
|
strengthsbuilde
| 273
| 799
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
1d0e56b58778
|
2018-08-23
|
2018-08-23 17:29:24
|
2018-08-23
|
2018-08-23 18:23:13
| 2
| false
|
en
|
2018-09-11
|
2018-09-11 04:01:45
| 0
|
190530c9ea6d
| 2.809748
| 7
| 0
| 0
|
We have previously explained the current state of the Artificial Intelligence industry and some of the biggest bottlenecks it’s facing…
| 5
|
Introducing TuringNet — World’s First Open and Trustable Platform for AI
We have previously explained the current state of the Artificial Intelligence industry and some of the biggest bottlenecks it’s facing, such as centralized AI model ownership, opaque and duplicated model processing efforts, biased and incomplete data sets, insecure data storage and concerns over data exfiltration, last but not least expensive but much wasted computing processing power.
How does TuringNet plan to reshape AI?
TuringNet was born to enable the advancement of AI models and industry. We are creating world’s first open and trustable AI platform with:
100% accessible AI algorithms and decentralized models (DModels) for commercial usage
Completely AI verifiable and traceable model training/inferencing with transparent goals
Total data ownership and transparent data refinery process
Fair share of rewards to the general public based on participants contributions
This is a revolutionary approach to how the public participates in scalable and collaborative model training and prediction. It will significantly reduce duplicated model training and fully utilizes idle computing resources in a collective manner, at the same ensuring data security.
Our tailored solution specifically for AI
This is achieved by several novel designs on TuringNet’s platform. The new incentivizing Learning Byzantine Fault Tolerance (LBFT) consensus mechanism is world’s first tailor-made consensus for and fully compatible with various AI model training needs. In the full lifecycle of DModels on TuringNet’s platform, there will be a verifier and a confirmer to validate the results after each iteration of training or inferencing, LBFT consensus mechanism then allows them to pass and store the information in order to release a fair amount of rewards to all participants who helped to advance DModels.
To ensure extensibility and scalability of the network, TuringNet is implementing the next generation multi-layered architecture with a mainchain plus multiple subchains. This architecture will empower the network to handle 10,000 transactions per second.
Another novel part of our solution is TuringNet’s Graph Virtual Machine (GVM). The GVM adopts graphs as deep learning models, and allows participants on the network to migrate TensorFlow models, as well as other models from mainstream frameworks, into TuringNet’s platform easily, so that constructions of these AI models will not need to start from scratch.
So how are we incentivizing and making sure participants are happy in our ecosystem?
TuringNet offers a sole cryptocurrency, TNET Token, as a functional utility token that can be consumed in various ways onTuringNet’s platform. Instead of collecting funds through the traditional ICO (Initial Coin Offering) model, where a significant portion of the tokens are pre-mined, then distributed to investors and community, we are taking a bold new approach. Not saying the ICO model doesn’t work well, but the true value of tokens in our ecosystem, should play a critical role to incentivize real users who help to grow our ecosystem in the long term. We want to ensure the massive public receives maximized benefits on our self-governing platform.
Based on all these beliefs, we are introducing a new model — Community Mining Offering (CMO). In this CMO model, users can obtain TNET Tokens through accomplishing a number of simple mining tasks on TuringNet’s platform operated by smart contract. This way, legal and financial risks are both mitigated as users won’t have to put in real money to purchase these tokens, instead they get rewards by providing valuable of work on our platform. In the earlier stage, our aim is to lower the bar for the public to participate and start mining through some light tasks. We are exploring and developing a few different ways that will allow the public to enter our platform and start mining with minimum learning curves.
Our solution will foster a frictionless system and maximize benefits and values to all ecosystem partners and participants in the long term. Please stay tuned for more updates down the road.
|
Introducing TuringNet — World’s First Open and Trustable Platform for AI
| 15
|
introducing-turingnet-worlds-first-open-and-trustable-platform-for-ai-190530c9ea6d
|
2018-09-11
|
2018-09-11 04:01:45
|
https://medium.com/s/story/introducing-turingnet-worlds-first-open-and-trustable-platform-for-ai-190530c9ea6d
| false
| 643
|
TuringNet is world's first open and trustable Artificial Intelligence (AI) platform. We aim to use AI to empower a truly self-governing ecosystem to crowdsource the development of AI based on blockchain, and solve the fundamental problem of trust in today’s AI industry.
| null |
turingnet
| null |
TuringNet
|
marketing@turingnet.com
|
turingnet
|
BLOCKCHAIN TECHNOLOGY,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING AI,BLOCKCHAIN STARTUP,BLOCKCHAIN DEVELOPMENT
|
turing_net
|
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Bella Wang
|
Business Strategist, Tech Futurist, Blockchain Enthusiast
|
8b269b8c681c
|
bellawang_98919
| 19
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-24
|
2017-11-24 07:11:33
|
2017-12-13
|
2017-12-13 07:19:35
| 8
| false
|
en
|
2017-12-13
|
2017-12-13 07:19:35
| 1
|
19071c57a99c
| 4.574843
| 0
| 0
| 0
|
Natural language generation (NLG) ) is the natural language processing task of generating natural language from a machine representation…
| 5
|
Natural Language Generation:
Natural language generation (NLG) ) is the natural language processing task of generating natural language from a machine representation system such as a knowledge base or a logical form.Psycho linguists prefer the term language production when such formal representations are interpreted as models for mental representations.
It could be said an NLG system is like a translator that converts data into a natural language representation. However, the methods to produce the final language are different from those of a compiler due to the inherent expressivity of natural languages. NLG has existed for a long time but commercial NLG technology has only recently become widely available.
Natural Language Generation (NLG), a subfield of artificial intelligence which produces language as output on the basis of data input,is not a new concept. What is new, however, is the increase in adoption of NLG into the enterprise. There are a plethora of ways the technology is being employed, primarily to improve human productivity, customer engagement and operational efficiency.
Natural Language Generation has long lived in the shadow of Natural Language Processing. Which has received far more investment and attention; indeed the two are often wrongly used interchangeably and misunderstood by those in business. Natural Language Generation is slowly beginning to make a name for itself in its own right. There are now a number of startups operating in the space making it big, such as Narrative Science, and Automated Insights.
What Is Goal of NLG?
The Main goal of natural language generation systems is to figure out how to best communicate what a system knows. The trick is figuring out exactly what the system is to say and how it should say it. Unlike Natural Language Understanding, NLG systems start with a well‐controlled and unambiguous picture of the world rather than arbitrary pieces of text.
Simple NLG systems can take the ideas they are given and transform them into language. The simple mapping of ideas to sentences is adequate for these environments.
Why NLG is different from NLP?
Whereas NLP is focused on deriving analytic insights from textual data, NLG is used to synthesize textual content by combining analytic output with contextualized narratives.
In simple language, NLP reads while NLG writes. NLP systems look at language and figure out what ideas are being communicated. NLG systems start with a set of ideas locked in data and turn them into language that, in turn, communicates them.
Uses of Natural language Generation in different Segments:
1)Dashboard and Natural Language Generation in Finance:
We’ve all seen the photos of the brokers with three monitors in front of them with 30 different graphs. While these graphs may work for some highly-trained professionals, the truth is that a picture isn’t worth a thousand words. Graphics show changes in data but they don’t explain why changes occurred, they don’t compare, and they don’t advise. Dashboards and graphics really serve as a red flag, telling the user where to look, but then they need narrative to explain the analysis to them. NLG has several solutions to this problem, both through plug-and-play options as well as platforms that allow users to build bespoke reports.
2)Written Reports in Real Time:
We are overwhelmed with reports. Equity reports, stock analysis reports, quarterly earnings reports, stress test reports, the list goes on and on. The goal of a written report is to explain data and analysis to a lay person who isn’t a data scientist. The problem is there is too much data, too few analysts, and too little time. Despite this, companies are still trying to keep up writing these manually, a plan which is untenable.
3)Natural Language Generation in Finance News:
News in financial services doesn’t necessarily mean newspaper, magazines, or public content. On the contrary, news in financial services is often more personalized — often less “news” and more analysis. What do the market changes mean for my portfolio? What do quarterly earnings reports mean for specific funds? The point is that market data is shared in real time. We are barraged with numerical reports from different agencies. So, how do we distill all this data in real time into something understandable? NLG in finance isn’t just about creating AP news stories.
What are the different variations of NLG?
· Basic: Basic NLG automatically translates data into text via Excel-like functions. An example of this would be a mail merge that restates numbers into language.
· Template: Here, the user is responsible for writing templates, determining how to join ideas and interpreting the output. Essentially sentence building, it relies on business rules, basic calculations (ex: sum) and templates with boilerplate text to automate content. Template systems are limited in their ability to draw from multiple data sources, perform advanced analytics, achieve reusability from one project to the next and explain how it came to the story it created, with no understanding of what the user is trying to communicate or their particular domain.
· Advanced: Advanced NLG communicates the way humans do — infusing intelligence and intent into the process from the very beginning. It assesses the data to identify what is important and interesting to a specific audience, then automatically transforms those insights into Intelligent Narratives — insightful communications packed with audience-relevant information, written in conversational language. Backed by a knowledge base, Advanced NLG systems understand the domain and can write contextually about a user’s business at a scale that is not possible by human.
|
Natural Language Generation:
| 0
|
natural-language-generation-19071c57a99c
|
2018-03-20
|
2018-03-20 18:21:47
|
https://medium.com/s/story/natural-language-generation-19071c57a99c
| false
| 912
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
ICARUS Solution
| null |
d0cfcc6fb412
|
icarus.solution
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-11
|
2018-09-11 23:54:35
|
2018-09-11
|
2018-09-11 22:40:10
| 1
| false
|
en
|
2018-09-12
|
2018-09-12 13:20:02
| 5
|
1907d4e3726a
| 1.796226
| 0
| 0
| 0
|
Taking my turn (and the hint) for a public coding challenge.
| 5
|
My #100DaysOfCode: A Commitment to Fluency
Every failure is a learning opportunity. After a lengthy interview process, including a 5 hour remote “on-site” session, a potential dream job (Machine Learning Advocate) decided not to move me forward. Though they expressed confidence in my data ability and felt that I would be a great fit for the team, their final decision ultimately came down to a fluency issue. In the take-home, I’d not had this issue because I was refactoring existing code; in the on-site, I’d performed at 1/3rd the speed of the typical applicant — 1/8th the speed of the strongest candidates! Their concern in hiring me was that I might not be able to keep up.
As a former educator (and aspiring conference speaker) in the process of submitting talks on cognitive theory for professional development, I was all-too-familiar with the role fluency plays in developing expertise. While the feedback was devastating to my ego, especially as one of the many programmers struggling through imposter syndrome, it was the type of guidance we self-taught coders crave. They were confident in my conceptual understanding of machine learning! Take that, imposter syndrome.
One problem was that much of my learning and development was done in Spyder (run locally) or Google Cloud Platform. None of the code I wrote got uploaded to GitHub and I diligently deleted my GCP resources after every project to avoid draining my free credits. In addition to this, I had been balancing my full-time job with these passion projects, often coding in hour-long chunks at 4:30am or 8pm, which made it easy to get off-track and scattered. I’ve become fairly well-rounded as a programmer, but I needed to focus on my primary domains in order to strengthen fluency before moving too far forward.
I structured this commitment to fluency around the #100DaysOfCode challenge. People often post about their progress with the #100DaysOfCode challenge, and I’ll engage with that community occasionally. However, the feedback I received tells me that what I need to focus on now is fluency and evidence of that fluency, for those times that I choke on anxiety and need to prove my skill set in other ways. That evidence is hosted on GitHub Project Pages, accessible through https://mutterberg.github.io/ (which currently redirects to https://datadev.me/).
A pessimist sees the difficulty in every opportunity. An optimist sees the opportunity in every difficulty. ~Winston Churchill
View all posts on my professional portfolio site.
Originally published at utterbergdatadev.com on September 11, 2018.
|
My #100DaysOfCode: A Commitment to Fluency
| 0
|
my-100daysofcode-a-commitment-to-fluency-1907d4e3726a
|
2018-09-12
|
2018-09-12 13:20:02
|
https://medium.com/s/story/my-100daysofcode-a-commitment-to-fluency-1907d4e3726a
| false
| 423
| null | null | null | null | null | null | null | null | null |
Programming
|
programming
|
Programming
| 80,554
|
Marissa Utterberg
|
Machine Learning, Web Development, & Community Engineering
|
ee0411f54253
|
mutterberg
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-26
|
2018-07-26 18:20:21
|
2018-07-17
|
2018-07-17 00:00:00
| 1
| false
|
en
|
2018-07-26
|
2018-07-26 18:20:53
| 2
|
190852399a51
| 1.796226
| 0
| 0
| 0
|
While business leaders are quick to adopt some technological innovations, others fill them with a sense of trepidation. Particularly…
| 5
|
Getting Business Leaders To Trust Algorithms
While business leaders are quick to adopt some technological innovations, others fill them with a sense of trepidation. Particularly, applications that rely upon AI (artificial intelligence) technology are viewed with suspicion by many business owners, even when other businesses have successfully implemented similar systems. This may have something to do with fearing a loss of control, because the analytics these applications provide do much of the work.
People often prefer to “do the math” themselves to ensure it’s accurate and correct, which is why there’s so much aversion to AI algorithms. No matter how often the algorithm produces more accurate information at a more efficient rate, its human nature to mistrust the machine’s calculations. When an error does occur, business leaders and managers tend to see this as proof that the machine is not to be trusted.
So, how can we get business leaders to trust algorithms? One common method is by putting some of the power back into their hands. By designing systems that allow the user to alter options and influence the outcomes, they feel they have more control over the machine’s functioning. In turn, this helps them build trust in the device and eliminates concern over the fallibility of the machine. This has been found to be the case, even when the human interference was too insignificant to alter the outcome. As long as they can feel like they have more control, they can place more trust in the machine.
Another option is to develop machines that make it easier to access the information. While developers have created systems that a terrific at gathering information, the devices often don’t deliver that information precisely when it’s needed. The information must be made available through the devices that executives are already using to make AI driven tech more accessible and easier to use. By improving upon the way information id delivered, executives and business owners are more likely to adopt the new systems.
Additionally, the information should be assembled in a way that makes it easier to review, based upon the motives of the user. Determining what the user will be searching for and why he or she needs that information is essential to a more reliable system. It will also need to be accessible through the correct workstations, so productivity can be maximized. When AI driven tech begins to have a positive effect on productivity, business leaders and executives will be more excited about adopting these technologies.
Originally published at chartwestcott.com on July 17, 2018.
|
Getting Business Leaders To Trust Algorithms
| 0
|
getting-business-leaders-to-trust-algorithms-190852399a51
|
2018-07-26
|
2018-07-26 18:20:54
|
https://medium.com/s/story/getting-business-leaders-to-trust-algorithms-190852399a51
| false
| 423
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Chart Westcott
|
Chart Westcott is Co-Founder and COO at Ikarian Capital, LLC a long/short equity biotech focused investment adviser. Read more at http://chartwestcott.net.
|
27c380a347bd
|
chartwestcott
| 19
| 207
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-24
|
2018-07-24 20:32:08
|
2018-07-24
|
2018-07-24 20:34:39
| 2
| false
|
en
|
2018-07-26
|
2018-07-26 14:56:48
| 1
|
1908a199609
| 5.368239
| 11
| 1
| 0
|
Life gets more like Black Mirror every day. The most recent thinkpiece-inspiring digital nightmare, in case you haven’t heard, is…
| 5
|
Rise of the Machines
Life gets more like Black Mirror every day. The most recent thinkpiece-inspiring digital nightmare, in case you haven’t heard, is “deepfakes” — digitally manipulated videos of people that are nearly indiscernible from reality. Deepfakes are one of the most frightening applications of machine learning I have ever heard of. The fact that they are already being used to sexually degrade women is horrifying. Other potential uses, including hyper-realistic propaganda and fake news, could lead to a future where we can’t trust anything we don’t see with our own eyes.
Technologies like deepfakes do not inspire a lot of support for the ongoing Artificial Intelligence revolution. Perhaps this is a good thing: at a moment when technology is advancing at frightful speeds, it is wise to approach new developments with caution. Imagine if humanity had sat down and researched the long-term effects of fossil fuel usage before screwing up the Earth so badly.
It is tempting to shun a new technology before it creates problems for our grandchildren — or, in the worst case scenario, hastens the demise of our species.
However, I believe that at the heart of a lot of people’s fear is an ignorance of what Artificial Intelligence and Machine Learning actually are. Contemporary popular culture is abound with cautionary tales like The Terminator and Lawnmower Man, in which a malevolent AI turns against its human creators and tries to supplant and destroy them. I have enough faith in the readers of this blog to tell the difference between fantasy and reality. At the same time, however, it is difficult to overcome the existential anxiety evoked by the idea of an intelligent machine.
In this article I aim to dispel some of that fear by demystifying AI, and more specifically Machine Learning. By understanding that Machine Learning is just a set of routines and processes for computers — albeit highly complex ones — I hope that people will abandon the idea that some sort of digital sorcery is behind Artificial Intelligence. In fact, I hope that readers will walk away from this post with the knowledge that Machine Learning is capable of revolutionizing the way we interact with the world around us — and in many ways, it already is.
For starters, Machine Learning is not a new idea thought up by Silicon Valley whippersnappers. It has a relatively long history, dating at least back to the dawn of modern computing in the 1950s. The term Artificial Intelligence was coined by John McCarthy in 1956 at a Dartmouth Conference devoted to the subject — the first of its kind. In the decades that followed, computers were quickly taught how to solve complex algebra and calculus problems. The first computer languages were written during this time.
By the 1960s the intelligence of machines, an idea that was still foreign to the lay population, had become a sudden and startling reality.
It may help, at this point, to provide some definitions. The terms Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably, and they do describe roughly the same idea: machines that are able to act in a way analogous to the human brain. The important difference lies in syntax. AI describes an entity: an intelligence that is made by man and not by nature. ML, on the other hand, refers to a process; specifically, the way in which AI are able to actually learn. For this reason, I find ML to be the more useful term. It points to a specific process that can, with relative ease, be understood by just about anyone.
Because computers are not able to make the intuitive connections that human neurons are so good at, they have to be rigorously taught how to think. This is accomplished through the use of training data. It may have been a few years since you took High School math class, but there’s a good chance you were tested on concepts relating to inputs, outputs, and graphing the relationship between them. This is, in essence, what Machine Learning is. Connections between individual data points — say, the correlation between the time of year and the shopping habits of consumers — are taught to computers through trial and error. Over time, computers that learn enough of these connections become pretty good at extrapolating from them and making new connections themselves.
The conclusions these now-intelligent machines come to are not perfect, but they are constantly improving — and not prone to human bias and miscalculation.
In the simplest form of ML, machines are tested on their ability to correctly predict the connection between different factors. If Machine A says that shoppers will buy more toys at Christmas time, and Machine B says they will buy more swimsuits, Machine A will be kept and Machine B discarded. Over a large number of trials, the machines that are the best at making these connections will propagate and improve, leading to machines that exhibit intelligence above and beyond what an individual person could do. (Shoutout to content guru CGP Grey for explaining this far more eloquently and entertainingly than I ever could.)
There is certainly a lot more nuance to the current iterations of ML than I have laid out, and any readers who are interested (and willing to slog through a lot of complicated math) are encouraged to do further reading. Of particular interest is the history of neural networks, which imitate the complex and manifold connections that occur between synapses in the human brain. For the average reader, however, I hope that a simpler explanation will suffice. The key to alleviating the fears some people have of AI is to understand that the processes really are quite simple beyond all the jargon.
There is no magic going on: it’s just math and mechanics.
Once you have a base-level understanding of ML, there is a whole world of industry applications to learn more about. Forbes did a rundown two years ago of “Use Cases Everyone Should Know About” in fields ranging from health care to malware security. The applications that you’re probably the most familiar with are used by sites like Amazon and YouTube to recommend you products and content based on what you have already enjoyed. But ML is also being used to teach cars to drive by themselves, to help doctors more effectively recognize breast cancer, and to enable computers to hold conversations with human beings.
Unless neo-luddites and smartphone-skeptics join forces to smash the world’s computers, AI and ML are not going anywhere. Along with Blockchain, AI is predicted to be a major game-changer in almost every industry in 2018. The major players in the tech field — the indomitable giants Google, Amazon, and Apple — all have their eyes on the AI prize. Look in the next few years for smart assistants like Alexa and Google Home to become increasingly helpful, and also increasingly ubiquitous. For any company, no matter what field they are in, the time to jump aboard the AI train is now.
Once again, I want to urge a note of caution before you embrace our new artificially intelligent friends. I do not personally believe that the development of a fully sentient, destructive AI is possible within our lifetime, but there are other risks to consider. In the right hands, AI can be a wonderful tool that helps increase convenience, safety, and overall quality of life. In the wrong hands, it can be used for sinister applications like deepfakes — or by hackers and criminals to perform massive cyber-fraud. As with any tool, it is the way we choose to use AI that will dictate how it shapes our future. Let’s make the decision now to use it for the benefit of all people, and to be careful that it doesn’t get so powerful we can’t control it.
By: Joseph Green
Joseph is currently a student at Emerson College, where he studies Visual Media and Communication. As Content Creator, Joseph develops engaging content via blog posts, push notifications, and ad-copy. Joseph also draws upon his research skills to help contextualize the company’s place within larger technological and social trends in the industry.
|
Rise of the Machines
| 84
|
rise-of-the-machines-1908a199609
|
2018-07-26
|
2018-07-26 14:56:48
|
https://medium.com/s/story/rise-of-the-machines-1908a199609
| false
| 1,321
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
X-Mode
|
We Solve for X. http://xmode.io/index.html
|
c4b6895ff2ba
|
xmodesocial
| 30
| 40
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
ea5ea01f7bef
|
2018-07-04
|
2018-07-04 12:33:44
|
2018-07-04
|
2018-07-04 12:34:52
| 1
| false
|
en
|
2018-07-04
|
2018-07-04 12:34:52
| 2
|
190c67bd88d9
| 2.543396
| 0
| 0
| 0
|
We invest a lot of time and effort into supporting other socially minded organisations. As part of our efforts, we regularly invite those…
| 4
|
DataKind UK: using data science for good
We invest a lot of time and effort into supporting other socially minded organisations. As part of our efforts, we regularly invite those organisations to use our Park area for meetups, talks and workshops.
Last week we hosted DataKind UK and Global Witness who spoke to a large audience of friends and supporters about their work uncovering problems in the UK’s Companies Register. In this guest post, Suzy East from DataKind UK tells us a bit more about the project they presented.
Using data science for good
What do you get if you cross four do-gooding data scientists, a corruption fighting nonprofit and more than 10 million data points?
That’s exactly what the attendees at our recent meetup — kindly hosted by our friends at Yoti — came along to find out.
Despite unprecedented temperatures and the World Cup beckoning, more than 80 people showed up on Tuesday evening to hear DataKind UK volunteers talk about their seven month endeavour to uncover the hidden patterns in UK corporate ownership data.
Who are DataKind UK?
We’re a charity that uses data science for social good. We manage teams of pro bono data scientists and technical experts to deliver on projects with our nonprofit partners. We’ve been running for five years now and have a thriving community of volunteer data scientists who love to use their data skills for good.
It started with a DataDive
It all started back in 2016, when we first worked with the anti-corruption organisation, Global Witness. We spent a weekend of exploratory analysis with them, which became known as DataDives. A team of 50 volunteers unearthed a wealth of insights on UK company ownership from the Companies House open dataset. Namely, a worrying lack of data integrity.
For example, in the nationality field, people had found over 500 ways to say ‘British’, including ten people who identified as ‘Cornish’.
This and other findings were fed directly back to Companies House, and they changed the way they collected data as a result! Problem solved, right? If only it were that easy.
Uncovering patterns
After an initial look at the data, it was clear there was more work to do. So in 2017, Global Witness and DataKind embarked on a DataCorps project. On DataCorps, we work with a charity partner over 6–9 months to build a data science solution.
The aim was to take a full snapshot of Companies House ownership data and build a network graph mapping beneficial owners, registered addresses and other key identifiers. This way we could better explore and visualise the data to spot emerging patterns.
A sneak preview
Some of the key findings which we shared at our June meetup include:
4,000 owners are listed under the age of 2 — including one who has yet to be born!
Over 40% of the beneficial owners of Scottish Limited Partnerships (SLPs) are either a national of a former-Soviet country or a company incorporated there — compared to just 0.1% of all Limited Companies.
5 beneficial owners control more than 6,000 companies — might some of these individuals simply be stooges put in place by the real owners?
For more info on what we unearthed throughout the project, check out the report from Global Witness, due to be released in July 2018.
As well as sharing some results from the project, attendees got to find out what makes for a successful data for good project, and what it’s like to volunteer on one.
If you’re interested in finding out more, or volunteering with DataKind UK, please come along to our next meetup or sign up to our mailing list.
Suzy East
Project & Events Coordinator
DataKind UK
|
DataKind UK: using data science for good
| 0
|
datakind-uk-using-data-science-for-good-190c67bd88d9
|
2018-07-04
|
2018-07-04 12:34:53
|
https://medium.com/s/story/datakind-uk-using-data-science-for-good-190c67bd88d9
| false
| 621
|
Opinions and developments in identity related technology. Curated by the team at Yoti, your digital identity app.
| null |
getyoti
| null |
Identity Talk
|
hello@yoti.com
|
identity-talk
|
DATA SECURITY,IDENTITY,TECHNOLOGY
|
getyoti
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Rachael Trotman
| null |
a56c49293fcb
|
rachael.trotman
| 6
| 8
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-07
|
2018-03-07 16:04:25
|
2018-03-07
|
2018-03-07 16:07:33
| 1
| false
|
en
|
2018-03-07
|
2018-03-07 16:07:33
| 1
|
190cef43566d
| 1.886792
| 1
| 0
| 0
|
What if you could build a character that could learn while it played? Think about the types of game play you could develop where the…
| 1
|
Advanced games AI with genetic algorithms, neural networks & Q-learning in C# and Tensorflow for Unity
What if you could build a character that could learn while it played? Think about the types of game play you could develop where the enemies started to outsmart the player. This is what machine learning in games is all about. In this course we will discover the fascinating world of artificial intelligence beyond the simple stuff and examine the increasingly popular domain of machines that learn to think for themselves.
In this course, Penny introduces the popular machine learning techniques of genetic algorithms and neural networks using her internationally acclaimed teaching style and knowledge from a Ph.D in game character AI and over 25 years experience working with games and computer graphics. In addition she’s written two award winning books on games AI and two others best sellers on Unity game development. Through-out the course you will follow along with hands-on workshops designed to teach you about the fundamental machine learning techniques distilling the mathematics in a way that the topic becomes accessible to the most noob of novices.
Learn how to program and work with:
genetic algorithms;
neural networks;
human player captured training sets;
reinforcement learning;
Unity’s ML-Agent plugin; and,
Tensorflow;
Contents and Overview
The course starts with a thorough examination of genetic algorithms that will ease you into one of the simplest machine learning techniques that is capable of extraordinary learning. You’ll develop an agent that learns to camouflage, a flappy bird inspired application in which the birds learn to make it through a maze and environment sensing bots that learn to stay on a platform.
Following this you’ll dive right into creating your very own neural network in C# from scratch. With this basic neural network you will find out how to train behaviour, capture and use human player data to train an agent and teach a bot to drive. In the same section you’ll have the Q-learning algorithm explained before integrating it into your own applications.
By this stage you’ll feel confident with the terminology and techniques used throughout the deep learning community and ready to tackle Unity’s experimental ML-Agents. Together with Tensorflow you’ll be throwing agents in the deep end and reinforcing their knowledge to stay alive in a variety of game environment scenarios.
By the end of the course you’ll have a well equiped toolset of basic and solid machine learning algorithms and applications that will see you able to decipher the latest research publications and integrate the latest developments into your work while keeping abreast of Unity’s ML-Agents as they evolve from experimental to production release.
|
Advanced games AI with genetic algorithms, neural networks & Q-learning in C# and Tensorflow for…
| 1
|
advanced-games-ai-with-genetic-algorithms-neural-networks-q-learning-in-c-and-tensorflow-for-190cef43566d
|
2018-06-10
|
2018-06-10 00:07:42
|
https://medium.com/s/story/advanced-games-ai-with-genetic-algorithms-neural-networks-q-learning-in-c-and-tensorflow-for-190cef43566d
| false
| 447
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
martins rodrgz
| null |
3ad31e094f43
|
martinsrodrgz
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-08
|
2018-02-08 03:36:20
|
2018-02-08
|
2018-02-08 03:37:11
| 0
| false
|
en
|
2018-02-08
|
2018-02-08 03:37:11
| 1
|
190dea7ba3f8
| 0.241509
| 1
| 0
| 0
|
The AI algorithm is being trained to spot signs of glaucoma, age-related macular degeneration and diabetic retinopathy, which are three of…
| 3
|
Google’s DeepMind develops AI to diagnose eye diseases
The AI algorithm is being trained to spot signs of glaucoma, age-related macular degeneration and diabetic retinopathy, which are three of the biggest eye diseases inhttps://biospectrumasia.com/news/46/10278/googles-deepmind-develops-ai-to-diagnose-eye-diseases.htmlthe world, quicker and more efficiently than a human specialist.
Click here to read more https://biospectrumasia.com/news/46/10278/googles-deepmind-develops-ai-to-diagnose-eye-diseases.html
|
Google’s DeepMind develops AI to diagnose eye diseases
| 4
|
googles-deepmind-develops-ai-to-diagnose-eye-diseases-190dea7ba3f8
|
2018-02-08
|
2018-02-08 06:55:49
|
https://medium.com/s/story/googles-deepmind-develops-ai-to-diagnose-eye-diseases-190dea7ba3f8
| false
| 64
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
BioSpectrumAsia
|
BioSpectrum is an integrated B2B media platform for the bioscience industry in the Asia Pacific region.
|
8ca91c35a292
|
AsiaBioSpectrum
| 1
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-13
|
2018-03-13 17:43:04
|
2018-03-13
|
2018-03-13 17:46:49
| 1
| false
|
en
|
2018-03-13
|
2018-03-13 17:46:49
| 7
|
190f2516fa7
| 1.856604
| 5
| 0
| 0
|
With more and more decisions relying on AI people started to realize that blind application of algorithms can lead to disturbing biases.
| 2
|
AI: The Good, the Bad and the Biased
Photo by phonlamai/ Depositphotos
Artificial Intelligence is getting into our everyday life and is expected to generate significant benefits for businesses and individuals in the near future. According to Accenture, AI technologies are expected to increase productivity by 40% and double economic growth in developed economies by 2035.
Still, despite the shiny perspectives there is a “BUT”. With more and more decisions relying on AI people started to realize that blind application of algorithms can lead to disturbing biases. An algorithm was trained on Google News articles to create word embeddings — vector representations of words with which you can relate words one to another, e.g. “man is to king as woman is to queen”. When you asked the algorithm “man is to computer programmer as woman is to?”, the answer was “homemaker”[1]. Definitely not the answer everyone expects today. While sentiment analysis and the likes might not seem to be a big issue the list goes on. COMPASS one of the risk-assessment algorithmsbeing used in the US justice system to identify potential re-offenders was repeatedly accused of being biased against minorities.
The negative consequences of biased predictions are clear: they create unhealthy environment for decision making and they may lead to “self-fulfilling forecasts” (if you lock down a person wrongly identified as a potential re-offender, this person obviously will not commit a crime, creating a positive feedback loop for wrong forecasts in the future).
Despite the gloomy outlook it appears that AI is not the one to blame. Even the best algorithms are trained on the data that exists in the world and is often generated by humans in one way or another. And those humans are well-known for stereotypes and opinion biases. One research identified that first-time borrowers assigned to officers of the other gender pay, on average, 35 basis points higher interest rates compared to borrowers assigned to same-gender officers[2]. Another study demonstrated that hiring discrimination against Blacks hasn’t changed in the last 25 years[3]. Gender biases, age discrimanation, you name it.
Luckily for everyone algorithms can be fixed to eliminate bias.
But who will fix the bias in our heads?
[1] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai
[2] Beck, Thorsten, Patrick Behr, and Andreas Madestam (2012), “Sex and Credit: Is There a Gender Bias in Lending?”, CentER Discussion Paper, 2012–062.
[3]1. Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Lincoln Quillian, Devah Pager, Ole Hexel and Arnfinn H. Midtbøen
|
AI: The Good, the Bad and the Biased
| 9
|
ai-the-good-the-bad-and-the-biased-190f2516fa7
|
2018-03-23
|
2018-03-23 23:28:55
|
https://medium.com/s/story/ai-the-good-the-bad-and-the-biased-190f2516fa7
| false
| 439
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Anatolii Iakimets
| null |
270f31af848a
|
anatoliy.yakimets
| 96
| 93
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-20
|
2017-12-20 15:58:07
|
2018-01-24
|
2018-01-24 07:31:32
| 1
| false
|
en
|
2018-01-24
|
2018-01-24 07:32:42
| 1
|
190f4f2e9b85
| 1.581132
| 3
| 0
| 0
|
The term Data Science by the definition is self-explanatory, it simply means applying the scientific method (yep, that involves creating…
| 3
|
Data Science ( and Statistics) for Humans
The term Data Science by the definition is self-explanatory, it simply means applying the scientific method (yep, that involves creating mathematical or statistical models) to a set of data. Still didn’t get it? LET ME EXPLAIN.
Data Science is Statistics on steroid.
In fact Data Science is a discipline that existed even before Computers and Artificial Intelligence.
Data Science is the discipline of Storing data in a organized manner, Fetching the data and Analyzing that data.
For example when the accountants and auditors in the olden days used to store the financial records and the shipping logs, they were storing data in a defined structure for future use.
There are 3 pillars of work done by the data scientists in the past.
Storing Data
Scrubbing or cleaning data (Data munging)
Analyzing the data
In addition to the above after the advent of computers and AI technologies and after the Datafication( https://en.wikipedia.org/wiki/Datafication) of industries scientists had treasure troves of data to work with (Both OLAP and OLTP more on that in another post), hence they started utilizing Machine Learning to get Predictive analysis reports and clustering records of data to find similar attributes and behaviors.
Therefore we are well advised to assume the 4th pillar of data science as Prediction.
4. Predicting and Clustering data.(using machine learning)
So for nerds out there Storing + Cleaning + Analyzing = Statistics.
Statistics + Machine Learning = Data Science.
Data Science is an interesting discipline of work, and needs lot of Mathematical skills. However I intend to make the experience as easy as possible in future where I will go through all the four pillars of Data Science in detail and the technologies utilized in making them happen.
In each chapters please read the document, and then go through the supporting codes and test supporting codes of your own at the end of each chapter.
Finally as you read through, please click the heart at the bottom of each page to recommend this to more people so you can help beginners like yourself who are passionate in data science find this series.
Thank you for reading, wishing us all the best as we commence on this journey together.
|
Data Science ( and Statistics) for Humans
| 6
|
data-science-and-statistics-for-humans-190f4f2e9b85
|
2018-06-13
|
2018-06-13 19:36:03
|
https://medium.com/s/story/data-science-and-statistics-for-humans-190f4f2e9b85
| false
| 366
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Mohamed Ayoub
|
An AI and Data Science researcher by day, and a Jiu Jitsu coach by night. I love riding my bicycle and sometimes I cook. Physics and Technology is my dope.
|
7cbb06b667c5
|
mohamedayoob01
| 6
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-11
|
2018-08-11 16:08:38
|
2018-08-11
|
2018-08-11 16:09:54
| 1
| false
|
en
|
2018-08-11
|
2018-08-11 16:17:24
| 1
|
1910dbd2c7ac
| 0.592453
| 1
| 0
| 0
|
Contest “the most overcrowded service offering”
| 5
|
🔥🔥🔥 WIN 1 ETH!!! 🔥🔥🔥 Vote for ServAdvisor professionals!!!
Contest “the most overcrowded service offering”
From August 10th to October 10th:
While traveling or at home take a picture of a street with overcrowded services offering (restaurants, clubs, shops etc.) and place it here with a brief description of the place.
Share your picture on our official group https://t.me/ServAdvisor.
The best picture will be selected by the team and you can win one of our SRV token prizes!
Prizes:
1st place: 65 000 SRV ( = 1 ETH!!!)
2nd place: 32 500 SRV (= 0,5 ETH!!!)
3rd place: 21 500 SRV (= 0,3 ETH!!!)
|
🔥🔥🔥 WIN 1 ETH!!! 🔥🔥🔥 Vote for ServAdvisor professionals!!!
| 40
|
win-1-eth-vote-for-servadvisor-professionals-1910dbd2c7ac
|
2018-08-11
|
2018-08-11 16:17:24
|
https://medium.com/s/story/win-1-eth-vote-for-servadvisor-professionals-1910dbd2c7ac
| false
| 104
| null | null | null | null | null | null | null | null | null |
Ethereum
|
ethereum
|
Ethereum
| 76,961
|
ServAdvisor
| null |
64017f48c363
|
ServAdvisor
| 32
| 1
| 20,181,104
| null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.