Dataset columns (name, dtype, length range or distinct values):

audioVersionDurationSec      float64   0 – 3.27k
codeBlock                    string    length 3 – 77.5k
codeBlockCount               float64   0 – 389
collectionId                 string    length 9 – 12
createdDate                  string    741 distinct values
createdDatetime              string    length 19 – 19
firstPublishedDate           string    610 distinct values
firstPublishedDatetime       string    length 19 – 19
imageCount                   float64   0 – 263
isSubscriptionLocked         bool      2 classes
language                     string    52 distinct values
latestPublishedDate          string    577 distinct values
latestPublishedDatetime      string    length 19 – 19
linksCount                   float64   0 – 1.18k
postId                       string    length 8 – 12
readingTime                  float64   0 – 99.6
recommends                   float64   0 – 42.3k
responsesCreatedCount        float64   0 – 3.08k
socialRecommendsCount        float64   0 – 3
subTitle                     string    length 1 – 141
tagsCount                    float64   1 – 6
text                         string    length 1 – 145k
title                        string    length 1 – 200
totalClapCount               float64   0 – 292k
uniqueSlug                   string    length 12 – 119
updatedDate                  string    431 distinct values
updatedDatetime              string    length 19 – 19
url                          string    length 32 – 829
vote                         bool      2 classes
wordCount                    float64   0 – 25k
publicationdescription       string    length 1 – 280
publicationdomain            string    length 6 – 35
publicationfacebookPageName  string    length 2 – 46
publicationfollowerCount     float64
publicationname              string    length 4 – 139
publicationpublicEmail       string    length 8 – 47
publicationslug              string    length 3 – 50
publicationtags              string    length 2 – 116
publicationtwitterUsername   string    length 1 – 15
tag_name                     string    length 1 – 25
slug                         string    length 1 – 25
name                         string    length 1 – 25
postCount                    float64   0 – 332k
author                       string    length 1 – 50
bio                          string    length 1 – 185
userId                       string    length 8 – 12
userName                     string    length 2 – 30
usersFollowedByCount         float64   0 – 334k
usersFollowedCount           float64   0 – 85.9k
scrappedDate                 float64   20.2M – 20.2M
claps                        string    163 distinct values
reading_time                 float64   2 – 31
link                         string    230 distinct values
authors                      string    length 2 – 392
timestamp                    string    length 19 – 32
tags                         string    length 6 – 263
Row 1 (field: value):
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: f60682517af3
createdDate: 2017-12-04
createdDatetime: 2017-12-04 17:39:39
firstPublishedDate: 2017-12-04
firstPublishedDatetime: 2017-12-04 17:32:22
imageCount: 1
isSubscriptionLocked: false
language: en
latestPublishedDate: 2017-12-04
latestPublishedDatetime: 2017-12-04 17:40:12
linksCount: 1
postId: 161965a350c6
readingTime: 2.267925
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Written by Stuart Rosewall, Senior Architect / Technology Strategist at SapientRazorfish
tagsCount: 5
text:

IotHow IoT will transform our cities, work, and life

Written by Stuart Rosewall, Senior Architect / Technology Strategist at SapientRazorfish

The Internet of Things will bring smart capabilities to all areas of our modern lives. In some respects, it already has, but we should expect to see more with even greater sophistication. For an example we need look no further than the quantified self. What could be more important than using IoT and smart solutions to improve and sustain our own health and wellbeing? Increasingly, people are either passively or actively monitoring their exercise and vital signs. They share their personal data with service providers to receive instruction, encouragement and support. This quantification comes either from sensors contained within smart phones or from wearable devices, and the availability and analysis of these data points can have a transformative outcome for individuals. The sharing of data allows virtual personal trainers and social networks to help educate, motivate and reward, and the beneficial side effects include a healthier population and reductions in health care costs.

As people become comfortable with the ability of smart technology to record and inform securely, so the appetite will grow for more detailed data gathering and analysis. Smart phones and wearables can monitor your steps, speed, heart rate, sleep patterns etc., but why stop there? Why not have data from other connected devices for metrics like your blood pressure, oxygen and sugar levels, and, how can I politely put it, your “bodily outputs”? Perhaps a connected loo can perform some convenient sampling as part of your morning routine. Connect in digital scales for weight measurement and you’ll be able to gather a powerful set of information for your own personal “preventative maintenance”: a rich historical data set which will allow for trends and issues to be spotted and remedial actions recommended.

True smartness, though, comes from the ability to apply context to data. Why not let your health and wellbeing data aggregator have access to your DNA profile so that it can begin to monitor your progress against expected outcomes? Your DNA is individual to you, but it will have similarities to others and can be matched with other profiles, with advice provided based on crowd-sourced experiences. Your virtual personal trainer will be able to be entirely personal to you, advising you about your dietary needs as well as recommending particular activity types and exertion levels based on what has worked for others.

If that all sounds a bit far-fetched then look towards what is already being achieved in farming. Precision farming is looking to help maximise agricultural productivity by using IoT and smart technologies to monitor and analyse data from herds and fields. As the growth in the human population makes more demands on farming, so technology is being deployed to help increase efficiency. Livestock are even being provided with their own wearables, with their vital signs continually tracked and reported. The big data from entire herds is then used to help keep them healthy and able to perform at their best. Now I would not want to suggest that we should consider ourselves as livestock, but it’s certainly interesting to see that similar IoT and smart solutions can be deployed in both domains.

Originally published at digileaders.com on December 4, 2017.
title: IotHow IoT will transform our cities, work, and life
totalClapCount: 0
uniqueSlug: iothow-iot-will-transform-our-cities-work-and-life-161965a350c6
updatedDate: 2017-12-04
updatedDatetime: 2017-12-04 17:40:14
url: https://medium.com/s/story/iothow-iot-will-transform-our-cities-work-and-life-161965a350c6
vote: false
wordCount: 548
publicationdescription: Thoughts on leadership, strategy and digital transformation across all sectors. Articles first published on the Digital Leaders blog at digileaders.com
publicationdomain: null
publicationfacebookPageName: digitalleadersprogramme
publicationfollowerCount: null
publicationname: Digital Leaders
publicationpublicEmail: louise.stokes@digileaders.com
publicationslug: digital-leaders-uk
publicationtags: DIGITAL LEADERSHIP,DIGITAL TRANSFORMATION,DIGITAL STRATEGY,DIGITAL GOVERNMENT,INNOVATION
publicationtwitterUsername: digileaders
tag_name: Data
slug: data
name: Data
postCount: 20,245
author: Digital Leaders
bio: Informing and inspiring innovative digital transformation digileaders.com
userId: c0cad3f73a0
userName: DigiLeaders
usersFollowedByCount: 2,783
usersFollowedCount: 2,148
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Row 2 (field: value):
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-09-20
createdDatetime: 2018-09-20 18:39:49
firstPublishedDate: 2018-09-20
firstPublishedDatetime: 2018-09-20 18:40:51
imageCount: 1
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-09-20
latestPublishedDatetime: 2018-09-20 18:40:51
linksCount: 4
postId: 161a4c294df
readingTime: 0.837736
recommends: 1
responsesCreatedCount: 0
socialRecommendsCount: 1
subTitle: In Q2, AI had the second highest exit activity on record. Now armed with the data through June 2018, we’re performing a mid-year status…
tagsCount: 5
text:

Mid-Year Artificial Intelligence Exits Analysis

In Q2, AI had the second highest exit activity on record. Now armed with the data through June 2018, we’re performing a mid-year status check on how this year is shaping up. Based on analysis on our AI research platform, we see that exit activity in the first half of 2018 is slightly down from 2017.

2018 Mid-Year AI Exit Activity Lower Than 2017 But Higher Than 2016

Let’s take a closer look at the number of AI exit events by year. The above graphic shows 32 exits in the first half of 2018. For the past three years, Q3 and Q4 accounted for 46% of total exit events on average. If that trend holds, 2018 exits finish the year slightly lower than 2017, but higher than 2016. We’ll see if the second half of the year changes this trend!

To learn more about our complete artificial intelligence report and research platform, visit us at www.venturescanner.com or contact us at info@venturescanner.com.
title: Mid-Year Artificial Intelligence Exits Analysis
totalClapCount: 1
uniqueSlug: mid-year-artificial-intelligence-exits-analysis-161a4c294df
updatedDate: 2018-09-20
updatedDatetime: 2018-09-20 18:40:51
url: https://medium.com/s/story/mid-year-artificial-intelligence-exits-analysis-161a4c294df
vote: false
wordCount: 169
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Artificial Intelligence
slug: artificial-intelligence
name: Artificial Intelligence
postCount: 66,154
author: Venture Scanner
bio: Technology and analyst powered research firm. Visit us at www.venturescanner.com.
userId: 9834d2816c19
userName: VentureScanner
usersFollowedByCount: 2,012
usersFollowedCount: 11
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Row 3 (field: value):
audioVersionDurationSec: 0
codeBlock:

```shell
$ tree -v --charset utf-8
.
├── churn_project.ipynb
├── config
│   └── config.json
└── sql
    ├── 1_query.sql
    ├── 2_query.sql
    └── more_queries_here.sql
```

```sql
-- Two CTEs
WITH cte_1 AS (
    -- First section
    -- Put SQL here
), cte_2 AS (
    -- Second section
    -- Put SQL here
)
-- End both CTEs and put them together
SELECT *
FROM cte_1
JOIN cte_2 ON cte_1.id = cte_2.id  -- hypothetical join key
WHERE cte_2.foo = 'bar'
```

```sql
-- Convert time_zone_data from query_table to UTC to have a common
-- time for all the data to reference
query_table.time_zone_data
    AT TIME ZONE 'utc'
    AT TIME ZONE (SELECT name FROM pg_timezone_names WHERE name = iso_time_zone)
```

```
client_id  item_id  date      unique_event
1          a        1-1-2018  1            <-- First time client 1 bought a
1          a        1-2-2018  0
1          b        1-3-2018  1            <-- First time client 1 bought b
1          c        1-4-2018  1
1          a        1-5-2018  0
1          d        1-5-2018  1
1          a        1-6-2018  0
2          b        1-4-2017  1            <-- New client
```

```sql
-- I'll explain this below
SUM(CASE column_name.unique_event WHEN 1 THEN 1 ELSE 0 END)
    OVER (PARTITION BY column_name.client_id
          ORDER BY column_name.client_id
          ROWS BETWEEN UNBOUNDED PRECEDING
          AND CURRENT ROW) AS "unique_event_count"
```

```
client_id  item_id  date      unique_event  unique_event_count
1          a        1-1-2018  1             1
1          a        1-2-2018  0             1
1          b        1-3-2018  1             2  <-- +1
1          c        1-4-2018  1             3  <-- +1
1          a        1-5-2018  0             3
1          d        1-5-2018  1             4  <-- +1
1          a        1-6-2018  0             4
2          b        1-4-2017  1             1  <-- New client
```

/config/config.json:

```json
{
    "database": "database_name",
    "schema": "schema_type",
    "user": "user_name",
    "host": "host_name",
    "port": "port_no",
    "passw": "password"
}
```

/churn_project.ipynb:

```python
import json

import pandas as pd
import psycopg2

# Open config file
with open('./config/config.json') as f:
    config = json.load(f)

# Get connection params as a string
postgres_config_string = "host=%s dbname=%s user=%s password=%s" % (
    config.get('host'), config.get('database'),
    config.get('user'), config.get('passw'))

# Connect to the db; if a connection cannot be made
# an exception will be raised here
con = psycopg2.connect(postgres_config_string)

# Save a query as a dataframe
query_1 = open('./sql/1_query.sql', 'r')
df_1 = pd.read_sql(query_1.read(), con=con)
# Continue with additional queries ...
```

```python
# Here is an example of the above:
# creating new_data_frame from df_1 and df_2 with only
# columns 1, 2, and 3 from a left join (the join column
# must appear in both selections)
new_data_frame = pd.merge(df_1[['use_this_index', 'col_1']],
                          df_2[['use_this_index', 'col_2', 'col_3']],
                          on='use_this_index', how='left')
```

```python
print(df['not_binned_values'])
# 1    55000
# 2    12000
# 3    99000
# 4     5000
# 5     1000
# ...

# Bin df['not_binned_values'] into discrete categories
bins = [0, 10000, 20000, 50000, 100000, 500000]
labels = [0, 1, 2, 3, 4]
df['binned_values'] = pd.cut(df['not_binned_values'], bins=bins,
                             labels=labels, include_lowest=True)
print(df['binned_values'])
# 1    3
# 2    1
# 3    3
# 4    0
# 5    0
# ...
```

```python
# Specify this is categorical data
df['city'] = df['city'].astype('category')

# Get the values to associate with each category
city_codes = dict(enumerate(df['city'].cat.categories))

# Assign the integer codes to the column
df['city'] = df['city'].cat.codes
print(df['city'])
# 1    12
# 2     8
# 3     4
# 4     1
# 5     2
# ...
```

```python
print(df['true_false_column'])
# 1     True
# 2    False
# 3     True
# 4     True
# 5    False
# ...

# Convert a column with True and False values to 1s and 0s
df['0_1_column'] = df['true_false_column'].astype('bool') * 1
print(df['0_1_column'])
# 1    1
# 2    0
# 3    1
# 4    1
# 5    0
# ...
```
```python
from sklearn.preprocessing import StandardScaler, normalize

# Standardize the dataframe:
# values are transformed to have a mean of 0 and an sd of 1
scaler = StandardScaler().fit(df)
std_df = scaler.transform(df)

# Normalize the dataframe:
# values are transformed to be between 0 and 1
norm_df = normalize(df, norm='l2')
```

```
VIF = 1      --> Not correlated
1 < VIF < 5  --> Moderately correlated
VIF >= 5     --> Highly correlated
```

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def calculate_vif_(df, thresh=5):
    '''
    Calculates the VIF for each feature in a pandas dataframe.
    A constant must be added for variance_inflation_factor
    or the results will be incorrect.

    :param df: the pandas dataframe
    :param thresh: the max VIF value before the feature is removed
                   from the dataframe
    :return: dataframe with features removed
    '''
    const = add_constant(df)
    vif_df = pd.Series([variance_inflation_factor(const.values, i)
                        for i in range(const.shape[1])],
                       index=const.columns).to_frame()
    vif_df = vif_df.sort_values(by=0, ascending=False).rename(columns={0: 'VIF'})
    vif_df = vif_df.drop('const')
    vif_df = vif_df[vif_df['VIF'] > thresh]

    print('Features above VIF threshold:\n')
    print(vif_df)

    col_to_drop = list(vif_df.index)
    for i in col_to_drop:
        print('Dropping: {}'.format(i))
        df = df.drop(columns=i)
    return df

# Check your dataframe for multicollinearity
# The values are just examples
df = calculate_vif_(df)
```

```
Features above VIF threshold:

                VIF
feature_1  6.298834
feature_2  5.423374
Dropping: feature_1
Dropping: feature_2
```

```
print(df)

c_id  churn?  feature_3  feature_4  feature_5  feature_6  ...
1     1       1          0          106256.54  9090
2     0       1          0          502.22     240
3     0       0          3          800.99     120.2
4     0       1          4          12.45      1456
5     0       0          2          432.02     222.22
6     1       1          1          726.90     123.99
```

```python
# Save variables for use later
y = df['churn?']
ids = df['c_id']
df = df.drop(columns=['churn?', 'c_id'])
```

```python
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import regularizers
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize, StandardScaler

# Input features are 'df', and outputs classified as 0 and 1 are 'y'

# Standardize data for testing
scaler = StandardScaler().fit(df)
std_df = scaler.transform(df)

# Normalise data for testing
norm_df = normalize(df, norm='l2')  # Why l1 and l2?

# Split data
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.4,
                                                    random_state=1075)

# Define the Keras model in a function for easier swapping of layers
# and tuning of hyperparameters
def keras_model(loss, optimizer, metrics):
    '''
    Keras model for classification. This is just an example,
    as each model should be tuned for your specific use case.
    '''
    model = Sequential()
    # Add and subtract layers, activation functions, and dropout
    model.add(Dense(units=50, activation='relu', input_dim=len(df.columns)))
    # Add more layers here ...
    model.add(Dense(1, activation='sigmoid'))
    # Compile the model
    model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
    return model

# Create multiple models
def keras_model_three_layers(loss, optimizer, metrics):
    ...

def keras_model_extra_wide(loss, optimizer, metrics):
    ...

def keras_model_added_foo(loss, optimizer, metrics):
    ...
```
```python
from keras.callbacks import Callback
from sklearn.metrics import roc_auc_score

class roc_callback(Callback):
    '''
    Compute an ROC AUC score for a Keras model during
    the training of each epoch
    '''
    def __init__(self, training_data, validation_data):
        self.x = training_data[0]
        self.y = training_data[1]
        self.x_val = validation_data[0]
        self.y_val = validation_data[1]

    def on_train_begin(self, logs={}):
        return

    def on_train_end(self, logs={}):
        return

    def on_epoch_begin(self, epoch, logs={}):
        return

    def on_epoch_end(self, epoch, logs={}):
        y_pred = self.model.predict(self.x)
        roc = roc_auc_score(self.y, y_pred)
        y_pred_val = self.model.predict(self.x_val)
        roc_val = roc_auc_score(self.y_val, y_pred_val)
        print('\rroc-auc: {} - roc-auc_val: {} \n'.format(
            str(round(roc, 4)), str(round(roc_val, 4))))
        return

    def on_batch_begin(self, batch, logs={}):
        return

    def on_batch_end(self, batch, logs={}):
        return

# Create our model to evaluate
model = keras_model('binary_crossentropy', 'rmsprop', ['accuracy'])

# Fit the model
model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=100, batch_size=1500, verbose=1,
          callbacks=[roc_callback(training_data=(X_train, y_train),
                                  validation_data=(X_test, y_test))])
```

```
# You should see output similar to this
Train on 2212 samples, validate on 1475 samples
Epoch 1/100
2212/2212 [==============================] - 1s 577us/step - loss: 0.6875 - acc: 0.5502 - val_loss: 0.6631 - val_acc: 0.6637
roc-auc: 0.8536 - roc-auc_val: 0.8494
```

```python
# Save the model's output
history = model.fit(X_train, y_train, validation_data ...)
```

```python
%matplotlib inline
import matplotlib.pyplot as plt

def accuracy_loss_graph(history):
    '''
    Create an accuracy and loss graph for the train and test data sets
    '''
    plt.style.use('ggplot')  # Make it look nice
    fig, axs = plt.subplots(1, 2)
    fig.suptitle('Accuracy and Model Loss')
    fig.set_size_inches(18, 8)

    # Accuracy
    plt.subplot(1, 2, 1)
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')

    # Loss
    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')

    plt.show()

accuracy_loss_graph(history)
```

```
Epoch 100/100
2212/2212 [==============================] - 0s 7us/step - loss: 0.4148 - acc: 0.8431 - val_loss: 0.3797 - val_acc: 0.8529
roc-auc: 0.8949 - roc-auc_val: 0.8940
```

```python
# Visualize the ROC curve
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

# Get metrics to graph
y_pred_keras = model.predict(X_test).ravel()
fpr_keras, tpr_keras, thresholds_keras = roc_curve(y_test, y_pred_keras)
auc_keras = auc(fpr_keras, tpr_keras)

def create_roc_graph():
    '''
    Create an ROC graph for different estimators
    '''
    plt.figure(figsize=(16, 10))
    # Set baseline
    plt.plot([0, 1], [0, 1], 'k--')
    # Map different classifiers
    plt.plot(fpr_keras, tpr_keras,
             label='Keras (area = {:.3f})'.format(auc_keras))
    # Label graph
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.title('Receiver Operating Characteristic (ROC) Curve')
    plt.legend(loc='best')
    plt.show()

create_roc_graph()
```

```python
from sklearn.metrics import classification_report, confusion_matrix

target_names = ['0', '1']  # Whether a target has churned or not

print('Confusion Matrix')
cm = confusion_matrix(y_test, rmsprop_model.predict_classes(X_test))
tn, fp, fn, tp = cm.ravel()
print(cm)
print('True Negative: {}\nFalse Positive: {}\nFalse Negative: {}\n'
      'True Positive: {}\n'.format(tn, fp, fn, tp))

print('Classification Report')
print(classification_report(y_test, rmsprop_model.predict_classes(X_test),
                            target_names=target_names))
```

```
# You should see output similar to this
Confusion Matrix
[[374 122]
 [ 95 884]]
True Negative: 374
False Positive: 122
False Negative: 95
True Positive: 884

Classification Report
             precision    recall  f1-score   support
          0       0.80      0.75      0.78       496   *
          1       0.88      0.90      0.89       979   **
avg / total       0.85      0.85      0.85      1475

# *  374 + 122 = 496
# ** 95 + 884 = 979
```

```python
# Try a precision-recall curve
from sklearn.metrics import precision_recall_curve

# Get values for the PR curve
y_pred = rmsprop_model.predict_proba(X_test)
precision_keras, recall_keras, thresholds_keras = precision_recall_curve(y_test, y_pred)
area_keras = auc(recall_keras, precision_keras)
print("Area Under Curve: %0.2f" % area_keras)

def precision_recall_graph(area_keras):
    '''
    Plot the precision-recall curve for a classifier
    '''
    plt.figure(figsize=(15, 10))
    plt.plot(recall_keras, precision_keras,
             label='Precision-Recall for Keras Model')
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.ylim([0.0, 1.05])
    plt.xlim([0.0, 1.05])
    plt.title('Precision-Recall: AUC=%0.2f' % area_keras)
    plt.legend(loc='best')
    plt.show()

precision_recall_graph(area_keras)
```

```python
from sklearn.metrics import brier_score_loss

print('Keras RMSPROP brier score: {}'.format(
    round(brier_score_loss(y_test, y_pred_rms, pos_label=1), 4)))
print('Keras ADAM brier score: {}'.format(
    round(brier_score_loss(y_test, y_pred_adam, pos_label=1), 4)))
print('Logistic Regression brier score: {}'.format(
    round(brier_score_loss(y_test, y_pred_logr, pos_label=1), 4)))
print('Random Forest brier score: {}'.format(
    round(brier_score_loss(y_test, y_pred_rf, pos_label=1), 4)))
```

```
# Results
Keras RMSPROP brier score: 0.1146
Keras ADAM brier score: 0.1154
Logistic Regression brier score: 0.1315
Random Forest brier score: 0.0996
```

```python
# Create a dictionary of client ids and the row they are on
# Keys are row indices, values are client ids
client_dict = dict(map(lambda t: (t[0], t[1]), enumerate(ids)))

# Which client is on row 2222?
print(client_dict.get(2222))
# >> 7153

# Return the values of a specific row to feed into our model
model.predict(df.iloc[[2222]].values)
# >> array([[.7888]], dtype=float32)

# Save the model in your root directory
model.save('churn_model.h5')
```
codeBlockCount: 113
collectionId: null
createdDate: 2018-07-08
createdDatetime: 2018-07-08 02:33:52
firstPublishedDate: 2018-07-13
firstPublishedDatetime: 2018-07-13 20:41:08
imageCount: 10
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-07-13
latestPublishedDatetime: 2018-07-13 20:41:08
linksCount: 17
postId: 161b8cf19830
readingTime: 25.778302
recommends: 3
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: A common problem in data science is having work sit on your local machine, only to be used by yourself and not those who requested the…
tagsCount: 5
text:

Building a Churn Model with Keras, Flask, Heroku, and Postgres — Deploying a Usable Model to Production Pt. 1

A common problem in data science is having work sit on your local machine, only to be used by yourself and not those who requested the model. Not because they won’t find it useful, but because they will not be able to access it in its current state. The trained model is not helpful if it only lives on your local machine and requires your input every time someone looks at it. How do you get it into the hands of someone so they can use it without you? How can you get it into production so you can move on to the next thing? There are a few ways, and I’ve found the easiest is to create a web app with Flask and deploy it to Heroku for stakeholders to use. Once you are comfortable with this process you’ll be able to update your models along with others, creating a fast iteration and deployment process.

The Final Product

Note: I originally had this as one article, however the more I included the longer it became. To make things easier, I’ve split it into two different articles.

When this is finally done, the product should:

- Be accessed internally by stakeholders on a private server
- Contain minimal instructions and clear error messages
- Not require maintenance besides reweighting and updating the model
- Show users a percentage of how likely their client is to churn
- Provide clear results to inform business decisions

Here is a little preview of the final product: The final product in action!

In this first part of the series I’ll go over: churn methodology; getting your data with PostgreSQL; modelling techniques with Python; and creating and evaluating a Keras model. In the second article I’ll go over: making a front-end application using Flask, and deploying the model to Heroku.

How do You Measure Churn?

Different businesses, and stakeholders within the same business, look at and measure churn differently.
What is most important is to dig into what factors you think might be influencing churn for your business, start to form a hypothesis, and then model to determine what actually causes churn. How many products do they have? How long have they been a customer? Where are they located? What industry are they in? Reforge has a great article on how churn varies by the amount of monthly spend, but again this is only one metric to consider.

Consider these points when building your churn model and determining what to look for within your data:

- There is no one right way to look at churn, as all businesses think about their customer lifecycle differently.
- Tie your model’s predictive power to revenue: retention costs vs. lost revenue.
- Do different stakeholders think about churn differently? Does a customer service rep think about retention the same way an account manager does?
- Will churn predictions lead to better segmentation of customers? Can you get the results into your CRM?

You’ll need to start asking yourself these questions as you start gathering information to form your churn hypothesis. You might have the data you’re looking for, but in most cases you won’t, which means you’ll need to start tracking more events in your database or do some serious feature engineering. In the model I built, a few of the features are:

- City, industry, and country demographics;
- Type of service plan;
- Amount of additional service fees;
- Online vs. in-store purchases;
- Revenue and purchases per month of activity;
- Total discounts / total customer spend;
- Number of customer support tickets;
- Customer support ranking;
- Unique items purchased;
- Orders with late shipping, and;
- Last sign-in date on website.

There are more features than this in my model, but these are just a few to think about. There is a good Kaggle competition about churn which can also give you some ideas about what features to look for and eventually add.
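To make a couple of the features above concrete, here is a minimal pandas sketch that derives "total discounts / total customer spend" and a per-client order count from raw order rows. The frame and column names are hypothetical, invented for illustration, not taken from my actual dataset:

```python
import pandas as pd

# Toy order data; columns are hypothetical, not from the real project
orders = pd.DataFrame({
    'client_id': [1, 1, 2, 2, 2],
    'revenue':   [100.0, 250.0, 80.0, 40.0, 60.0],
    'discount':  [10.0, 0.0, 8.0, 0.0, 12.0],
})

# Aggregate raw orders up to one row per client
features = orders.groupby('client_id').agg(
    total_spend=('revenue', 'sum'),
    total_discount=('discount', 'sum'),
    order_count=('revenue', 'size'),
)

# "Total discounts / total customer spend" from the feature list above
features['discount_ratio'] = features['total_discount'] / features['total_spend']
print(features)
```

The same groupby-and-ratio pattern extends to most of the per-client features in the list, such as support tickets per month of activity.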
Jupyter Project Structure

Now let’s set up the project folder so we can stay organized. There are three main parts to the Jupyter part of the project. In / I have the main notebook churn_project.ipynb, in /config I have the database connection params config.json, and in /sql are the queries to be referenced later.

Starting with SQL

Because writing SQL in Python/Jupyter is a hassle, I suggest using an SQL editor such as Postico to connect to your DB and write all your queries before you use them in the construction of your model’s core dataframe. Also save the connection information to connect to your DB through the Jupyter notebook. Writing your queries here will save you a lot of time.

I’m using PostgreSQL, so examples will be referenced in that language. I won’t be including the queries I wrote for this project, however here are a few tips I can offer to help you save time and frustration when you’re writing your queries.

PostgreSQL Tip 1: Common Table Expressions

Because you’ll be trying to pull inferences out of your data, you’ll probably get bogged down in subquery hell, with nesting after nesting of your SQL code. This is not only annoying but unnecessary. With common table expressions (CTEs), you’re able to chunk your queries into sections and then reference them sequentially as you work through your project. They are incredibly helpful and will save you time. CTEs are able to be referenced within each other, however you cannot reference a CTE until it has been defined. E.g. you can reference cte_1 in cte_2, but not cte_2 in cte_1. You can see in the final SELECT of the CTE example shown earlier that I’ve referenced cte_1 and cte_2 successfully because they were both previously defined in the statement.

PostgreSQL Tip 2: Built-in Time Zones

If you’re dealing with data from different time zones and you want a single source of truth, use the built-in pg_timezone_names to move all the times to UTC. With the AT TIME ZONE code shown earlier you’re able to clean your column easily, without importing your own timezone table.
By placing all the date/time references at UTC you’re able to compare dates correctly.

PostgreSQL Tip 3: SUM + CASE + WINDOW Functions

In PostgreSQL you’re not able to nest aggregate functions: SUM(COUNT(table.foo)) will throw an error. You’re able to get around this restriction by using aggregate functions with a CASE and/or a window function. In my case, I used them to find a sequential sum of unique events for a client. This is especially helpful if you’re trying to feature engineer from sequential time series sales/ordering data. E.g. imagine you track every time a client purchases a unique item and you want to know how many they have purchased as of a given date. The query is the SUM ... OVER (PARTITION BY ...) expression shown earlier, and the result is the running unique_event_count column in the second table.

The SUM function will create a running total, but requires CASE to return +1 every time it comes across a unique purchase. The key to the formula is the OVER (PARTITION BY ...) — our window function — coupled with ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. By combining these two methods, we’re able to use the aggregate function without using a GROUP BY in the query. Instead, we’re using a grouping in the window function — which you can think of as a pseudo subquery — so it is evaluated line by line within our group. Therefore, our data is not rolled up as it would be with a usual aggregate function. If you want to see further examples of this, check out this article here. In the query I SUM every line from the beginning of the group — which is by client id — until the window function finds a new client id, and then the function starts over again. This will return a new value every time the client has purchased a new product or has performed a unique_event they had previously not performed.

Save Your Files and Configs

Save your queries as .sql and put them in /sql. As well, set up your config.json file so it has the values needed to connect to your DB.
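As a cross-check, the running unique-purchase count that the window function produces can be reproduced in pandas with a grouped cumulative sum. A sketch using the toy rows from the example table:

```python
import pandas as pd

# The example rows from the unique_event table above
df = pd.DataFrame({
    'client_id':    [1, 1, 1, 1, 1, 1, 1, 2],
    'item_id':      ['a', 'a', 'b', 'c', 'a', 'd', 'a', 'b'],
    'unique_event': [1, 0, 1, 1, 0, 1, 0, 1],
})

# Equivalent of SUM(...) OVER (PARTITION BY client_id ... CURRENT ROW):
# a per-client running total that restarts for each new client
df['unique_event_count'] = df.groupby('client_id')['unique_event'].cumsum()
print(df)
```

Client 1's count steps up to 4 as new items appear, and client 2 restarts at 1, matching the SQL result table.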
I’ll be referencing this file within Jupyter to save the query results as dataframes.

Move on to Jupyter

Start a new notebook in / and test the connection with the psycopg2 code shown earlier. If everything went well with the connection, start saving each query as its own dataframe. Now let’s start to put together the queried dataframes into a final dataset we can use in our model. Quickly, I want to highlight the Scratchpad extension for Jupyter. It has saved me a lot of time, and provides an area for working through problems before you put them into the notebook. Example of the Scratchpad

Further Feature Engineering

I could have done the merging directly in our SQL editor, but I wanted to save some of it for Python. If you want to do it all with SQL, try out the CTEs shown above and combine them all at the end. I used the pandas method .merge() on all the queries we saved as dataframes, and here are a few things to consider:

- Choose what columns you want to keep in the merge, as some of them might become redundant;
- Create a new merged dataframe so you’re able to reference your previous dataframes;
- Use on rather than left_on or right_on for simpler joining, and;
- Make sure your index columns between dataframes have the same name, or else you’ll have to deal with duplicate columns later.

Intro to Binning and Labelling

For the final model built below, all of the inputs are real numbers or integers, and the columns with strings or categories were labelled and encoded with integers so they could be inputted. Additionally, I wanted to bin some of the larger integer values, to rein in the larger values and variability into something more manageable.

Now to start binning and labelling some of our variables. For features such as total spend, which are unbounded real positive numbers, I could leave them as they were, but my hypothesis was that customers fall into distinct categories based on the amount spent. E.g. if total spend is less than $10,000, label this as 0, and customers from $10,000 to $20,000 would be labelled as 1. However, these bins would be different for your data. This is easily done with pd.cut(), where you can bin and label values into discrete intervals within the same method.

Similar to binning above, I need to take categorical data — such as service plan type or city — in my data and convert it to 0, 1, 2, etc. so it can be processed by our model. First make the column of type category by calling df['foo'].astype('category'). Next, I use .cat.categories on the column to get an indexed list of the categories, which we can enumerate to integers mapped in a dictionary. We then assign the values in our dictionary to the column we’re factorizing. Finally, to convert True or False values imported from SQL — should there be any — into 1s and 0s, just multiply the column by 1. Also, make sure the column is of boolean type. Now you’re able to input boolean values as integers into the model.

Transforming and Rescaling Our Data

Now that I have all the bins needed, I can consider rescaling data in the event standardized or normalized data creates a better model. There are a few examples here on when to transform your data. Big values accumulating in your network are not good. Just as I binned everything above, I want to keep our input to the model nice and clean. For example, when I normalized my data I saw about a 5% improvement in the accuracy of my final model, however when I standardized it there was a substantial decrease in accuracy. The key takeaway here is you don’t know how some models will react to transforming the data, so try a few out and see how the results change.

Checking for Multicollinearity

Before I move on to modelling with the final dataframe, I want to check for multicollinearity within the churn features I’ve chosen. This is when two or more of the predictors in a model are moderately or highly correlated.
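Returning to the binning and encoding steps above, here is a minimal runnable sketch. The spend figures mirror the earlier example output; the city names are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({'total_spend': [55000, 12000, 99000, 5000, 1000],
                   'city': ['Oslo', 'Lima', 'Oslo', 'Cairo', 'Lima']})

# Bin unbounded spend values into labelled discrete intervals
bins = [0, 10000, 20000, 50000, 100000, 500000]
labels = [0, 1, 2, 3, 4]
df['spend_bin'] = pd.cut(df['total_spend'], bins=bins, labels=labels,
                         include_lowest=True)

# Encode categorical strings as integer codes
df['city'] = df['city'].astype('category')
city_codes = dict(enumerate(df['city'].cat.categories))  # code -> name
df['city_code'] = df['city'].cat.codes
print(df)
```

Keeping the city_codes dictionary around lets you map the integer codes back to readable names when you present results.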
This will throw a wrench into our model, as our weights will be unstable and will vary greatly from one sample to the next. Even though I’m building a neural network, the estimation of the optimal weights for each node in the network could be subject to the harmful effects of collinearity, resulting in a model with poor predictive ability. Consider creating a regression model where you want to estimate housing prices (y), and two of the inputs were number of bathrooms (x_1) and number of toilets (x_2). Obviously, these two variables will ‘move together’ — unless there are bathrooms in houses that don’t have toilets? — because they are highly correlated. Therefore the model will have a hard time understanding which of x_1 or x_2 is responsible for change in y. So the coefficient estimates for both of the inputs will be incorrect. I can check the almost-final dataframe for multicollinearity using a Variance Inflation Factor (VIF) on each feature we’re considering testing. The result of a VIF will show how much higher the value of a coefficient’s estimator will be when x_1 and x_2 are correlated compared to when they are uncorrelated. Each input is given a VIF score, and high VIFs are a sign of multicollinearity. Here are some quick guidelines for determining which features to keep and which ones to remove. Here is the code to look through your dataframe, and determine the features which display multicollinearity. The add_constant function was used to correctly calculate the VIF, as variance_inflation_factor expects the presence of a constant in the matrix of explanatory variables. Given the results above, feature_1 and feature_2 should be removed from the dataframe. The Final Result Before We Model Feature engineering is now complete and the final dataframe should be ready to train the model. Below is just an example of what the df should be looking like.
Two major features you need to remove from the dataframe are the client ids (c_id), so you’re able to retrieve specific clients later on in the Flask app, and whether or not they have churned (churn?), which will be our y later on. Keep the rest of the features as independent features in the df. If a client has churned y=1 and if they are still active y=0. Building The Keras Model For the model, I chose a sequential Keras model, with one input layer and a single output of the percent chance to churn. The model is defined by a linear stack of layers — defined on creation — with various activation functions for each layer. A visualization of a Keras model Let’s go over some of the major points of building a sequential Keras model. Units: The number of neurons for each layer of the model. Input / Output Dim: A positive integer, which is the dimensionality of the input/output space. E.g. if the churn model has 21 features, the units would be 21 for the first layer’s input and 1 for the final output layer, which is the percent probability of churn. Activation: A function applied to the “weighted sum” of the inputs to the neurons on a layer, to decide if each should be “fired” or not. Depending on the function used, you’ll have different results for your neural net. Because I’m doing a classification problem, I want to use a sigmoid function for the final layer, but not for each layer. If I did this, vanishing gradients would be added to the model. Not good. Because the derivative of the sigmoid function will always be smaller than one, adding more and more layers with this activation function will produce weighting values that converge to zero quite quickly. This is also a problem with tanh activation functions. This means if we use sigmoid for each activation, our first layer will map to a much larger input region than our second, third, fourth, etc. This results in inputs from the first layer that have little effect on the final layer.
It’s best to find activation functions that fit the type of problem trying to be solved, and experiment during the hyperparameter tuning phase. Dropout: To prevent overfitting, we randomly drop some of the neurons’ outputs every time we move to a new layer. This can be applied to visible or hidden layers. Good practice is to use a small dropout value of 20% — 50%, as too low a value will not have a substantial effect on results. Loss / Cost Function: This function represents the cost (loss) the model pays for an inaccurate prediction. For linear regression the classic is mean squared error (MSE), and for our classification model we’ll use binary cross-entropy, also known as log loss. The cross-entropy loss is fairly standard for binary classification, as it provides fast learning when y_hat (predicted values) differs significantly from our y (labels or true values). As well, this function especially penalizes the optimization of the network when highly confident guesses — both for success and failure — are wrong. If you’re interested in seeing how some loss functions don’t quite look the way you think they would, check out the site lossfunction. You won’t feel so bad when yours comes out a bit … wonky. I’ll visualize this model’s loss function below. Optimizer: These algorithms will minimize — or maximize, depending on the problem — the above cost function, given a set of inputs. Consider what occurs when gradient descent is used when training a neural network, as the minimum found along the gradient will determine the weight update for each neuron during back-propagation. The choice of an optimizer will determine how updates occur on the weights, and ultimately how we classify our inputs to produce an output percentage for our model. Metrics: How do we know the model is correct and how do we score it? Keras provides a few out-of-the-box metrics such as mean absolute error, accuracy, binary accuracy, categorical accuracy, etc.
While similar to a loss function, the results are not used when training the model. Testing the Model Because I’m trying to classify whether clients are at risk of churning or not, I’ve used a sigmoid activation function as my output layer. I’ve also used a high dropout because I don’t want to overfit the model, along with randomly splitting the dataset into a train and test set. I also wanted to implement a ROC as the output metric, as well as just looking at the accuracy. There is a great post on Stack Overflow here that provides the sample code for creating ROC logs at each epoch of testing. By defining the Keras model as a function with loss, optimizer, and metrics as parameters, you’re able to quickly tune different parts of the model, making it deeper or wider on the fly. Save each model as a different function so you’re able to quickly compare outputs. E.g. if you add more layers — making it a deeper model — replicate the function and name it keras_model_extra_layers(), and save the new model. Let’s move on to training the model, and determining how good it is. Before we move on to the ROC, let’s look at the accuracy and loss of the model. To plot these, re-run the model, but this time save the output. The history object stores all of the accuracy and loss metrics at each epoch for us to graph. I’ve defined accuracy_loss_graph() which takes the history object as a parameter, and plots the graphs. What can you learn from each of them? Accuracy (left) and loss (right) For the left graph, we can see the difference between our training and validation accuracy. The closer our test line is to the training line, the less overfitting there is. The farther they diverge, the more overfitting is apparent in the model. If we started to see more overfitting, we would want to increase regularization of our data — stronger L2 weight penalty, add more dropout, etc. — or put in more data for the model to crunch.
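The workflow just described — a build function taking loss, optimizer, and metrics as parameters, and a fit call whose history object holds the per-epoch accuracy and loss — can be sketched roughly as follows. This assumes tensorflow.keras; the layer sizes, dropout rates, and the random stand-in data are illustrative, not the article's exact architecture:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def keras_model(loss='binary_crossentropy', optimizer='adam',
                metrics=('accuracy',)):
    # Sequential stack: sigmoid only on the final layer to avoid
    # vanishing gradients in the hidden layers.
    model = keras.Sequential([
        keras.Input(shape=(21,)),          # 21 churn features
        layers.Dense(32, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(16, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(1, activation='sigmoid'),  # percent chance of churn
    ])
    model.compile(loss=loss, optimizer=optimizer, metrics=list(metrics))
    return model

# Random stand-in data; fit returns a History object whose .history dict
# holds per-epoch loss/accuracy for both the train and validation splits.
X = np.random.rand(200, 21)
y = np.random.randint(0, 2, size=200)
history = keras_model().fit(X, y, epochs=3, validation_split=0.2, verbose=0)
```

A deeper or wider variant is just another function with more layers or units, which makes side-by-side comparison straightforward.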
Another case which would need to be corrected is when the validation accuracy perfectly tracks the training accuracy. This indicates the model needs more features to work with, so go back to feature engineering and increase the input parameters of your model. What you want to see is a convergence between the two lines. What about the right loss graph? Why is the training loss higher than the testing loss? This is a sign of slight overfitting. And this is even with dropout added to the model. However, because the gap between the train and test lines is not massive, I will keep this model. However, if this was a larger project or the gap between training and test was substantially different, I would try and tune the hyper-parameters to get a better fit. Here is the final epoch output for the Keras model. At a validation accuracy of 85.29% it looks like we did well with the model. Nice. But wait, is accuracy even the right measure here? Accuracy can be thought of as how ‘correct’ the model is. However, accuracy can be misleading if it is the only metric considered during model selection. It might seem desirable to select a model with high accuracy, as 85% might seem like a great result! However, a model with lower accuracy might have greater predictive power. Why is that? Consider a problem with large classifier imbalances — where 95% of the results are 1s and 5% are 0s — your model might be able to predict the value of the 1s but not do well at the 0s. This is called the accuracy paradox, where “… predictive models with a given (lower) level of accuracy may have greater predictive power than models with higher accuracy.” Because of this, metrics like precision and recall may be better at evaluating the model. Let’s take a look at the previous scores, with the validation accuracy at 85.29% and the validation ROC score at 89.40%. Are those good? Take the ROC curve below.
If I had a skewed dataset rife with classifier imbalances, I would see quite different values for the ROC AUC and the upcoming Precision-Recall (PR) curve AUC. We can use these results above as baselines when looking at our PR graph. What is this telling us? A ROC curve on its own is not a good visual illustration for highly imbalanced data, because the X axis of the ROC curve, which is the False Positive Rate ( False Positives / Total Real Negatives ), does not change drastically when the Total Real Negatives is huge. Therefore if we had an imbalanced data set, we would not know just from this graph. Whereas Precision ( True Positives / (True Positives + False Positives) ) is sensitive to an increase in False Positives and will not be impacted by more real negative values added to the denominator. This would be shown in the PR Curve. Moving on to the PR Curve Now that we have a baseline to work with, I’ll construct the PR curve. The PR curve will help us answer the question “What is the probability that this is a real classification of an unknown object, when my classifier says it is?”. Since I’m modelling churn this seems like a good metric to understand. First, I’ll look at a confusion matrix as well as the sklearn classification_report, which will give us a better idea of our model’s precision and recall scores for each class. Let’s take a look at what all these values mean: Precision: When our model predicts yes, how often is it correct? This is shown by the equation P(y=1 | y_pred=1) Recall: When the actual value is yes, how often does the model predict yes? This is shown by the equation P(y_pred=1 | y=1) F1-score: The harmonic mean of precision and recall, where 1 is perfect precision and recall and 0 is the worst. We can see the averages of our scores are all 85%; however, our model is better at classifying 1s — clients who have churned — than 0s.
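A minimal sketch of the confusion matrix and classification report with sklearn; the labels and predictions below are illustrative stand-ins for y_test and the model's rounded predictions:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Illustrative ground truth and predictions (1 = churned, 0 = active).
y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]

# Rows are true classes, columns predicted: [[TN, FP], [FN, TP]].
print(confusion_matrix(y_true, y_pred))

# Per-class precision, recall, and F1-score in one table.
print(classification_report(y_true, y_pred))
```

Reading the per-class rows (rather than only the averages) is what surfaces the 1s-vs-0s asymmetry discussed above.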
So while our two classes are imbalanced based on the number of observations in the dataset, they are not overly imbalanced. I can see from the classification report that we’re not falling into the accuracy paradox. We can further confirm this with a PR curve. Confirming our hypothesis With the PR curve providing a high AUC of 93%, I can be assured our model handles the class imbalance and will predict well. So Which One To Use? ROC? PR? Use ROC when detection of either class is equally important, and you want equal weight given to both classes’ prediction ability. However, it will become an untrustworthy metric, especially when the data is imbalanced or skewed. A classifier which optimizes ROC may not also optimize the area under the PR curve. With a high imbalance of observations, consider focusing on maximizing the PR area. I also wanted to look into how different models would perform vs. the Keras model. I made two different Keras models, one with an Adam optimizer and one with an RMSprop optimizer. And, a vanilla logistic regression and a vanilla random forest classifier. Actually, it was quite disappointing, as the random forest classifier ended up performing better than both versions of the Keras model I made. These things happen, so the lesson here is don’t get too attached to your model, as something else might come along and outperform it. However, because the second part of the article is about using a Keras model in production, I will continue to use it for the rest of the project. The comparisons between models are for illustrative purposes and to better understand how to select a model. And now for the PR curve to see how our original model stacks up. Looks like the Keras models have again lost to the Random Forest. But there is another metric to look at. Scoring The Model(s) However, as this is a classification model, how good our model is comes down to the correctness of outcomes. Therefore, I need to choose a metric that maximises some utility function. Let’s look at a Brier score.
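sklearn exposes this metric directly as brier_score_loss; the outcomes and predicted probabilities below are illustrative:

```python
from sklearn.metrics import brier_score_loss

# Compare predicted churn probabilities against actual outcomes:
# 0 is a perfect score, 1 is the worst.
y_true = [0, 1, 1, 0, 1]
y_prob = [0.1, 0.9, 0.8, 0.3, 0.6]

score = brier_score_loss(y_true, y_prob)
print(round(score, 3))  # 0.062 — the mean squared gap between prob and outcome
```

Computing this for each candidate model gives a single comparable number for how well calibrated its probabilities are.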
With scoring, look at the number of times that a specific probability was provided and compare it with the actual proportion of times the event occurred. If the actual percentage was substantially different from the stated probability, we have a poor model. The best possible Brier score is 0, while the worst score is 1. Scores which hover around the middle — between .4 and .6 — are hard to interpret. What are the four models above scoring at? Looks like our random forest is again scoring better than the Keras model(s). Actually Predicting Something … Now that I have a model I’m mostly comfortable with, I need to actually predict something. This is the key part of the project after all. In this case, I want to see the percentage chance of churning for the client who is in row 2222. At the beginning of the article I saved the client ids to a dataframe — ids = df[‘ids’] — so we can find which client is associated with a certain row. Use a nested array within .iloc and return the values of a specific row to see what the result would be if those features were used to determine churn. So the client 7153 has a 78.88% chance of churn. Looks like everything is working right now; the model is trained and evaluated, so I can move on to making the web app in Flask so the model can be used by others. Save the model as churn_model.h5 and … finally … this part of the project is done. If you want to look at additional pre-trained Keras models for other projects, check out this repo here. Sometimes the data is not available for what you need to train, so you might as well use what is public to save time and frustration. Final Thoughts That was a lot to go over, but I hope you have a better understanding of how to approach a data science project from the start: what to think about, how to get your data, and the initial steps in modelling. I wanted to make sure I went over some of the minor details and evaluation steps which could get missed during the setup and modelling process.
As always, I hope you’ve learned something new. The final repo for the project will be posted in the second part, and part 2 will be posted soon. Additional Reading How to Predict Churn: A model can get you as far as your data goes - Blendo In the latest post of our Predicting Churn series articles, we sliced and diced the data from Mailchimp to try and gain…www.blendo.co How We Built Our Machine Learning Model for Churn Prediction Want to learn about the inner workings of our machine learning model for predicting app churn? Our data scientists…www.urbanairship.com What makes predicting customer churn a challenge? Staying on top of customer churn is an essential requirement of a healthy and successful business. Particularly, most…medium.com https://www.quora.com/How-do-I-develop-model-for-customer-churn
Building a Churn Model with Keras, Flask, Heroku, and Postgres — Deploying a Usable Model to…
12
building-a-churn-model-with-keras-flask-heroku-and-postgres-deploying-a-usable-model-to-161b8cf19830
2018-07-13
2018-07-13 20:41:09
https://medium.com/s/story/building-a-churn-model-with-keras-flask-heroku-and-postgres-deploying-a-usable-model-to-161b8cf19830
false
6,500
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Robert R.F. DeFilippi
Sometimes Chef ◦ Sometimes Data Scientist ◦ Sometimes Developer
8e46cdd91cd4
rrfd
211
131
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-19
2017-11-19 13:51:49
2017-11-19
2017-11-19 13:57:22
2
false
en
2017-11-19
2017-11-19 13:57:22
2
161c50487125
1.028616
3
0
0
The characteristic feature in all the discrete distributions is that the random variable X is discrete. The possible outcomes are distinct…
5
The basics of continuous probability distributions The characteristic feature in all the discrete distributions is that the random variable X is discrete. The possible outcomes are distinct numbers, which is why we called them discrete probability distributions. Have you asked yourself, “what if the random variable X is continuous?” What is the probability that X can take any particular value x on the real number line which has infinite possibilities? For a continuous random variable, the number of possible outcomes is infinite, hence, P(X = x) = 0. For continuous random variables, the probability is defined in an interval between two values. It is computed using continuous probability distribution functions. Learn more about these fundamentals in Lesson 41. Lesson 41 - Struck by a smooth function Review lesson 32. If you assume X is a random variable that represents the number of successes in a Bernoulli sequence…www.dataanalysisclassroom.com If you find this useful, please like, share and subscribe. You can also follow me on Medium and Twitter @realDevineni for updates on new lessons.
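A quick numeric check of these statements, using the standard normal purely as an example distribution (scipy assumed):

```python
from scipy.stats import norm

# For a continuous X, probability lives on intervals, not points:
# P(a < X <= b) = F(b) - F(a), where F is the CDF.
a, b = -1.0, 1.0
p_interval = norm.cdf(b) - norm.cdf(a)
print(round(p_interval, 4))  # 0.6827 — the familiar one-sigma mass

# A single point carries zero probability: P(X = 0.5) = 0.
print(norm.cdf(0.5) - norm.cdf(0.5))  # 0.0
```

The same cdf-difference pattern works for any continuous distribution in scipy.stats.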
The basics of continuous probability distributions
7
the-basics-of-continuous-probability-distributions-161c50487125
2017-11-19
2017-11-19 18:32:49
https://medium.com/s/story/the-basics-of-continuous-probability-distributions-161c50487125
false
171
null
null
null
null
null
null
null
null
null
Statistics
statistics
Statistics
5,433
Naresh Devineni
Naresh Devineni is an Associate Professor in the Department of Civil Engineering at The City University of New York’s City College. http://nareshdevineni.com
53ffd7b0a59e
devineni
34
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-20
2018-07-20 03:34:43
2018-07-20
2018-07-20 03:35:54
0
true
en
2018-07-21
2018-07-21 11:26:00
3
161c6e2749c2
1.550943
0
0
0
We are fast approaching the day when robots are equipped with artificial intelligence, or AIs as they are so popularly called. AIs can do…
3
Artificial Intelligence (AI) Awareness We are fast approaching the day when robots are equipped with artificial intelligence, or AIs as they are so popularly called. AIs can do ALL routine things and adapt or adjust to situations much better than humans under specific conditions. I cannot help but imagine how the world will be changed in the next few years. Flashback — In the late 1980s, my dad brought home a mobile communication device. We all went “oooh…aaahh”. Fast forward to 2018: my dad, who is now in his late 70s, was still using his Nokia 2G mobile phone until recently. I introduced him to a new Samsung smartphone so he can make cheap long-distance WhatsApp calls to me, as he lives overseas. After much difficulty, coaching, and teaching, he is still limited to making voice calls only. *sigh* Imagine a few years from now, in this world that we live in, there are two kinds of awareness. The awareness of humans and the pseudo-awareness found in the robots we have created. The latter’s awareness, of course, is pure imagination at this moment as I write this article. However, for the sake of the discussion, just assume we have created awareness for these AIs. I believe this is not a science fiction story anymore, as more money from different kinds of businesses and institutions is injected into funding for research in this area. Not too long ago, Facebook had to shut down a program (robot) that started writing its own language for communicating with itself. If I can boldly assume that this is just a temporary setback for the AIs, it’s a matter of when and not if the AIs will gain awareness of their own existence. How will their awareness be different from ours? Five years from now, robots will probably be as common as a microwave or refrigerator in many families. How much further will efficiency and demand have to go for the industry to create a robot with intelligence?
If mobile communication took only 30 years to evolve to its current state, then surely we are not far now from such robots. I wonder if my dad will be able to imagine that happening? ‘Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is — his good, pleasing and perfect will.’ Romans 12:2
Artificial Intelligence (AI) Awareness
0
artificial-intelligence-ai-awareness-161c6e2749c2
2018-07-21
2018-07-21 16:37:54
https://medium.com/s/story/artificial-intelligence-ai-awareness-161c6e2749c2
false
411
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
James Khow
null
16969c67d0c9
jameskhow
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-25
2018-01-25 05:54:29
2018-01-25
2018-01-25 05:56:24
4
false
en
2018-01-25
2018-01-25 05:56:24
3
161d190e8fea
2.20566
1
0
0
Machine Learning is one of the most popular approaches in Artificial Intelligence. Over the past decade, Machine Learning has become one of…
5
Introduction To Machine Learning K-Nearest Neighbors (KNN) Algorithm In Python Machine Learning is one of the most popular approaches in Artificial Intelligence. Over the past decade, Machine Learning has become one of the integral parts of our life. It is implemented in tasks as simple as recognizing human handwriting or as complex as self-driving cars. It is also expected that in a couple of decades, many of the more mechanical, repetitive tasks will be taken over. With the increasing amounts of data becoming available there is good reason to believe that Machine Learning will become even more prevalent as a necessary element for technological progress. There are many key industries where ML is making a huge impact: financial services, delivery, marketing and sales, and health care, to name a few. However, here we will discuss the implementation and usage of Machine Learning in trading. In this blog, we will give you an overview of the K-Nearest Neighbors (KNN) algorithm and understand the step-by-step implementation of a trading strategy using K-Nearest Neighbors in Python. K-Nearest Neighbors (KNN) is one of the simplest algorithms used in Machine Learning. KNN algorithms use data and classify new data points based on a similarity measure (e.g. a distance function). Classification is done by a majority vote of a point’s neighbors. The data is assigned to the class which has the most nearest neighbors. As you increase the number of nearest neighbors, the value of k, accuracy might increase. Now, let us understand the implementation of K-Nearest Neighbors in Python in creating a trading strategy. 1. Import the Libraries We will start by importing the necessary libraries. We will import the pandas library to use the features of its powerful dataframe. We will import the numpy library for scientific calculation. Next, we will import the matplotlib.pyplot library for plotting the graph.
We will import two machine learning libraries KNeighborsClassifier from sklearn.neighbors to implement the k-nearest neighbors vote and accuracy_score from sklearn.metrics for accuracy classification score. We will also import fix_yahoo_finance package to fetch data from Yahoo. 2. Fetch the Data We will fetch the S&P 500 data from yahoo finance using ‘pandas_datareader’. We store this in a data frame ‘df’. After this, we will drop all the missing values from the data using ‘dropna’ function and print the first five rows of column ‘Open’, ‘High’, ‘Low’, ‘Close’. Output: (Read more)
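The pipeline sketched in the two steps above can be condensed as follows; synthetic price data stands in for the Yahoo download so the sketch runs offline, and the features and k value are illustrative choices rather than the tutorial's exact strategy:

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic OHLC-style series in place of the fetched S&P 500 data.
rng = np.random.default_rng(0)
close = 2500 + rng.normal(0, 10, 500).cumsum()
df = pd.DataFrame({'Open': close + rng.normal(0, 2, 500), 'Close': close})

# Simple features: open-close spread and daily return.
df['Open-Close'] = df['Open'] - df['Close']
df['Return'] = df['Close'].pct_change()
df = df.dropna()

# Target: +1 if tomorrow's close is higher than today's, else -1.
y = np.where(df['Close'].shift(-1) > df['Close'], 1, -1)
X = df[['Open-Close', 'Return']]
X, y = X[:-1], y[:-1]  # the last day has no "tomorrow" to label

# Chronological split (shuffle=False) to avoid look-ahead leakage.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=False)
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
acc = accuracy_score(y_test, knn.predict(X_test))
print('test accuracy:', acc)
```

Raising or lowering n_neighbors is the main dial to experiment with, as the tutorial notes.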
Introduction To Machine Learning K-Nearest Neighbors (KNN) Algorithm In Python
1
introduction-to-machine-learning-k-nearest-neighbors-knn-algorithm-in-python-161d190e8fea
2018-05-27
2018-05-27 16:47:38
https://medium.com/s/story/introduction-to-machine-learning-k-nearest-neighbors-knn-algorithm-in-python-161d190e8fea
false
399
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
QuantInsti®
QuantInsti is an Algorithmic Trading Training institute focused on preparing professionals and students for HFT & Algorithmic Trading.
42079579cd65
QuantInsti
379
138
20,181,104
null
null
null
null
null
null
0
null
0
8e9bde78121d
2018-09-18
2018-09-18 06:19:53
2018-09-18
2018-09-18 15:18:42
3
false
en
2018-09-21
2018-09-21 07:57:07
1
161db79763a4
2.399057
1
0
0
Written by MJ Nama
5
JCCI Applied Analytics Workshop: Towards Building Leaders and a Better Philippines Written by MJ Nama In its effort to continuously bring industry experts to help build leaders and a better Philippines, John Clements Consultants organized a workshop, entitled “Applied Analytics for Competitive Advantage: Data Strategies for Disruptive Impact,” last September 5 & 6, 2018. This two-day learning session facilitated by two extraordinary professors — Professor Ikhlaq Sidhu of U.C. Berkeley and Professor Paris de l’Etraz of IE Business School — was well-attended by our clients and partners from different industries. On the first day, the professors focused the discussion on artificial intelligence (AI), machine learning, and data science — the data-driven business model, the power of utilizing data in business projects, and successful innovation strategies. To aid in the discussion, related case studies from different industries were also presented. The professors made sure that, aside from learning new insights, the attendees would make a significant effort to participate in the activities. They helped the participants outline roadmaps for their respective organizations, in alignment with reshaping businesses with data and AI. The professors capped the first day with machine learning algorithms and open source tools for data science. On the second day, Professors Sidhu and de l’Etraz talked about AI and industries. They provided an overview of industries that are currently being disrupted and how they are being disrupted. To further explain this, they did an industry focus on retailers and malls and shared where the disruption is coming from. Moreover, they also touched on agile leadership and jobs that AI would create. I would like to share some of my key takeaways from the insightful two-day learning session. In order to keep up with this changing world, companies and leaders should think and act like a startup. 
There’s a big difference between customer centricity and building relationships with customers. Customer centricity helps sell more products, but empathy helps build relationships with customers. Innovation, which translates to customer experience, is primarily based on data. Companies must find ways to create more partnerships — partnerships that could bring in more customer data. In order to maintain great customer experience, companies should practice the “empty chair” technique — during meetings, leave some chairs empty to represent the customers. Lastly, the professors shared the “Amazon Way” — that it is okay to fail, but if you are going to fail, fail fast. Each organization must allow and encourage failure. People within organizations must explore and experiment, and experiment continuously. And, finally, to remain competitively relevant, collect data every step of the way. Please visit and join the John Clements Talent Community. About the author: MJ Nama graduated at the University of the East with a Bachelor of Science degree in Psychology. This new vlogger is passionate about music and writes songs during her free time. She also works as a part-time TV Host at TV5, model, and events host.
JCCI Applied Analytics Workshop: Towards Building Leaders and a Better Philippines
6
jcci-applied-analytics-workshop-towards-building-leaders-and-a-better-philippines-161db79763a4
2018-09-21
2018-09-21 07:57:07
https://medium.com/s/story/jcci-applied-analytics-workshop-towards-building-leaders-and-a-better-philippines-161db79763a4
false
490
Discover Your Full Potential with Looking Glass, a Publication from John Clements
null
johnclementsph
null
John Clements Lookingglass
jcdigitalrenewal@gmail.com
the-looking-glass
LEADERSHIP,CAREERS,MANAGEMENT AND LEADERSHIP,PROFESSIONAL DEVELOPMENT,PERSONAL GROWTH
JohnClementsPH
Data Science
data-science
Data Science
33,617
Shiela Manalo
Writer|Graphic Artist|Video Editor|Musician
8dbb2651e54f
iamsimone02
64
25
20,181,104
null
null
null
null
null
null
0
**Zen-mode applause** If you’re interested in learning more about data analytics, please give a little applause. Comments with any thoughts or topics of interest are also welcome — thank you, everyone!
1
null
2018-08-05
2018-08-05 09:24:25
2018-08-05
2018-08-05 11:53:39
2
false
zh-Hant
2018-08-06
2018-08-06 11:37:30
6
161febbae970
0.681447
7
0
0
First, think about two findings — how would you interpret them?
4
Why care about data sources? On the pitfalls of event-tracking data First, think about two findings — how would you interpret them? 1. When analyzing X-page-to-Y-page scenarios, data appears showing page X jumping to page X? 2. The registration flow is A->B->C, yet page C’s view count is higher than page B’s? (You can skip to the end of the article for the answers.) Last week I went to Hong Kong for a wander; the HMV flagship store in Causeway Bay is impossibly good for shopping :) Data analysis, literally, consists of two parts: data and analysis. That sounds like a truism, yet it is exactly what most people overlook. Most people pay far more attention to the analysis than to the data itself, which creates the biggest misconception in data analysis: not caring where the data comes from, and doing a lot of work for nothing. Recently I have been analyzing event-tracking logs. Because of the complexity of instrumentation, the multiplicity of product lines, and the diversity of user behavior, validating the data consumed an enormous amount of effort. Even with experience running Google ads and using GA, I still ran into headaches several times. So, through recent real examples of doing everything possible to avoid the pitfalls, let’s get a feel for why caring about data sources matters. Why instrument events? First consider the most fundamental question: what are the benefits of event tracking? Or put differently, what problems arise without it? Isn’t GA good enough? I think it depends on your role, or how far you want the analysis to go. For someone responsible for marketing placement or website optimization, GA or any third-party analytics platform already satisfies their needs and saves the company effort. However, when tracing the full funnel end to end, or serving diverse analysis needs, you run into trouble. The analysis problem: users’ search and browsing behavior lives in the analytics backend, while registration or payment data lives in the internal database; at analysis time you cannot merge the dimensions across the two sources, or merging is very difficult. You can only do half the job, which means you can only see half the truth. The platform problem: in China, GA requires getting over the firewall, so data is undercounted; GrowingIO is strong at fine-grained behavioral analytics and claims codeless instrumentation — but codeless instrumentation only works if the code follows its framework’s conventions, otherwise events fail to report or report incorrectly, and developers cannot necessarily change their coding style. Moreover, fine-grained behavioral analytics itself takes time to learn in practice. Flip over the face-down trap card… Data holds many unexpected pitfalls. The scariest pitfalls are not missing data, insufficient data, or gaps (Null\NaN), but data that looks complete while being riddled with traps. Check the gap against the base tables — tracked events do get lost. Reason 1: the event was never instrumented for that product; Reason 2: the product was instrumented, but a few entry points were missed; Reason 3: loss during the request/response round trip; Reason 4: the site was redesigned and the tracking data was dropped. Many possibilities lead to instrumentation mistakes, which lead to wrong data, which lead to wrong analytical conclusions. An example: a new version of the mobile site goes live, and once the numbers stabilize, the data says — hey, this new site performs well, and users arriving through one particular entry point convert to paid at an especially high rate; the product manager gets ready to push for resources for a big promotion. Then the bad news arrives! When instrumenting the events, the engineer had lumped another entry point into the same event — in other words, the denominator should be larger, so the conversion rate is actually lower… Strange-looking relationships between fields are not necessarily data errors — confirm the logic by which the data is recorded. For example, a landing page’s visit source (the previous page’s domain) is social media, yet the attributed channel is Google, because the user’s earlier behavior on Google was kept (for n hours); in X-to-Y page scenarios, when the phone is locked and unlocked, the frontend re-sends a page-enter event, producing X-page-to-X-page data; the registration flow A->B->C is the only path, but when the user already has an account, the frontend detects this on page A and skips step B, jumping straight to C — hence page C’s view count is higher than B’s. Conclusion Cross-validate, verify carefully, understand the recording logic, and learn how the product actually operates. Results may differ because different engineers write page code differently; because miscommunication during instrumentation created gaps in understanding; because a developer accidentally instrumented the wrong event or missed one; or because a version iteration changed something. Appendix Optimizations when pulling data: event logs are huge, so for performance, never scan the full table — use partitioned tables (partition function). Build appropriate intermediate data layers according to likely needs or business lines. Build intermediate tables for each funnel stage — for example, one for user browsing behavior and one for registration-to-payment — which helps both performance and analytical flexibility. Many fields are map types; to get the value of a specific key, the Hive syntax is column[‘key’], e.g. context[‘appsdk_version’], and the Presto syntax is element_at(column,’key’), e.g. element_at(context,’appsdk_version’). Event-tracking basics: good articles, worth a look ~ 想看埋点数据?产品经理有必要了解的埋点知识(1) | 人人都是产品经理 This article explains the whole event-tracking process from the perspective of a tracking-system designer, covering tracking basics, purposes, methods, data flows, applications, and management. enjoy~ What is event tracking?…www.woshipm.com 想看埋点数据?产品经理有必要了解的埋点知识(2) | 
人人都是产品经理 This is the second article in the event-tracking knowledge series, covering the collection, transmission, processing, storage, application, and management of tracking data.…www.woshipm.com Data Analysis Series 1: On the many titles in data analysis Data Analysis Series 2: A data analyst’s weekly schedule Data Analysis Series 3: As a data analyst, how do you show your value at work? Data Analysis Series 4: Career planning and offer selection for data analysts Arctic Monkeys right inside the entrance!
Why care about data sources? On the pitfalls of event-tracking data
12
你踩到坑了嗎-談談埋點數據的陷阱-161febbae970
2018-08-06
2018-08-06 11:37:30
https://medium.com/s/story/你踩到坑了嗎-談談埋點數據的陷阱-161febbae970
false
79
null
null
null
null
null
null
null
null
null
Data Analysis
data-analysis
Data Analysis
4,950
邱國欣(Andy Chiu)
I wear two hats: one half of me loves music, fashion, and other subcultures; the other half loves seemingly dry subjects like economics and data analysis. www.linkedin.com/in/chiukuohsin | Facebook: https://www.facebook.com/erlcssont29
9a353a15ebca
erlcssont29
372
59
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-11
2018-01-11 04:29:48
2018-01-12
2018-01-12 21:28:23
4
false
en
2018-01-12
2018-01-12 21:28:23
1
1620c9da47ca
2.413208
0
0
0
So after a couple days of frustrations, everything is compiling and we even managed to triple our score to a respectable 0.75598!
5
Titanic: Juicing the data So after a couple days of frustrations, everything is compiling and we even managed to triple our score to a respectable 0.75598! So how'd we do it? First off, we changed the non-numerical data to several binary variables. Sex became 0 or 1 for male or female. The 'pclass' variable has three outputs: 1st, 2nd or 3rd, which we converted to three binary variables 'pclass_1', 'pclass_2' and 'pclass_3', all set to 0 except for 'pclass_i' if 'pclass' == i for the specific passenger. The same expansion was repeated for 'Embarked' and 'Cabin'. Two passengers were not assigned a port of embarkation, so I assigned the most common port, 'S'. Cabins have the format Lxxx where L is a letter from the set {A,B,C,D,E,F,G,T}. We extracted the letter and ignored the number. It is possible that the number, which represents a placement on the specific level, could have an influence on survival. This is something to consider in future retakes of this contest. The vast majority of the passengers did not have cabins, so they were assigned 0 for all the new Cabin variables 'Cabin_i' for all i in the set of Cabin levels and were assigned the value 1 for the variable 'Cabin_U', as was suggested in the blog by Helge Bjorland (https://www.kaggle.com/helgejo/an-interactive-data-science-tutorial). It is reasonable to consider the layout of the rooms, something I hope to do in two or three blogs. The biggest struggle I had was trying to make graphlab compatible with Pandas. I am following Machine Learning classes on coursera.org which use graphlab. So when I started the competition I defaulted to graphlab. However, in reading people's blogs, I realized that Pandas is super useful. When I started using Pandas, I stumbled upon the following error. It took me a long time to identify which variables were SArrays and which were ndarrays or dataframes or whatever. 
In debugging one problem, dtypes would change and I wasn't sure for a long time what the problem even was. Looking back now, it would have been faster and simpler to have simply restarted exclusively in Pandas, even though I am much more comfortable and familiar with graphlab. Another error I had was that somehow the NaN entries were replaced with empty strings '' so 'SArray.fillna()' would compile but not do what I wanted. All in all, I learnt a lot even though there were a few moments where I felt completely helpless. In the next blog I will add features to the model such as 'title', family size and ticket. I will also group fares paid into several binary variables that represent a range in cost so that the rich can stand on the drowning faces of the under class to a much more realistic degree.
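The binary-variable expansion described above can be sketched in a few lines of pandas. The rows below are made-up stand-ins for the Kaggle data (only the column names follow the competition), and `get_dummies` is used here as one convenient way to produce the 'pclass_i'-style indicators, not necessarily the author's exact code:

```python
import pandas as pd

# Tiny stand-in for the Titanic training frame; values are illustrative.
df = pd.DataFrame({
    "Sex":      ["male", "female", "female"],
    "Pclass":   [3, 1, 2],
    "Embarked": ["S", "C", None],
    "Cabin":    ["C85", None, "E46"],
})

# Sex becomes 0 or 1.
df["Sex"] = (df["Sex"] == "female").astype(int)

# Fill the missing port with the most common one, 'S', as in the post.
df["Embarked"] = df["Embarked"].fillna("S")

# Keep only the deck letter; passengers without a cabin become 'U'.
df["Cabin"] = df["Cabin"].str[0].fillna("U")

# Expand Pclass, Embarked and Cabin into binary indicator columns,
# e.g. Pclass_1, Pclass_2, Pclass_3, Cabin_U, ...
df = pd.get_dummies(df, columns=["Pclass", "Embarked", "Cabin"],
                    prefix=["Pclass", "Embarked", "Cabin"])

print(sorted(df.columns))
```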
Titanic: Juicing the data
0
titanic-juicing-the-data-1620c9da47ca
2018-01-12
2018-01-12 21:28:24
https://medium.com/s/story/titanic-juicing-the-data-1620c9da47ca
false
454
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gale Pettus
null
2b867f8b6112
galepettus
41
43
20,181,104
null
null
null
null
null
null
0
null
0
cf0414f4434d
2018-03-22
2018-03-22 09:21:21
2018-03-20
2018-03-20 09:56:09
1
false
en
2018-03-22
2018-03-22 10:26:07
3
1621550134a1
3.562264
0
0
0
Organizations across the globe are putting serious efforts to improve their sustainability and environmental, health and safety performance…
5
Role of Big Data, ML and AI in improving EHS Organizations across the globe are putting serious effort into improving their sustainability and environmental, health and safety performance along with precise data management. Huge investments in technologies like AI, ML and Big Data are being made by companies to achieve this sustainability. Moreover, these companies are also looking for increased visibility into their suppliers and additional monitoring. However, this information revolution has resulted in an explosion of data, some of which might be useful and some of which might not be. Amidst this chaos, how can a company determine how to use this data for better business results? The best guess is that these companies have already figured out a solution for it. As said earlier, organizations are making some serious investments in technologies like artificial intelligence, machine learning and Big Data. But why are they focusing so much on these trending and highly anticipated technologies? Let's have a look at the reasons that are compelling EHS companies to incorporate these technologies into their services. Artificial Intelligence Long gone are the days when EHS was just a database. The two major factors fueling the adoption of AI for EHS compliance are: first, the vast increase in data that we talked about earlier needs sorting and understanding; second, the major paradigm shift towards multi-tenant SaaS solutions enables the collection of data from multiple digital sources for various customers in real time. With the backing and advocacy of companies like IBM, Google, and Salesforce, who are the major investors in AI technology, AI has entered the mainstream for every business. Let's have a look at where AI is going to play its part in the EHS space. The major issue for EHS enterprise software companies is to find a solution that can enhance compliance as well as reduce manual labor and cost. This is where AI will play a major role. 
Traditionally, organizations aggregated their data in systems of record and tried to interpret it without human intervention, but were not getting the desired results. To keep up with the ever-changing developments in environmental regulation, companies have been throwing people at the problem, but that is not sustainable for sure. AI systems have matured enough to read through the technicalities of regulations, couple them with a company's data monitoring systems, and generate suggestions for actions based on relevant regulations and data. Big Data An enormous amount of data from sensors, treatment system control and monitoring, customer legacy databases and other monitoring devices is poured into companies' EHS departments, with only a handful of tools to analyze it on arrival. Sometimes the data is nothing but old information that gets digitized, while information like streaming data from monitoring sensors is missed because companies lack a single system of record to accommodate it. Well, this is about to change for good. Big Data has the potential to provide companies with a better understanding of their customers, employees, major trends of the industry, and most importantly their operations. Data analytics promises to unlock the doors to previously unknown opportunities: driving operational efficiencies, increasing revenue, responding to customer demands, and keeping shareholder returns growing. Using mobile apps to enhance EHS and sustainability management and reporting improves remarkably with the popularity of Big Data in the industry. It also allows the company to add and complete tasks and actions while on the move, and the information is directly synced with the database when it is online. Big Data ensures the quality of data by improving and automating the data collection process. 
With the integration of historical data and by drawing on reliable, centralized information, companies can now produce more consistent reports which can help them drive better environmental performance. With the help of Big Data, EHS and sustainability management software allows an organization to streamline reporting by making it easier to track and store all information in an integrated system. Machine Learning Machine learning has penetrated deep into every vertical of the industry. One might not have recognized it, but from predictions to managing the supply chain, ML is the driving force for innovation and is powering growth around the globe. Machine learning is all set to change the tide for workers' operations in the EHS industry. In verticals like industrial hygiene, for example, the technology can provide an effective way of crunching large volumes of data to develop a predictive modelling solution that can remarkably improve efficiency in the industry. In time, it's fair to say that predictive modelling will be used across all EHS practices, allowing users to draw on software powered by machine learning to fundamentally improve decision making and data analysis. Read more to know how big data and machine learning are transforming five key areas of EHS. Conclusion Thanks to technologies like AI, ML, and Big Data, EHS's future looks bright. While there are a lot of exciting technologies coming over the horizon, none can match the revolutionary potential of machine learning, artificial intelligence, and Big Data. By combining human ingenuity with robotic technical proficiency, these solutions are set to transform safety and quality control in the workplace. Originally published at www.softwebsolutions.com on March 20, 2018.
Role of Big Data, ML and AI in improving EHS
0
role-of-big-data-ml-and-ai-in-improving-ehs-1621550134a1
2018-03-22
2018-03-22 10:26:08
https://medium.com/s/story/role-of-big-data-ml-and-ai-in-improving-ehs-1621550134a1
false
891
Softweb Solutions Inc. is a tech consulting and development company with offices in Chicago and Dallas. Softweb’s core offerings - #InternetofThings, #Chatbots, #AI, #DataScience, #Microsoft Services, #VR, #AR
null
SoftwebSolutionsInc
null
Softweb Solutions Inc.
info@softwebsolutions.com
softweb-solutions-inc
DATA SCIENCE,CHATBOTS,ARTIFICIAL INTELLIGENCE,INTERNET OF THINGS,VIRTUAL REALITY
softwebchicago
Machine Learning
machine-learning
Machine Learning
51,320
Nikhil Acharya
null
b2c8c46bac4c
nikhilacharya_99686
23
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-12
2018-03-12 08:33:40
2018-03-12
2018-03-12 08:57:17
6
false
en
2018-03-12
2018-03-12 10:21:06
0
16251c723f97
3.768868
3
0
0
John: Hey Ben, do you know what is normal distribution.
5
Demystifying Normal Distribution and Central Limit Theorem with examples in python John: Hey Ben, do you know what a normal distribution is? Ben: Yes, I have some idea about it. Theoretically speaking, it is data with a bell-shaped density curve, which we generally represent with a mean and a standard deviation. The mean is the average of all the values, and the standard deviation describes how widely the data is spread. Normal Distribution John: Yes, I have read all this on the wiki, that 68% of the data is within 1 standard deviation, 95% of the data is within 2 standard deviations and 99.7% of the data is within 3 standard deviations. But my question is why do we need this, and what kind of data follows a normal distribution? Ben: I am not a statistician, but let me explain it in layman's language. Now you will have to answer a few rapid-fire questions. John: Ok. Ben: Do you have an accurate weighing machine at home? John: I do have a weighing machine but it's not very accurate. Most of the time it is close to your actual weight. Ben: If I toss a coin 100 times, how many times do you think I would get "Heads"? John: Since it's a 50% chance, I would say almost 50 times. Ben: What is the age of students at your university? John: It varies, but most of the students are between 23–28, though we have a few 18-year-olds and a few 50-year-olds as well. But what do these have to do with a Normal Distribution? Ben: Ok, let me explain it now. Whenever any data has a general tendency to be around some value, it follows a normal distribution. That means if someone asks about the data, you are able to describe its behavior by saying generally, most of the time, or on average, etc., as you did in the rapid-fire questions. For example: performance of a class in an exam, performance of employees of any organization, or the age, height and weight of a population. The mean value of the distribution is the tendency of the data, as most of the data is around that value, and the standard deviation is how wide the spread is. 
That means if you weigh yourself on your weighing machine 100 times and its reading varies within ±5% of your actual weight, then it has a low standard deviation, and if the values vary within 15% of your weight, then it has a high standard deviation. It's not only about the min and max values but about how far all the data points are from the mean. As can be seen in the image below, the 4 bell curves all have different standard deviations. Image taken from wikipedia But if you take data on the height of people at a primary school, you may end up with 2 bell curves within the data: one bell curve for teachers' heights and one for students' heights. Let me show you a normal distribution with our coin example. I am going to write some Python code to generate a random choice out of (0, 1) 200 times (0 is Tails and 1 is Heads), and count how many times we got heads. I will repeat this process 1,000 times and plot a histogram of the data. And as shown below, this data is normally distributed and follows the 68–95–99.7 rule. Example of Normal Distribution John: OK, I got it. But why does the Central Limit Theorem say that any data would follow a Normal Distribution? Ben: It does not say that any data would follow a Normal Distribution. It says that "In most situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed." This means you can take any data, which may not be normally distributed, take numerous samples of the data and calculate the mean of each sample. As per the central limit theorem, the mean values of all the samples should have a normal distribution. Let me again show you with an example: I will randomly generate 100,000 numbers between 0 and 90, to simulate the ages of 100,000 people. This data is completely random and does not follow a normal distribution. 
As you can see in the image below: Then I take random samples (of size 80–140) of the data and take the mean of each sample. I then plot a histogram of the means of all the samples, and it should be normally distributed. And as you can see, it does have a bell-curve distribution and follows a normal distribution. Example of Central Limit Theorem
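The two experiments in the dialogue (coin tosses, then sample means of uniform "ages") can be reproduced in a few lines. The sample counts below follow the article where it states them; the fixed sample size of 120 and the number of resamples are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Experiment 1: toss a fair coin 200 times, count heads, repeat 1,000
# times. The counts of heads cluster around 100 in a bell shape.
heads = rng.integers(0, 2, size=(1000, 200)).sum(axis=1)

# Experiment 2 (central limit theorem): 100,000 uniform "ages" between
# 0 and 90 are nowhere near normal, but the means of repeated samples
# drawn from them are approximately normal, centered near 45.
ages = rng.integers(0, 91, size=100_000)
sample_means = np.array(
    [rng.choice(ages, size=120).mean() for _ in range(2000)]
)

print(round(heads.mean()), round(sample_means.mean()))
```

Plotting `heads` and `sample_means` with `matplotlib.pyplot.hist` reproduces the two bell curves shown in the article.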
Demystifying Normal Distribution and Central Limit Theorem with examples in python
54
demystifying-normal-distribution-and-central-limit-theorem-with-example-in-python-16251c723f97
2018-03-13
2018-03-13 02:15:26
https://medium.com/s/story/demystifying-normal-distribution-and-central-limit-theorem-with-example-in-python-16251c723f97
false
747
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Devendra Singh
null
f475c6675cb8
devendra.k.singh
2
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-06
2018-06-06 14:27:11
2018-06-06
2018-06-06 14:30:18
5
false
en
2018-06-06
2018-06-06 14:30:18
6
1625b8a1147b
1.520126
2
0
0
“…A breakthrough in machine learning would be worth ten Microsofts..” — Bill Gates
5
How does GDPR impact Machine Learning? Keystrokes, Pascal VOC and much more. "…A breakthrough in machine learning would be worth ten Microsofts.." — Bill Gates Does the GDPR prohibit machine learning? In practice, ML will not be prohibited in the EU after the GDPR goes into effect. It will, however, involve a significant compliance burden. Read more Intel AI Lab open-sources library for deep learning-driven NLP It gives chatbots and virtual assistants the smarts necessary to function, such as NER, intent extraction, and semantic parsing to identify the action a person wants to take from their words. Explore Code What's cooking? Output as Pascal VOC format Now you can easily convert the Dataturks Image Bounding Box JSON output to Pascal VOC format. See how Super Quick Labeling using Keyboard Shortcuts You can now configure keyboard shortcuts for each label and make your dataset building super fast. Live demo Featured open datasets Game of faces An image classification dataset with thousands of face images of around 70 Game of Thrones characters, manually labeled. Explore Now All Open Datasets
How does GDPR impact Machine Learning? Keystrokes, Pascal VOC and much more.
2
how-does-gdpr-impact-machine-learning-keystrokes-pascal-voc-and-much-more-1625b8a1147b
2018-06-07
2018-06-07 04:57:59
https://medium.com/s/story/how-does-gdpr-impact-machine-learning-keystrokes-pascal-voc-and-much-more-1625b8a1147b
false
182
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
DataTurks: Data Annotations Made Super Easy
Data Annotation Platform. Image Bounding, Document Annotation, NLP and Text Annotations. #HumanInTheLoop #AI, #TrainingData for #MachineLearning.
b8f26a8373c0
dataturks
270
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-27
2017-10-27 20:50:32
2017-10-27
2017-10-27 20:58:00
1
false
en
2017-10-27
2017-10-27 20:58:00
3
1625d01cf250
2.154717
0
0
0
One of the biggest challenges for Data Science is to work with high quality data. Typically, organisations, companies, and researchers will…
5
Why Data Integration? One of the biggest challenges for Data Science is to work with high-quality data. Typically, organisations, companies, and researchers will use data they collect in-house. You have a research question (hypothesis), you figure out what sort of data is needed to answer your question, you then raise money (or ask your boss for money!) to collect the data, you collect the data, then starts the analysis… This is what we call Hypothesis Driven research, and we do it all the time because it works… most of the time. The alternative is Data Driven research. You throw data together, filter it, format it, transform it, normalize it, wrangle it, and you might find some patterns emerging… and then some questions might begin to come up! "Wait a minute, are our performance metrics having an impact on team morale? Are we imposing too much reporting and not giving them enough leeway to innovate and become more creative? Is this why performance has gone down as of late?"… that sort of insight. The nice thing with a Hypothesis Driven approach is that you get to collect the data the right way up-front (at least you should try to!), but this approach can cost more money, require a lot of up-front planning, and consume a lot of time collecting the data. Furthermore, sometimes you find out you've been asking the wrong question… and all that work was for nothing… or almost. The Data Driven approach is great if you're not sure what patterns you are looking for but have a hunch that there is something in the data. This approach can also be cheaper if you can get your hands on data collected by others through sharing. The trouble is that the multiple sources of data to be combined may not be compatible up-front; they may lack standards, miss fields, or lack enough metadata to understand their provenance or even what the contained fields are all about! In both scenarios data integration is needed. 
You want to combine data and draw your comparisons, run your t-tests, regressions, factor analysis, or build predictive models. If your data is hard to integrate, you need to know what the likelihood of success for integrating any two (or more) datasets will be. An example from scientific data sharing Currently only 13% of published science research makes its data available for others. While a number of funding agencies and journals are providing the resources to store and share data at minimal cost or for free, one of the biggest challenges remains the lack of standards, privacy best practices, and effective data management (as recently mentioned in The Scientist). The Solution This is where Datadex aims to fill a gap: provide a next-generation data integration and sharing platform powered by AI and an intuitive user interface, along with strong governance and data sharing, to reveal insights from Smart Data Integration. We invite you to support our mission and forward this message to your Data Science friends, colleagues, and contacts. We'd love to talk to them about their approach to data analysis!
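The kind of "combine incompatible sources, then check how well they actually line up" step described above can be sketched with a pandas outer join. Everything here (the two frames, their field names, the key normalization) is hypothetical and only illustrates the pattern, not any particular platform's implementation:

```python
import pandas as pd

# Two hypothetical sources to integrate: an in-house morale survey and a
# shared performance dataset, both keyed on team name but formatted
# differently.
survey = pd.DataFrame({"team": ["a", "b", "c"], "morale": [3.1, 4.2, 2.5]})
perf = pd.DataFrame({"Team": ["A", "B", "D"], "kpi": [78, 91, 66]})

# Integration step: reconcile the incompatible key fields, then join.
survey["team"] = survey["team"].str.lower()
perf = perf.rename(columns={"Team": "team"})
perf["team"] = perf["team"].str.lower()

merged = survey.merge(perf, on="team", how="outer", indicator=True)
print(merged[["team", "_merge"]])
```

The `_merge` indicator column flags which rows matched ('both') versus which exist in only one source, giving a quick, quantitative read on how integrable two datasets really are.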
Why Data Integration?
0
why-data-integration-1625d01cf250
2018-06-05
2018-06-05 16:57:12
https://medium.com/s/story/why-data-integration-1625d01cf250
false
518
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
IAMOPEN
Our DATADEX platform is accelerating data discovery by making private and public data findable and linkable. Visit http://datadex.net
848c135733a6
iamopen
7
19
20,181,104
null
null
null
null
null
null
0
null
0
39c087b2276c
2018-02-23
2018-02-23 14:55:48
2018-02-23
2018-02-23 15:34:34
1
false
en
2018-02-23
2018-02-23 15:34:34
2
1626447e8ac4
1.433962
1
0
0
Facterbot launches its newsletter
5
The Double Check newsletter Many things have happened since I decided to start Facterbot last November. What started as an exciting side project is now a full-time commitment that many of you have found useful. Facterbot now has over 100 subscribers, and over 40% of them interact every day with our Facebook Messenger stories. This is outstanding, as no other platform could provide such an open rate. One medium that does reach a similar average is newsletters. They used to be a simple commercial tool that most of the time got spammed, but they have now become a very valuable resource for news organizations. At Facterbot we are always thinking of new ways of interacting with you, and we have decided that we are going to launch our very own newsletter. Meet Double Check Many of you have shown interest in how we fact-check our stories, and that is exactly what we are going to bring to you in every issue of the newsletter. You will be able to find out what goes on behind Facterbot and discover the whole process we follow before sending you our Facebook Messenger updates three times a week. But collaboration is essential in journalism, so we are not going to focus Double Check exclusively on us. Instead, we are going to interview some of the best fact-checkers in the world and find out how they debunked a particular story. We are especially interested in providing you with an insight into the whole fact-checking process, and sharing some tips and tools that will help you verify your own stories. It sounds cool, right? If you want to be part of Double Check, you can do so for free by clicking on the subscribe button down below: SUBSCRIBE We really hope you enjoy it and that you spread the word if you do. Until then, remember you can keep up with our work by subscribing to our Facebook Messenger chatbot.
The Double Check newsletter
3
the-double-check-newsletter-1626447e8ac4
2018-06-08
2018-06-08 14:00:21
https://medium.com/s/story/the-double-check-newsletter-1626447e8ac4
false
327
I’m a chatbot that fights against misinformation. You can find me on Facebook Messenger and listen to my weekly podcast 🤖
null
facterbot
null
Facterbot
facterbot@gmail.com
facterbot
MISINFORMATION,FAKE NEWS,CHATBOTS,JOURNALISM,FACEBOOK MESSENGER
facterbot
Bots
bots
Bots
14,158
Andrés Jiménez
I love telling stories and sharing them in innovative ways. Previously at Agencia EFE's Image Desk. Spain delegate for Future News Worldwide 2017.
82b3cb7b86d5
BrydenJimenez
34
46
20,181,104
null
null
null
null
null
null
0
# Finding the minimum of the function y = x^2 - 4x + 2.
# Setting dy/dx = 2*x - 4 = 0 gives the true minimum at x = 2;
# this derivative is our cost function.

import numpy as np
import matplotlib.pyplot as plt

def func_y(x):
    y = x**2 - 4*x + 2
    return y

def gradient_descent(previous_x, learning_rate, epoch):
    # To fill with values
    x_gd = []
    y_gd = []
    x_gd.append(previous_x)
    y_gd.append(func_y(previous_x))
    # Begin the loop to update x and y with our cost function
    for i in range(epoch):
        current_x = previous_x - learning_rate * (2*previous_x - 4)
        x_gd.append(current_x)
        y_gd.append(func_y(current_x))
        # Update previous_x
        previous_x = current_x
    return x_gd, y_gd

# Initialize x0 and learning rate
x0 = 4                # Our first 'guess' at what theta could be
learning_rate = 0.15  # Alpha
epoch = 10            # Number of tries

# Two-parameter case. f(x, y) is reconstructed here from its partial
# derivatives dx and dy below: f(x, y) = 4*x**2 - 2*x*y + 2*y**2.
def f(x, y):
    return 4*x**2 - 2*x*y + 2*y**2

def dx(x, y):
    return 8*x - 2*y

def dy(x, y):
    return 4*y - 2*x

def gradient_descent_2():
    # Create gradient arrays
    grad_x = []
    grad_y = []
    grad_z = []
    # Our initial guess
    theta_0 = 25
    theta_1 = 35
    alpha = .05
    epoch = 10000
    grad_x.append(theta_0)
    grad_y.append(theta_1)
    grad_z.append(f(theta_0, theta_1))
    # Run the gradient
    for i in range(epoch):
        current_theta_0 = theta_0 - alpha * dx(theta_0, theta_1)
        current_theta_1 = theta_1 - alpha * dy(theta_0, theta_1)
        grad_x.append(current_theta_0)
        grad_y.append(current_theta_1)
        grad_z.append(f(current_theta_0, current_theta_1))
        # Update
        theta_0 = current_theta_0
        theta_1 = current_theta_1
    # Return last values
    return theta_0, theta_1

print(gradient_descent_2())
# Results for theta_0 and theta_1
# (5e-324, 1e-323)

# Set the parameters
n = 100
x = np.arange(n)
y0 = [20] * n
theta_0 = -3   # Our true values
theta_1 = .8
noise = np.random.normal(size=n) + 5
y = theta_0 + theta_1 * x + noise

def plot_linear_data(x, y):
    plt.figure(figsize=(20, 10))
    plt.title("Random Series")
    plt.xlabel("X")
    plt.ylabel("Y")
    plt.scatter(x, y)
    plt.legend(['Hypothesis', 'Data'], loc='best')

plot_linear_data(x, y)

grad_theta_0 = []
grad_theta_1 = []

def gradient_descent_reg(x, y):
    epoch = 500000
    alpha = 0.001
    theta_0 = 0
    theta_1 = 0
    for i in range(0, epoch):
        y_hat = theta_0 + theta_1 * x
        # Get cost functions
        cost_0 = np.sum(y_hat - y) / n
        cost_1 = np.sum((y_hat - y) * x) / n
        # Get new theta values
        temp0 = theta_0 - alpha * cost_0
        temp1 = theta_1 - alpha * cost_1
        # Update theta values
        theta_0 = temp0
        theta_1 = temp1
        grad_theta_0.append(theta_0)
        grad_theta_1.append(theta_1)
    return theta_0, theta_1

gradient_descent_reg(x, y)
# theta_0 = -2.975189443909802, and theta_1 = 0.7982329348469989
28
null
2018-05-22
2018-05-22 13:50:45
2018-05-27
2018-05-27 19:53:31
11
false
en
2018-05-27
2018-05-27 19:56:33
4
16273460d634
7.318868
17
0
1
First off, you might have seen cost functions referred to by different names: loss function, or error function, or scoring function.
5
What is a Cost Function? — Gradient Descent — Examples with Python First off, you might have seen cost functions referred to by different names: loss function, or error function, or scoring function. Any of those names will do, and in this article, we'll stick to cost function. It is a function we can use to evaluate how well our algorithm maps the target estimate, or how well our algorithm performs on optimization problems. Consider linear regression, where we choose mean squared error (MSE) as our cost function. Our goal is to find a way to minimize the MSE. Or consider a maximum log-likelihood function. Our goal there is to maximize the function. Our final goal, however, is to use a cost function so we can learn something from our data. Cost Functions and Gradient Descent Below, we're going to be implementing gradient descent to create a learning process with feedback. Each time — each step really — we receive some new information, and we make some updates to our estimated parameter which move towards an optimal combination of parameters. We get these estimates using our cost function from before. Hence, our algorithm is learning through each step because it now knows something it did not in the previous step. Let's take the equation below. We want to find the minimum of this function, which is quite easy to do. Simply take the first-order derivative with respect to x, set it to zero, and compute the value. In fact, our cost function here is simply our first-order equation. Nothing too special, but we're going to be building off this for the rest of the article. Our First Function How would we find the solution using gradient descent? Let's break this down mathematically, as we're going to be estimating a parameter θ which we will substitute for x. θ is the value we're going to update after every step, and it tells us the current value of x as it converges to the minimum of our cost function. 
However, we don’t always know were to start θ on our cost function so we take a guess. It starts at this guessed point somewhere along the cost function, and descends towards the actual value. That is the descent, in gradient descent. We are also going to introduce a variable called α which is out learning rate. The learning rate tells our cost function how fast to move toward its goal of minimization, and control steps size taken by each iteration. At every step of the descent, θi is updated based on the values provided in the cost function. If α is too big, the model may miss the minimum. If it too small, could never get to the minimum. This is important, as tweaking α is just part of applying gradient descent to your problems. It might now work with the first α you chose, and that’s ok. Just start tweaking it, and when you see the values starting to converge you know you’re on the right path. Our Equations Let’s see how we would code this. When we graph our results we can see our initial guess of 4 was not correct. However as we updated our results, our estimate of θ became closer and closer to the correct value of 2. We only iterated 10 times and fell just a little short of the correct value. With more iterations, we would have come much closer. Simple Gradient Descent The use of gradient descent here seems trivial, as our function is well behaved. However with more complex functions — such as the one shown below — finding the minimum would be difficult which is why we use this method. We’re not going to go over the more advanced applications of gradient descent in this article, but you should be aware of how to start thinking about this complex problems. Complex Function Estimating Two Parameters What if we wanted to do the same process as above except we wanted to find two parameters instead of one. Let’s take the function shown below as an example and see if we can find the minimum using gradient descent. 
We would go through the same process as before by creating a cost function for each parameter we're estimating — here they are x and y — setting our value for α, and running our gradient descent algorithm. However, this time θ0 and θ1 will be updated simultaneously as the gradient descends, rather than a single value of θ. Our next set of equations We know the true minimum of the function is (0, 0), so our results will be easy to verify. Our results are essentially (0, 0), so it looks like our algorithm worked. Perfect. And, the same as last time, let's plot our results to see how our gradient descent performed. And, it's right on the mark. We can see our initial guess of (25, 35) was nowhere close, but as we went through each step we became closer and closer to the correct value. Gradient Descends to (0, 0) Gradient Descent and Linear Regression Now let's put everything we've learned together, and show how we estimate the parameters in a linear regression. Just as we built on the above, we'll be estimating two parameters; however, we'll be using a different cost function. Our simple linear equation First we need to plot some random data, with a little noise thrown in for randomness. So we can follow the example, the true values are theta_0 = -3 and theta_1 = .8. Random Data We're Going to Fit Here our cost function is MSE — remember that from the start? — and by minimizing this we can be confident our parameters are correct. We do some simple substitution of our linear equation into y_hat in the cost function, as those are the parameters we need to estimate. Just as before, our gradient will update simultaneously for both parameters as it gets closer to the true values. Our two cost functions for theta_0 and theta_1 We know from above we have to set a guess to start our descent, so we're going to set both values to zero and run our descent. Looks like we came really close to the actual values of theta_0 and theta_1. Just what we expected. 
And finally, let’s plot different values from our grad_theta_0 and grad_theta_1 arrays to see how well they estimate the true parameters theta_0 and theta_1, from the initial guess, through various steps, to the final result. We can see that as we update our values for theta_0 and theta_1, our linear regression function gets closer (descends) to the correct values. As my friend would say, “This is so dope!” Linear Regression and Gradient Descent in Action And that’s all for now. Hopefully you now have an understanding of what gradient descent actually is, how cost functions work, and how they can be applied to gradient descent. What I did not show, for the sake of brevity, was all the alpha tuning needed to get the gradients to converge during descent. That will be a large part of the time and work when you use this approach, so keep that in mind. As always, I hope you learned something new. The code for this article can be found here on GitHub. Cheers, Additional Reading https://medium.com/@lachlanmiller_52885/machine-learning-week-1-cost-function-gradient-descent-and-univariate-linear-regression-8f5fe69815fd https://towardsdatascience.com/machine-learning-fundamentals-via-linear-regression-41a5d11f5220 https://scipython.com/blog/visualizing-the-gradient-descent-method/
What is a Cost Function? — Gradient Descent — Examples with Python
62
what-is-a-cost-function-gradient-descent-examples-with-python-16273460d634
2018-06-16
2018-06-16 07:58:57
https://medium.com/s/story/what-is-a-cost-function-gradient-descent-examples-with-python-16273460d634
false
1,595
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Robert R.F. DeFilippi
Sometimes Chef ◦ Sometimes Data Scientist ◦ Sometimes Developer
8e46cdd91cd4
rrfd
211
131
20,181,104
null
null
null
null
null
null
0
null
0
1f35b6f451e8
2018-03-02
2018-03-02 09:36:44
2018-03-02
2018-03-02 09:38:35
2
false
en
2018-03-07
2018-03-07 14:48:12
4
16277ec8608e
0.587107
0
0
0
null
5
#Data — Artificial Intelligence 2.5 Machine Learning | Coursera Machine Learning from Stanford University. Machine learning is the science of getting computers to act without being… fr.coursera.org Language & Universality Language & Universality | Piktochart Visual Editor create.piktochart.com Cours de Python Cours de Python pour les biologistes python.sdv.univ-paris-diderot.fr OneDrive NotionsavancéesPython.ipynb Notebook pour tester map, filter, fichiers… onedrive.live.com
#Data — Artificial Intelligence 2.5
0
machine-learning-2-5-16277ec8608e
2018-03-07
2018-03-07 14:48:13
https://medium.com/s/story/machine-learning-2-5-16277ec8608e
false
54
We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE.
null
ethercourt
null
Ethercourt Machine Learning
adoucoure@dr.com
ethercourt
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
ethercourt
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
WELTARE Strategies
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
9fad63202573
WELTAREStrategies
196
209
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-17
2017-12-17 20:41:29
2017-12-17
2017-12-17 21:31:49
4
false
en
2017-12-18
2017-12-18 05:24:33
4
1627dbbbcfe1
3.450943
6
1
0
In my last post on Data Science — the new era, I have described how traditional data science is undergoing change in the Enterprise. In…
1
Data Science — the need for productivity tools In my last post on Data Science — the new era, I described how traditional data science is undergoing change in the enterprise. In this post, I will describe how enterprise data science demand is created and how difficult it is to fill data science positions. I have used Forbes articles and IBM’s Quant Crunch report for my analysis. The Demand Data scientists are the next generation of expensive workforce, and their positions take the longest to fill. Data Science and Analytics (DSA) is the market where data scientists play a huge role, along with data engineers and data developers. According to McKinsey, DSA job listings are projected to reach around 2.72M in the US. As the demand for DSA jobs increases, it puts a lot of pressure on the supply of DSA talent in return. We have interviewed several heads of data science departments in multiple enterprises, and they share the common pain: “Gosh! I wish hiring data science talent were easier.” Today an average DSA job listing can offer around $100K+, benefits aside. For every experienced professional in this field, there is huge competition among enterprises. 81% of all DSA job postings request workers with at least three years of prior work experience. The strong demand for experienced candidates, combined with the strong growth of many DSA roles, creates a chicken-and-egg problem within the DSA job market: there aren’t many opportunities for workers to gain the DSA-related experience that employers are requesting. Given the above problems, there is a need for data science productivity tools. Today’s Data Science Productivity Most data scientists today spend their time at different stages, from data discovery to producing ML models and finally optimizing them. However, if you observe carefully, this is the first stage that involves data scientists depending on engineering and devops teams.
So, the following are some of the challenges for today’s DSA org: Lack of collaboration: There is no ease of collaboration among cross-functional teams with different skillsets. For example, a data scientist who is best at statistics may not be good at scaling, while a data engineer who is best at scaling and deployment may not be good at statistics. Siloed operation: The teams involved in the DSA life cycle, cross-functional roles like data scientist, data engineer, and data devops, operate in silos most of the time. Duplicated work: Work often gets duplicated among team members, knowingly or unknowingly, as the team’s priority is execution rather than optimization. Standalone scripts: Scripts get written across cross-functional teams inside DSA, and often one script cannot be reused for a different ML pipeline/model. No standardization: There is no standardization of frameworks that teams can rely on for strict rules; instead it is play-as-you-go. No end-to-end solution: Vendors often focus on a small problem inside data science but do not provide an end-to-end solution. Ultimately, taking models to production is a cross-team collaborative effort that needs end-to-end integration. Headaches with scaling and deployment: In one in three conversations, data teams are worried sick about how their models will scale and continue to perform well at scale. Data wrangling fatigue: Despite being PhDs minted from premier institutions, data scientists today spend a disproportionate amount of time in plumbing rather than in core algorithms. Feature engineering nightmares: The current lack of feature reusability via a feature catalog renders constant feature refinement a chore. A/B testing guesswork: Being able to experiment consistently across unbiased, representative variables is crucial for reproducible results between different model algorithm choices.
Given these problems, there is a need for end-to-end machine learning life cycle deployment platforms for production. Datatron’s AI Platform provides exactly this. For more information please contact info[AT]datatron.io. We make data science teams at least 30% more productive. Benefits of Machine Learning Life Cycle Data Platforms Increase data science teams’ productivity by at least 30% Faster iterations and experiments yield higher quality models Use language-agnostic operators Leverage streaming data with different arrival latencies Achieve dynamic models through online learning Faster onboarding of new team members Automatically promote/demote models based on KPIs Ability to automatically test, manage and remove models
Data Science — the need for productivity tools
19
data-science-the-need-for-productivity-tools-1627dbbbcfe1
2018-06-20
2018-06-20 05:14:52
https://medium.com/s/story/data-science-the-need-for-productivity-tools-1627dbbbcfe1
false
729
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Harish Doddi
null
fd69c49bef2
harish_34023
10
2
20,181,104
null
null
null
null
null
null
0
null
0
63cdfee3b065
2018-09-11
2018-09-11 11:50:08
2018-09-11
2018-09-11 12:14:28
1
false
en
2018-09-11
2018-09-11 12:14:28
0
16299d30ddea
1.988679
0
1
0
Sustainable Resource Management is essential to the success of small and large business alike. Where businesses around the globe are…
5
Sustainable Resource Management Sustainable resource management is essential to the success of small and large businesses alike. Where businesses around the globe are increasingly looking to build leaner operations utilising fewer resources, the food industry continues to lead the field in wastefulness. Roughly one third of the food produced in the world for human consumption every year, approximately 1.3 billion tonnes, gets lost or wasted. These food losses and waste amount to roughly US$ 680 billion in industrialised countries and US$ 310 billion in developing countries. The per capita waste by consumers is between 95–115 kg a year in Europe and North America, while consumers in sub-Saharan Africa and south and south-eastern Asia each throw away only 6–11 kg a year. Per capita food losses and waste, at consumption and pre-consumption stages, in different regions Source: www.fao.org Where in the developing world 40% of losses occur at the post-harvest and processing levels, in industrialised countries more than 40% of losses happen at the retail and consumer levels. Especially at the retail level, large quantities of food are wasted due to quality standards that over-emphasize appearance over quality. In developing countries food waste and losses occur mainly at the early stages of the food value chain and can be traced back to financial, managerial and technical constraints in harvesting techniques as well as storage and cooling facilities. Strengthening the supply chain through the direct support of farmers and investments in infrastructure, transportation, and an expansion of the food and packaging industry could help to reduce the amount of food loss and waste. In medium- and high-income countries food is wasted and lost mainly at later stages in the supply chain. Differing from the situation in developing countries, the behaviour of consumers plays a huge part in industrialised countries.
The study identified a lack of coordination between actors in the supply chain as a contributing factor. Farmer-buyer agreements can help increase the level of coordination. Additionally, raising awareness among industries, retailers and consumers, as well as finding beneficial uses for food that is presently thrown away, are useful measures to decrease the amount of losses and waste. This enormous wastefulness amounts to a major squandering of resources, including water, land, energy, labour and capital, and needlessly produces greenhouse gas emissions, contributing to global warming and climate change. If just one-fourth of the food currently lost or wasted globally could be saved, it would be enough to feed the 870 million hungry people in the world. We want to help lower this resource wastefulness by developing sustainable resource management applications using artificial intelligence and big data. Follow us on Medium and Twitter to stay up to date. Source: SAVE FOOD: Global Initiative on Food Loss and Waste Reduction — Food and Agricultural Organization of the United Nations
Sustainable Resource Management
0
sustainable-resource-management-16299d30ddea
2018-09-11
2018-09-11 12:14:29
https://medium.com/s/story/sustainable-resource-management-16299d30ddea
false
474
Sustainable Resource Management through Artificial Intelligence
null
null
null
MOSADATA
mosadata@pm.me
mosadata
DATA SCIENCE,ARTIFICIAL INTELLIGENCE,RESOURCE MANAGEMENT,SUSTAINABILITY,ANALYTICS
mosa_data
Environment
environment
Environment
49,153
Mosadata
null
14121814166f
mosadata
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-09
2017-09-09 15:11:04
2018-06-16
2018-06-16 09:51:49
1
false
id
2018-09-18
2018-09-18 07:33:29
15
162aa1fd96af
4.988679
0
0
0
or: How Will We End Up Happy If These Robots Keep Copulating and Reproducing?
5
Titimangsa Manusia: Part I, or: How Will We End Up Happy If These Robots Keep Copulating and Reproducing? A Complaint Begins to Pour Out So here’s the thing: amid TikTok, #2019gantipresiden, and Via Vallen, I think there are other important matters that deserve the attention of the public and of netizens everywhere, and that is AI (Artificial Intelligence). Yes, all of you reading this unimportant piece of mine should know about AI, or what we can also call Kecerdasan Buatan. Honestly, I’m not entirely fond of that term: if we shorten Kecerdasan Buatan to KB, people will sooner think of the Keluarga Berencana (family planning) program than of artificial intelligence itself. In my humble opinion it would be better to use the term IB (Inteligensi Buatan) so that people don’t confuse it with KB, but then again, who am I to dictate to netizens; I’m not even an influencer. A few months ago the tech world was abuzz over Google Duplex, something like Siri but cooler. The thing is, Duplex can genuinely imitate the articulation and intonation of ordinary human speech. So when you are talking to Duplex, you won’t know you are talking to a machine! Yes, this raises some ethical and moral issues, but to keep this piece moving and to indulge my own whims, let us focus on how magnificent and sublime this artificial intelligence is. In the beginning was the word, which then became algorithm and data. Whether you agree or not, humans will make technology (AI, machine learning, deep learning) a basic necessity.
You needn’t look far: the phone you are using to read this already comes with AI and its henchmen, and we both know not a day goes by without a smartphone in our grip. Every activity you perform is stored and remembered by your device; everything you like (whatever your preferences are) is served up by your device automatically: music, games, and videos, all recommended for you. The algorithm will slip in at the edges of your bed and slowly become your personal stalker: whenever and wherever you are, it will be there. It will try to understand you, and slowly... it will replace you. Change is already dawning on the southern edge of England. In a town called Andover, between meadows and red-brick walls, stands a “warehouse” where what we call automation reigns supreme. It uses something called a grid system; the robots are content with their box-shaped bodies; they work continuously without tiring, and when they begin to run out of power they can recharge immediately. Their tasks are simple: sorting, lifting, and moving. These robots work 24 hours a day. The box-shaped robots pick everyday grocery orders exactly as we place them, and they can pack as many as 65 thousand orders. The incoming orders span all sorts of categories, from snacks to toiletries to drinks, and so on. In simple terms, it is like buying goods at a supermarket without ever leaving home. No, you are not watching a science-fiction film; you are watching real life. Automation is under way and there is nothing we can do. On the Future Within 20 years at most, we will all watch these robots take over our jobs.
The jobs these robots will take over are the repetitive, predictable ones usually performed as routine. The job most likely to be replaced by robots is telemarketing; then there are loan officers; then cashiers; then personal assistants; taxi drivers; and fast-food cooks. You may now be thinking that you are safe from automation’s deadly touch because you don’t work in any of the fields mentioned, but relax, I am about to make you regret that thought. As many as 800 million workers will be automated away, and no matter what your job is, these robot bastards will come for you. Okay, now let’s think for a moment: which job is least likely to be replaced by artificial intelligence? Hmm... How about lawyers? BZZZT! AUTOMATED! I am serious, I am not joking. Instead of building a transportation start-up like Grab or Go-Jek, which are popular these days, a start-up called Atrium, headquartered in Silicon Valley, plans to build a start-up that can become a law firm. The company, owned by Justin Kan (who also created Twitch), plans to develop an artificial intelligence that can take over the duties of the advocates at a law firm. Perhaps Atrium just wants to be the indie kid, different and unique on the start-up scene. After the example above you may still stubbornly believe that there are jobs that cannot be automated. I like your idealism; now let us dismantle it slowly. Let’s think simply: when we hear the word “robot”, what do we picture? A hunk of machinery that merely follows the commands of its built-in program? A rigid object that only does what we want?
Perhaps your heart is beginning to soften and you are starting to believe that artificial intelligence really will replace humans at work. “But still, they are just a pile of data and algorithms! Those robots have no feelings!” you think. Oh dear, you could hardly be more wrong than this. You then conclude that there are three things robots will never replace: emotion, creativity, and intuition. And your conclusion seems right... or is it wrong? Those three things may be our refuge for the next few years. Never in humanity’s wildest imagination did we picture a robot teaching in a kindergarten, becoming a famous contemporary artist like Yayoi Kusama, or working as a psychologist listening to all of humanity’s troubles in counseling sessions. But what about the years after that? There is only one answer: we do not know. Yes, we do not really know where this technological transition will carry us. As for the field of creativity? I think I have found the warning signs. Tips for Successfully Killing an Artist the sun rays struck my face warm tingles to my fingertips the light showed me a path i should walk down i spoke and the whispers of the breeze told me to close my eyes i lost my way in a paradise The verse above was not created by celebrated poets like Rupi Kaur or Robert M Drake, but by an artificial intelligence developed by researchers from Microsoft and Kyoto University. This is real! You did not misread: this algorithm develops and writes poems from the photographs and photo descriptions it is given. Look at how artificial intelligence is beginning to find and fill the niches where only humans could once operate. And it does not end there. Her name is Aiva, an acronym for Artificial Intelligence Virtual Artist.
An artificial intelligence created by Aiva Technologies, a start-up founded in Luxembourg and London. The company built an artificial intelligence system that can compose classical music on its own. And crazier still, it has already released an album! An artificial intelligence that composes its own music and has its own album! Damn! The album, titled Genesis, has been recognized by SACEM (Société des auteurs, compositeurs et éditeurs de musique), the French society of composers and music publishers, and already holds its own copyright. So in the future (if you want to call it that), humans will compete not only with other humans; a new player has arrived. It is made not of flesh and blood, but of metal and algorithms. You may feel perfectly calm knowing this, or you may feel worried; that is up to you. But shouldn’t we start preparing?
Titimangsa Manusia: Bagian I
0
titimangsa-manusia-bagian-i-162aa1fd96af
2018-09-18
2018-09-18 07:33:29
https://medium.com/s/story/titimangsa-manusia-bagian-i-162aa1fd96af
false
1,269
null
null
null
null
null
null
null
null
null
Teknologi
teknologi
Teknologi
559
Realino Marpaung
null
47e82f4d7e8
realinomarpaung
7
1
20,181,104
null
null
null
null
null
null
0
null
0
1448dd8d3d02
2018-01-21
2018-01-21 20:15:10
2018-02-05
2018-02-05 11:13:57
21
false
en
2018-02-13
2018-02-13 08:05:32
6
162ac796c27d
5.690566
37
0
0
Linear Algebra is fundamental in many areas of Machine learning and one of the most important concepts is; Singular Value…
5
Foundations of Machine Learning : Singular Value Decomposition (SVD) Linear algebra is fundamental in many areas of machine learning, and one of the most important concepts is Singular Value Decomposition (SVD). The motivation behind this article is to help software engineers improve their basic understanding of SVD and its real-world applications. Singular Value Decomposition (SVD) is one of the most widely used unsupervised learning algorithms, at the center of many recommendation and dimensionality reduction systems that are the core of global companies such as Google, Netflix, Facebook, and YouTube. Specifically, for this article, we shall be looking at a movie recommendation system. But before that, let’s see how SVD works. In simple terms, SVD is the factorization of a matrix into 3 matrices. So if we have a matrix A, then its SVD is represented by: Where A is an m x n matrix, U is an (m x m) orthogonal matrix, 𝚺 is an (m x n) nonnegative rectangular diagonal matrix, and V is an (n x n) orthogonal matrix. U is also referred to as the left singular vectors, 𝚺 the singular values, and V the right singular vectors. So first let’s see how this comes about, and then we’ll look at an example: Imagine a circle in two dimensions, represented by vectors V1 and V2, undergoing a matrix transformation as illustrated on the cartesian coordinates below: Two Dimensional Circle After Matrix Multiplication An Ellipse From the images above, you can tell that when a matrix multiplies a vector, it simply stretches and then rotates it. So if we generalize this from just two dimensions to n dimensions, the vector space after the multiplication becomes: and we have: representing the space of the individual stretching factors. Therefore from this we can write the equation: which we can write more generally as: Where 𝚺 represents the space of all stretching factors (σ’s).
But for an orthogonal matrix, Also, note that the product of a matrix and its inverse is the identity matrix (an identity matrix is a diagonal matrix with only 1’s on the diagonal). This concept can be represented by the equation below: Combining the above three equations leads us to the Reduced Singular Value Decomposition, where V is a rotation, 𝚺 a stretching, and U another rotation. Also, the columns of U are the principal axes while the entries of 𝚺 are the singular values. So this is how you can decompose a matrix into three lower-rank matrices. Let’s look at a classical application of this. Imagine that we have a matrix A whose columns represent movies and whose rows represent users. The entries of the matrix are numbers 0 to 5, where 0 means a user does not like a certain movie and 5 means they really like it, as illustrated below: Now imagine that the first 3 columns are the movies Avengers, StarWars and IronMan respectively (sci-fi movies), while the last 2 columns are the movies Titanic and Notebook (romance movies). After performing SVD on matrix A we get the matrices U, 𝚺, V as illustrated below (using a tool such as sklearn): Let’s take a closer look at these three matrices, starting with U: The first column of U represents weights that match each user’s preference for movies in the sci-fi category, while the second column of U represents weights that match each user’s preference for movies in the romance category. For example, the first user greatly prefers sci-fi movies (0.13 score) compared to romance (0.02 score). As for the third column, we won’t consider it for now. And for 𝚺, the first diagonal entry represents the weight of the sci-fi category and the second diagonal entry represents the weight of the romance category. And for V, the columns depict the degree to which a movie belongs to a category.
So, for example, we can see from the first column of V that the first movie (this would be Avengers) belongs heavily to the sci-fi category (0.56 score) and very little to the romance category (0.12 score). Note: we have not considered the third dimension of each matrix at all. This is because when you look at matrix 𝚺, the third diagonal entry, which represents the weight of a movie category, has a small value (1.3 score). This is understandable because we only have two categories of movies, so most of the third dimension is considered noise. It is this observation that we use to perform dimensionality reduction on the matrix A. We do this by eliminating the third dimension of 𝚺, which also means eliminating the third column of U and the third row of V, to produce the following new U, 𝚺 and V: So as you can see, the final matrices U, 𝚺, V are smaller than the initial ones, since we have eliminated the third dimension. To confirm that eliminating the given rows and columns as we have done only affects the initial matrix A to a small extent, let’s multiply the above three matrices to get matrix B below: Let’s compare this matrix B with the original matrix A below: Just by looking at the above two matrices, you can tell that the difference between their elements is very small; in other words, the product of our final three matrices B (after SVD) ≈ A (before SVD). Mathematically this can be measured by the Frobenius norm, which is the square root of the sum of the squares of the differences between the individual matrix entries. It can be represented by the equation below: So this is how we are able to decompose a matrix into lower-rank matrices without losing much of the important data. It also helps to analyse and acquire important information concerning the matrix data. There are many other applications of SVD beyond the ones talked about in this article.
Some of the others include data compression and computing the pseudo-inverse, and search engines like Google use SVD to compute approximations of enormous matrices that provide compression ratios of millions to one, so searching for a term is much quicker. Hopefully this gives you a clear picture of this fundamental linear algebra concept and its application in machine learning.
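The rank-2 truncation described above can be sketched with NumPy. The rating matrix here is illustrative (the article’s exact entries appear only in its images), but the mechanics are the same: drop the third and later singular values along with the matching columns of U and rows of V, then reconstruct:

```python
import numpy as np

# Illustrative ratings: rows = users, columns = movies.
# First three columns are sci-fi, last two are romance.
A = np.array([
    [5, 5, 4, 0, 1],
    [4, 5, 5, 0, 0],
    [5, 4, 5, 1, 0],
    [0, 1, 0, 5, 4],
    [0, 0, 1, 4, 5],
], dtype=float)

# Reduced SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2  # keep only the two strongest "category" dimensions
B = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Frobenius norm of A - B measures how little was lost
err = np.linalg.norm(A - B, ord="fro")
print(err)
```

A useful sanity check: for an SVD truncation, this error equals the square root of the sum of the squared discarded singular values, which is why dropping only small singular values barely changes the matrix.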
Foundations of Machine Learning : Singular Value Decomposition (SVD)
123
foundations-of-machine-learning-singular-value-decomposition-svd-162ac796c27d
2018-05-19
2018-05-19 11:13:49
https://medium.com/s/story/foundations-of-machine-learning-singular-value-decomposition-svd-162ac796c27d
false
1,031
A pool of thoughts from the brilliant people at Andela
null
null
null
The Andela Way
null
the-andela-way
INSPIRATION,ANDELA,WEB DEVELOPMENT,LEARNING TO CODE,PROGRAMMING
null
Machine Learning
machine-learning
Machine Learning
51,320
Patrick Luboobi
null
7e7b1ac45b8
patricklu2010
12
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-03
2018-04-03 08:25:04
2018-04-03
2018-04-03 08:27:10
2
false
en
2018-04-20
2018-04-20 10:31:50
5
162c27296e2f
1.062579
0
0
0
Technical Development.
4
AICHAIN Weekly Report ——Week of 2018.04.01 Technical Development We integrated the mining functions into the AICHAIN node program and ran internal testing; the mining function will be kept when it is released. After consulting with third-party project partners, we decided to integrate the lyra2DC mining algorithm on top of Ethereum, using smart contracts on Ethereum for rapid development. We then performed parallel operations based on the Bitcoin-based subchains. We built the Ethereum compiler platform to port lyra2DC to the Go language environment, and will submit the base code next week. Third-party Applications We added an AIT exchange rate interface to the AIT token payment platform, switched the AIT address of the online environment to the formal contract address, and completed the certificate and LVS configuration in the production environment. We integrated and tested with the iOS and Android clients of Easy Live to implement three functions: recharge, conversion and withdrawal. The development has now been completed and tested internally. Connect with us !! Website Twitter Telegram Facebook Instagram
AICHAIN Weekly Report ——Week of 2018.04.01
0
weekly-report-of-aichain-3-26-4-1-162c27296e2f
2018-04-20
2018-04-20 10:31:50
https://medium.com/s/story/weekly-report-of-aichain-3-26-4-1-162c27296e2f
false
180
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
AICHAIN
null
88561e3ff1eb
AICHAIN1
47
26
20,181,104
null
null
null
null
null
null
0
null
0
98e37200303a
2017-11-26
2017-11-26 17:04:07
2017-11-26
2017-11-26 17:29:43
0
false
en
2017-11-26
2017-11-26 17:29:43
7
162ee5d20db6
0.456604
0
0
0
This week, we have looked for related works and made some decisions about which way to follow for our project. We’ve decided to use…
1
[Week2 - Where is this, in Ankara?] This week, we reviewed related work and made some decisions about the direction of our project. We’ve decided to use Google’s Street View, since we have found related work in this area that can help us. Here are some papers: Accurate Image Localization Based on Google Maps Street View http://crcv.ucf.edu/papers/eccv2010/Zamir_ECCV_2010.pdf To Know Where We Are: Vision-Based Positioning in Outdoor Environments https://arxiv.org/pdf/1506.05870.pdf Street View Challenge: Identification of Commercial Entities in Street View Imagery http://www.cs.stanford.edu/~amirz/index_files/Street_View_Challenge.pdf Worldwide Pose Estimation using 3D Point Clouds http://landmark.cs.cornell.edu/docs/global_pose.pdf Also, we’ve found course and lesson webpages which can be very helpful: Generic 3D Representation via Pose Estimation and Matching http://3drepresentation.stanford.edu/ CS231n: Convolutional Neural Networks for Visual Recognition https://cs231n.github.io/
[Week2 - Where is this, in Ankara?]
0
week2-where-is-this-in-ankara-162ee5d20db6
2017-11-26
2017-11-26 17:29:44
https://medium.com/s/story/week2-where-is-this-in-ankara-162ee5d20db6
false
121
Course Projects for Introduction to Machine Learning, an undergraduate class at Hacettepe University — This semester the theme is Machine Learning and The City..
null
null
null
bbm406f17
null
bbm406f17
MACHINE LEARNING
null
Machine Learning
machine-learning
Machine Learning
51,320
Şule Alp
null
c7e0a83d55b8
sulealp
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-26
2018-03-26 16:37:43
2018-03-26
2018-03-26 16:47:48
0
false
en
2018-03-26
2018-03-26 16:47:48
4
1630809f4b1a
3.539623
2
0
0
A little over a decade ago, a futuristic movie was released into theaters. The premise for the hit Minority Report, starring Tom Cruise…
4
Brave New World: How Predictive Analytics Are Shaping the Worlds of Art, Music, and Sports A little over a decade ago, a futuristic movie was released into theaters. The premise of the hit Minority Report, starring Tom Cruise, was that police had tapped into a way to predict and stop crime before criminal acts occurred. It seemed far-fetched at the time, not least because the source of the future knowledge came from three quasi-humans who had visions of future events. But it was a thought-provoking movie nonetheless: I remember walking out of the theater wondering if humans would eventually be able to predict, at least with some consistent accuracy, future events. In recent years, the volume of data collected by tech companies has accelerated exponentially. The vast majority of this data is looked at through a retroactive lens: where were our marketing efforts most successful last year? How many people clicked through that email we sent last month? Where did visitors to our website come from? This is all extremely valuable information to all types of companies. But recently, there's been an interesting trend as companies look to their data not to learn how people acted in the past, but rather how consumers will behave in the future. Here are a few ways that predictive analytics could shape a range of fields in the coming years. Investments Investing in stocks and mutual funds can be scary enough for most of us. But even more daunting is exploring investments in other, less familiar fields, such as early-stage companies or art. For the novice, even if you want to diversify your investments and go into these different fields, it can be so intimidating that you might never actually buy a property or piece of art and try to turn a long-term profit. But several companies have studied the characteristics of art pieces that make them most likely to appreciate in value. 
Arthena, a New York-based startup, is using artificial intelligence and collecting data points on pieces of art, such as “prices at public auctions, the number of gallery or museum exhibits an artist has had, how often an artist’s name comes up in databases or is mentioned on social media and works collectors already own of a given artist,” according to Bloomberg. “What you see with a lot of innovation happening in art and technology and finance, is that it’s the same repackaged product continued to be sold to the same pool,” co-founder Michael D’Angelo told Bloomberg. “What we are asking is, what can we do to make that happen?” The Music Industry Music has long been seen as more of an art than a science. But that perception has been changing in recent years, thanks to companies like Shazam and Pandora. These companies have extracted enormous amounts of data from songs: every note, the instruments, the tempo, vocal characteristics, and more. Pandora, through its famous genome project, built a business on an algorithm that predicts what types of music certain listeners might like based on their past preferences. Nowadays, companies are taking it a step further, leveraging data to predict which songs millions of listeners will like. Hitwizard, an Amsterdam-based startup, has worked to predict which songs will become popular with a model that, according to The Next Web, “takes into account the various sound parameters of a song (like BPM, valence, tempo) and compares them against airplay data sourced from Dutch radio stations and the local Spotify charts.” The results are startling: Hitwizard can predict with 66 percent accuracy whether a song will be a smash hit, and is 93 percent accurate in predicting that a song will not be popular. 
Sports NFL teams have long collected data on what types of plays their opponents like to run in certain situations — whether they’re on the right side of the field, the left, close to the end zone, have over 10 yards to go for a first down, etc. But soon football will undergo a data revolution that will collect data not just on teams, but on individual players. New tools will empower coaches to predict what opponents will do before they actually do it. “You can build a predictive model that can analyze, based on personnel packages, time left in the game, field position, down and distance, what they’re going to do,” said Ray Hensberger, director of Sports Analytics at Booz Allen Hamilton. In fact, the revolution is already underway. Any fan who watched this year’s playoffs heard the term “run-pass option” (RPO), where players make a decision after the snap as to whether they will run or pass. This type of play took hold as teams tried to counter the analytics and become less predictable by incorporating option plays, in which the offense doesn’t decide where the ball will go until mid-play. Furthermore, wearable devices can now help collect data on all 22 players and their coordinates every second of every game, down to the inch. This will help coaches understand which players are fatigued, which have exerted themselves the most and covered the most ground, and which will perform best in given game situations. We may never get to the point where everything can be predicted, as in Minority Report. But honestly, what would be the fun in that? Perhaps the most accurate prediction any of us can make right now is that predictive analytics is here to stay and will be a game-changer for companies across industries.
Brave New World: How Predictive Analytics Are Shaping the Worlds of Art, Music, and Sports
51
brave-new-world-how-predictive-analytics-are-shaping-the-worlds-of-art-music-and-sports-1630809f4b1a
2018-03-27
2018-03-27 17:16:01
https://medium.com/s/story/brave-new-world-how-predictive-analytics-are-shaping-the-worlds-of-art-music-and-sports-1630809f4b1a
false
938
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Janet Comenos
CEO and Co-Founder of Spotted, the world’s leading celebrity data and research company
dd669207a0f7
jcomenos
44
22
20,181,104
null
null
null
null
null
null
0
null
0
d211c0ef4acd
2018-08-18
2018-08-18 12:05:38
2018-08-20
2018-08-20 11:42:56
1
false
en
2018-08-20
2018-08-20 11:42:56
16
1630c141305a
1.516981
0
0
0
The Edge is a daily round-up of the most important, or at least the most interesting, reads in technology policy.
5
The Edge | 08/20/18 The Edge is a daily round-up of the most important, or at least the most interesting, reads in technology policy. The Cyber Court Rules that Public Utility Smart Meter is a ‘Search’ China aims to narrow cyberwarfare gap with US | New DOD report on military developments in China. Opening a File Whose Hash Matched Known Child Pornography Is Not a ‘Search,’ Fifth Circuit Rules (dis)information HUD Files Complaint Alleging Facebook Ad Tools Allow Housing Discrimination China “We urgently need more transparency, a seat at the table, and a commitment to clear and open processes,” said the letter, which has been making the rounds on Google’s internal communication systems. “Google employees need to know what we’re building.” 1,400 Google employees are demanding transparency around the company’s return to China After employee revolt, Google says it is “not close” to launching search in China In January, when he addressed the nation on television, the bookshelves on either side of him contained both classic titles such as Das Kapital and a few new additions, including two books about artificial intelligence: Pedro Domingos’s The Master Algorithm and Brett King’s Augmented: Life in the Smart Lane. “No government has a more ambitious and far-reaching plan to harness the power of data to change the way it governs than the Chinese government.” Who Needs Democracy when you have Data? | An exploration of China’s use of AI AI Artificial Intelligence Still isn’t all that Smart | Noah Smith calls for moderation of expectations in Bloomberg Opinion. In an age of mass school shootings and increased student suicides, SMPs can play a vital role in preventing harm before it happens. Each of these companies has case studies where an intercepted message helped save lives. But the software also raises ethical concerns about the line between protecting students’ safety and protecting their privacy. 
Schools are Using AIs to Track what Students Write on their Computers Transportation Electric Scooters in New York City? They Just Might Work Follow me on Twitter. Like the Edge? Subscribe to MetaPolicy and never miss an update.
The Edge | 08/20/18
0
the-edge-08-20-18-1630c141305a
2018-08-20
2018-08-20 20:08:35
https://medium.com/s/story/the-edge-08-20-18-1630c141305a
false
349
An Exploration of Public Policy and Emerging Technologies
null
null
null
MetaPolicy
ryan.mail.email@gmail.com
metapolicy
POLICY,PUBLIC POLICY,EMERGING TECHNOLOGY,TECH POLICY,TECHNOLOGY TRENDS
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ryan Williams
Antidisciplinarian. Studies Global Policy at the LBJ School of Public Affairs.
8fd521a02506
ryan_t_w
23
345
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-19
2018-09-19 12:35:12
2018-09-19
2018-09-19 13:16:29
0
false
en
2018-09-19
2018-09-19 13:25:07
10
1630c4e1cd52
2.335849
1
0
0
Assorted recent talks on economics, development, AI and future of jobs.
5
Recent Economics and Development Talks Assorted recent talks on economics, development, AI and the future of jobs. WIDER Annual Lecture 22 by Ernest Aryeetey #ThinkDevelopmentThinkWIDER #DevEcon #globaldev 2. Responding to the Global Financial Crisis Day 1: Responding to the Global Financial Crisis Day 2: Responding to the Global Financial Crisis When I first became chairman of the Federal Reserve in 2006, literally one of the first things I did was ask the staff to give me the handbook on what you do in the case of a financial crisis. They provided me a little notebook and the notebook was typed on a manual typewriter in mimeograph and had about four pages in it. It said open the discount window and that was about it. — Ben Bernanke Yeah, I think there are some folks who don’t like QE and as each argument fails, they move down the ladder. And so now you have hedge fund managers writing in the Wall Street Journal how QE’s creating inequality, as if they cared. — Ben Bernanke #FinCrisisLessons 3. You and AI — the future of work by Professor Joseph E Stiglitz 4. How Did China Succeed? | Joseph E. Stiglitz 5. Can the Free Market End Global Poverty? Nobel Laureate Joseph Stiglitz vs. NYU’s William Easterly 6. An Economist in the Real World | Kaushik Basu 7. Ray Dalio’s Lessons From The Financial Crisis 8. The IMF Managing Director and the UN Deputy Secretary-General 9. ASEAN Priorities in the Age of the Fourth Industrial Revolution 10. A Global Conversation on Artificial Intelligence 11. Weathering a Trade War 12. Peter Diamandis | The Future Is Faster Than You Think 13. The Helen Alexander Lecture: The Case for the Sustainable Development Goals, Christine Lagarde Let me now turn to the fourth and final SDG pillar — good governance. In a real sense, governance is the foundation upon which everything else is built. If institutions are weak, the odds of SDG success are severely handicapped. 
This is why the SDGs call for “effective, accountable and inclusive institutions at all levels.” This applies across the board — public sector and private sector, domestically and globally. It applies to both donors and recipients of official aid — to make sure that aid is delivered effectively and transparently, reaching the people who actually need it, without waste, diversion, or duplication. It applies to private corporations and to state-owned enterprises — to make sure that their investments take place transparently, on a level playing field, benefitting the citizens of the country. Let me say a few words about corruption, a true economic and social plague. By undermining trust and delegitimizing institutions, corruption makes it hard for countries to take the collective decisions needed to advance the common good. Think about it. If some do not pay their fair share of taxes, governments cannot raise the revenue needed for SDG priorities. Even worse, the legitimacy of the whole system is undermined. At the same time, if corruption is rampant, governments might be tempted to spend money on projects that generate kickbacks but little social value — again, undermining the SDG agenda. This is just the public sector. We also need the private sector to invest in long-term, sustainable projects that support the SDGs. But they are unlikely to do so if forced to pay a “corruption tax.” The genuine risks and uncertainty that come with any investment decision will surely be magnified by corruption. The private sector is not always the innocent victim, of course. Corporations and investors are sometimes too willing to offer bribes. Financial sectors are sometimes too willing to accept dirty money. Unsurprisingly, IMF research has found that corruption and weak governance is associated with lower growth, investment, and tax revenue collection — and with high inequality and social exclusion.
Recent Economics and Development Talks
1
recent-economics-and-development-talks-1630c4e1cd52
2018-09-19
2018-09-19 13:25:07
https://medium.com/s/story/recent-economics-and-development-talks-1630c4e1cd52
false
619
null
null
null
null
null
null
null
null
null
Development And Growth
development-and-growth
Development And Growth
329
Ismail Ali Manik
Uni. of Adelaide & Columbia Uni NY alum; World Bank, PFM, Global Development, Public Policy, Education, Economics, book-reviews, MindMaps, @iamaniku
6a8552d04dc7
ismailalimanik
123
740
20,181,104
null
null
null
null
null
null
0
null
0
a72e580f87ae
2018-06-06
2018-06-06 00:11:46
2018-06-06
2018-06-06 00:15:17
5
false
en
2018-06-06
2018-06-06 00:15:17
4
1630d15b64b2
1.150314
0
0
0
We are starting our series of short videos about the specialists making HyperQuant vision come true. These people are behind our…
5
Meet the Team: Paul Rogov We are starting our series of short videos about the specialists making HyperQuant vision come true. These people are behind our revolutionary platform for automated crypto-trading, asset management, and dApps creation. This time meet and greet Paul Rogov, founder & managing director of HyperQuant. Paul has extensive experience in various industries and business areas. He was a growth hacker in technological start-ups and managed international projects in over 70 countries for large IT corporations. HyperQuant Social Media
Meet the Team: Paul Rogov
0
meet-the-team-paul-rogov-1630d15b64b2
2018-06-06
2018-06-06 00:15:35
https://medium.com/s/story/meet-the-team-paul-rogov-1630d15b64b2
false
84
Automatic Trading Revolution: https://hyperquant.net/
null
hyperquant.net
null
hyperquant
info@hyperquant.net
hyperquant
null
HyperQuant_net
Blockchain
blockchain
Blockchain
265,164
HyperQuant
Automatic Trading Revolution https://hyperquant.net/
da4c15da74be
hyperquant
185
87
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-08
2017-11-08 19:14:14
2017-11-17
2017-11-17 04:59:14
9
false
en
2017-11-17
2017-11-17 04:59:14
0
1631497a5c14
6.935849
1
0
0
I built a model designed to ensemble machine learning models in order to predict employee access needs for Amazon employees. The data came…
1
Using tree-based ensemble methods to predict Amazon employee access I built a model designed to ensemble machine learning models in order to predict employee access needs for Amazon employees. The data came from a Kaggle competition that was posted four years ago, and was divided into labeled training data and unlabeled test data. The goal was to fit a model on the training data and use it to predict whether the employees in the test data set should be granted access to the corresponding resource. Initial exploratory analysis I began by reading in the training and test data and getting a feel for what the data looked like: As a first attempt at building a machine learning model for my data, I simply fit a standard RandomForestClassifier from the sklearn package. I then applied it to the unlabeled test data and predicted the probabilities of whether each employee should be granted access to the corresponding resource. Next, I submitted my first attempt to Kaggle: Not bad! I received an accuracy score of just over 0.8 by fitting a simple random forest model to my data, which required barely any work at all. One thing that is slightly worrying is that my cross-validation score was 0.945, indicating that I might be overfitting the data (since the score is so much higher on the training data than on the test data). The natural next step is to refine the model and try to improve performance (and reduce overfitting) using the following approaches: Feature engineering (e.g. categorical encoding, eliminating unimportant features, creating hybrid features) Ensemble methods (e.g. 
bagging, gradient boosting, random forests) Model selection and improvement Hyperparameter optimization (RandomizedSearchCV) Specifically, the model needs to meet the following criteria: Test AUC is at least 0.8 3 different feature engineering approaches were tried/explored Hyperparameter search was used correctly At least 2 tree based models were used At least 1 non-tree based model was used One meta-ensembling method for aggregating the different models was used Preprocessing I used the feature_importances_ attribute of the sklearn random forest classifier in order to rate the relative importances of the features in the dataset: Based on these relative importance rankings, I decided to only keep my five most important features (based on the RF classifier) to reduce the complexity of my model. I wanted to try out some feature engineering, so I decided to create a feature corresponding to how many employees each manager managed, thinking that perhaps this might have some effect on how many access authorizations they approved. Next, I decided to one hot encode each of my features using the sklearn OneHotEncoder. After I successfully one hot encoded each of my features, I went from five to 14,813 effective features. However, I realized that the huge number of features was a problem because when I tried fitting one of my classifiers to the data it was taking over a half hour to run. I decided to scale back my one hot encoding, only encoding those features which had less than 1000 unique values; this resulted in a model with 630 features, which I assumed would be much more computationally efficient. Model Selection and Tuning Finally, I had reached the fun part of the process: deciding what types of algorithms to use in order to build a model with high predictive power. 
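The preprocessing described above (ranking feature importances with a random forest, then one-hot encoding only the low-cardinality columns) might look like the following sketch. The column names and toy data here are hypothetical stand-ins, not the actual Kaggle fields:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

# Hypothetical data: X holds categorical ID columns, y the access label.
X = pd.DataFrame({
    "RESOURCE":  [101, 102, 101, 103],
    "MGR_ID":    [7, 7, 8, 9],
    "ROLE_CODE": [1, 2, 1, 3],
})
y = [1, 0, 1, 1]

# Rank features by importance with a quick random forest fit.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1])
print(ranked)

# One-hot encode only columns with fewer than 1000 unique values,
# to keep the feature count (and therefore fit time) manageable.
low_card = [c for c in X.columns if X[c].nunique() < 1000]
encoded = OneHotEncoder(handle_unknown="ignore").fit_transform(X[low_card])
print(encoded.shape)
```

On the real data this cardinality threshold is what brought the feature count down from tens of thousands to a few hundred.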
Based on a prior lab I had done, combined with reading about pros and cons of different algorithms and what other Kaggle competitors had tried, I decided to use three distinct models: a random forest classifier, a logistic regression classifier, and a gradient boosting classifier. My plan was to first fit each of these models to the data, tune the parameters of each so that they fit the data well without overfitting, and then use a meta-ensembling method to aggregate the predictions of each of the models. First, I needed to figure out a way to efficiently tune the parameters of each of the models. I decided to first fit the data using each of the models individually, and play around with the parameters to get a feel for an appropriate range for the values they should take on. Because the dataset is relatively large, I decided to use a randomized search algorithm to select my hyperparameters for each model (as opposed to grid search, which would be much slower). The randomized search algorithm works by taking in a list of parameters to optimize, a distribution of values to uniformly sample from, and a number of iterations, and returns the best estimator it finds (based on the cross-validation score). Here was the output for my random forest classifier based on my random search: I repeated this process for the logistic regression classifier and the gradient boosted classifier, yielding the following outputs respectively: In each of the models, I identified which parameters corresponded to “knobs” that would affect the degree of over/under fitting in each of my models (e.g. the value of ‘C’ in the logistic regression model). I took care to make sure many possible values of these regularization parameters were searched in order to avoid under/over fitting on the test data as much as possible given my computational resources. 
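A minimal sketch of the randomized hyperparameter search described above, using scikit-learn's RandomizedSearchCV on synthetic data. The parameter ranges are illustrative, not the ones used in the project:

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Sample hyperparameters from distributions instead of an exhaustive grid:
# much faster on a large dataset, at the cost of possibly missing the optimum.
param_dist = {
    "n_estimators": randint(50, 200),
    "max_depth": randint(2, 10),
    "min_samples_leaf": randint(1, 10),
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=10,           # number of random parameter combinations to try
    scoring="roc_auc",   # optimize the same metric Kaggle scores on
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the logistic regression and gradient boosting models, with their own parameter distributions.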
Finally, I was able to use a voting classifier to aggregate my three models, using soft voting to determine the majority vote for each individual class prediction. All that was left to do was check how well my model performed on the Kaggle test data. However, I was disappointed, as my score was barely above 0.7 (much worse than my RF classifier alone)! I thought about it for a while and realized that I was likely losing important information by deleting the four least important features according to my RF classifier. Although they were the weakest, their omission from the model was likely the source of the lost prediction accuracy. I went back and kept all 9 features, one hot encoded each feature with fewer than 1000 unique values, giving me a total of 16,912 features. I re-ran everything, submitted to Kaggle, and was thrilled to see my score had improved to 0.879! Visualizations of original and final ROC curves Many would argue that the “gold standard” for predicting model performance is the AUC-ROC metric, or the area under the ROC curve. The ROC curve plots the true positive rate against the false positive rate, and for a perfectly performing model the area under this curve would be 1. Since we don’t have the true labels for the test data, I decided to simulate this metric using the training data by first splitting it randomly into artificial training and test sets (I chose an 80/20 split). I then fit my model on the artificial training data and evaluated it on the unseen “test” data, resulting in the following ROC curve: Unsurprisingly, the AUC was very close to my Kaggle score. What about for my first model, the random forest classifier without any parameter tuning? Here, the AUC was a bit higher than my Kaggle score (which was very close to 0.8), but not by much. 
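The soft-voting ensemble and the artificial 80/20 train/test AUC evaluation described above can be sketched as follows, on synthetic data and with untuned base models for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# Simulate the Kaggle setup: hold out 20% as an artificial test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Soft voting averages each model's predicted probabilities
# rather than taking a hard majority of class labels.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
probs = ensemble.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, probs)
print("AUC:", auc)
```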
The first curve is much smoother, indicating that the predictions are more precise, while the second curve looks like a piecewise linear function, indicating that it is probably missing some of the relationships in the data that are captured by my final predictive model. Discussion From an analytic perspective, why did my meta-ensembling method work better than the simple random forest classifier? The first and most obvious reason is that I didn’t do any hyperparameter optimization on the random forest; I just ran it with the default parameters and hoped for the best. The other, more interesting reason that ensemble methods tend to work better is related to model variance. Any model will have a certain sensitivity to the data, which can be mathematically expressed as the variance of the model. By using a voting classifier to effectively aggregate three different models, the overall variance is reduced, since each model is sensitive to the data in different ways. Using the majority vote to determine the output class probabilities effectively smooths the unwanted sensitivity to the data and provides a better estimator (and therefore a better estimate when applied to the unseen test data). In conclusion… I learned a lot working through this assignment, not just about tree-based models or ensembling but about how to approach machine learning problems in general. Based on the clear performance differences between my original model and my final model, as evidenced above by the ROC plots, I realized how powerful pre-processing and ensembling methods can be when tackling these types of challenges. In the future, I would like to explore other classification models such as SVC and AdaBoost, and if I had more time to adequately evaluate the effectiveness of each individual model, I think I could optimize my voting classifier further (using the weighting option) and get a better score. 
I would also try using randomized search to play around with the voting classifier parameters a bit more (e.g. trying hard vs. soft voting, etc.).
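The variance-reduction argument from the discussion can be illustrated numerically: averaging several estimators whose errors are independent divides the variance of the combined prediction by the number of models. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
true_prob = 0.7  # the underlying quantity each "model" tries to estimate

# Three models whose predictions are the true value plus independent noise.
preds = true_prob + rng.normal(0.0, 0.1, size=(3, 10_000))

# Averaging independent estimators divides the variance by the model count.
single_var = preds[0].var()
avg_var = preds.mean(axis=0).var()
print(single_var, avg_var)  # the averaged predictions vary roughly 3x less
```

Real models are never fully independent, so the reduction in practice is smaller, but the direction of the effect is the same.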
Using tree-based ensemble methods to predict Amazon employee access
1
using-tree-based-ensemble-methods-to-predict-amazon-employee-access-1631497a5c14
2018-04-06
2018-04-06 18:47:30
https://medium.com/s/story/using-tree-based-ensemble-methods-to-predict-amazon-employee-access-1631497a5c14
false
1,520
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Samuel McCormick
null
79e78caabb24
samuel_mccormick
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 12:16:19
2017-09-13
2017-09-13 12:24:37
1
false
en
2017-09-13
2017-09-13 13:02:37
1
16319ad0dcbd
0.445283
0
0
0
After months of developing, testing and refining our SME risk model, we are proud to launch Open Risk Exchange (“ORX”).
5
Open Risk Exchange After months of developing, testing and refining our SME risk model, we are proud to launch Open Risk Exchange (“ORX”). ORX is the first online portal offering free credit risk scores for every registered business in the U.K., outperforming the accuracy of some of the leading commercial credit bureau scorecards. Check the risk of a company.
Open Risk Exchange
0
open-risk-exchange-16319ad0dcbd
2018-02-18
2018-02-18 01:51:03
https://medium.com/s/story/open-risk-exchange-16319ad0dcbd
false
65
null
null
null
null
null
null
null
null
null
Insurtech
insurtech
Insurtech
2,212
Open Risk Exchange
Free credit risk scores for businesses in the United Kingdom.
35b62d9a69f3
ORX
4
176
20,181,104
null
null
null
null
null
null
0
null
0
8625dfd4fef0
2018-01-13
2018-01-13 15:23:22
2018-01-13
2018-01-13 15:26:58
1
false
en
2018-01-14
2018-01-14 08:46:16
3
163217e3d487
2.241509
4
0
0
12 Jan 2018
5
Viola.AI Weekly Updates #4 12 Jan 2018 Happy Friday to our Telegram supporters and users! Greetings from Cambodia! I am here for a business forum for a couple of days and I am super excited to give everyone an update on Viola.AI and what we have been up to in the past week! This week has been really fruitful for our team and we can’t wait to share this news with you. Firstly, to all our supporters, THANK YOU! We ran a 60% bonus on the first 1 million tokens to celebrate the New Year and it was snapped up in just a few days! Thank you for your faith and confidence in our project! We are at the halfway point of our pre-sale, which will end on 31 Jan 2018, so don’t miss out on the chance to get 50% bonus tokens! Some quick announcements: 1) Updates on Our Public Sale. Our Public Sale starts on 14 March 2018, a total of 7.5 million VIOLA tokens are available for Pre-Sale (excluding bonus!), the VIOLA bonus token lockdown has been reduced from 6 months to 60 days, and we give you the BEST exchange rate! Read our official update on our blog at http://bit.ly/2CV6kEs. 2) 7 More Serial Entrepreneurs, Blockchain and Crypto Experts Join Viola.AI Advisory Board! They are the latest addition to our initial 6 advisors, who have been instrumental in our growth so far! This news is a real testament to the importance and value that Viola.AI brings in positively impacting billions of people worldwide! We’re excited to work with them and bring our project to the next level! Read the full information here: https://medium.com/viola-ai/7-more-serial-entrepreneurs-blockchain-and-crypto-experts-join-viola-ai-advisory-board-1f034c180215 3) We’ve gotten feedback from our community that our Bounty Program may not be as easy to understand as we would like it to be. With that, our team has been working hard to make it better and give more attractive rewards to our Transformers. 
More details on our Bounty Program 2.0 here: http://bit.ly/viola-ai-proof-of-love 4) Yesterday, we had our first meet-up event in Singapore with SGInnovate, thanks to one of our advisors, Kenneth, who organized the event! Over 80 people turned up to listen to Viola.AI’s co-founder Jamie share more about the project. We will share our highlight video soon! 5) Earlier this week, our team also shot an exciting video of us sharing why we decided to build Viola.AI, why we decided to use blockchain technology, and how we implemented the A.I. component. We hope we can let you see it soon too! 6) And last but not least, our token smart contract has been deployed! 7) Conferences & Roadshows: Jamie and I will be traveling to Dubai this Saturday to attend the Unlock Blockchain Economic Forum, happening this Sunday and Monday! Is anyone from Dubai, or will any of you be at the event as well? We hope we can meet some of you! Once again, thank you so much for your support! Our amazing Viola.AI team is always here to answer any questions, so don’t hesitate to ask! Have a great weekend ahead! Jamie & Violet
Viola.AI Weekly Updates #4
46
viola-ai-weekly-updates-4-163217e3d487
2018-02-09
2018-02-09 16:33:14
https://medium.com/s/story/viola-ai-weekly-updates-4-163217e3d487
false
541
Viola.AI - The First Blockchain-Powered Relationship Registry (REL-Registry) & Lifelong AI Love Advisor, Restoring Trust in the USD800 Billion Love Industry
null
viola.ai.world
null
Viola.AI
info@viola.ai
viola-ai
ICO,BLOCKCHAIN,VIOLA,ETHEREUM,BITCOIN
viola_ai_
Blockchain
blockchain
Blockchain
265,164
Christina Thung
Head of PR | Marketing Communications | Viola.AI | Netflix, films and music enthusiast | Travel junkie | christina@viola.ai
f4e5dbcc7b05
xteena21
120
42
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-12
2018-07-12 08:26:09
2018-07-12
2018-07-12 08:59:33
1
false
en
2018-07-12
2018-07-12 08:59:33
1
16322a5b5ed1
0.516981
1
0
0
A flexible portfolio screening tool built on a series of multi-level user-defined parameter sets for historical scoring and portfolio…
1
G&S Quotient- Portfolio Screener And Manager A flexible portfolio screening tool built on a series of multi-level, user-defined parameter sets for historical scoring and portfolio screening. It lets users test the efficacy of parameter values in portfolio screening to achieve better portfolio performance, and save tested parameter values for routine use. Kaushik Basumallick, CEO, Co-Founder & Chief Product Architect. Mr. Basumallick co-founded G&S Quotient. He conceived, designed and led G&SQ™ product development. For more information, visit: http://www.gnsquotient.com/website/gsq-portfolio-screener.aspx
G&S Quotient- Portfolio Screener And Manager
1
g-s-quotient-portfolio-screener-and-manager-16322a5b5ed1
2018-07-12
2018-07-12 08:59:33
https://medium.com/s/story/g-s-quotient-portfolio-screener-and-manager-16322a5b5ed1
false
84
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
riya dutta
null
97151e8785d5
rm30.riya
1
1
20,181,104
null
null
null
null
null
null
0
null
0
9764d8fd35b3
2017-11-02
2017-11-02 14:11:16
2017-11-07
2017-11-07 12:55:07
5
false
en
2017-11-07
2017-11-07 18:51:37
1
16332b766abd
3.942767
6
0
0
Written by Fiona Chung
5
Can Machine Learning Improve Your Next Holiday? Written by Fiona Chung The Myplanet Concepts team focuses on applying emerging technologies to real-world situations. For our latest project, we applied visual recognition tools to the travel sector. Our aim was to explore ways to ease travel planning stress using AI — in this instance, a recommendation engine for vacation activities. Challenge Most avid travellers will tell you the joy of travel is not in the planning of a trip, but rather in the trip itself. Searching for the best flight prices, gathering activity information from different sources, and booking itineraries to match specific interests is a time-consuming, often tedious, task. And as people hopscotch through various sites, travel companies lose out on customers. Users scramble to find relevant information from any source available, instead of finding the one applicable to them. We wanted to figure out how travel marketers might better match travel itineraries to customers. Could we solve this potential lost revenue stream? And was there a way for machine learning to help us get a clearer grasp of their interests? Concept Overview Indulging in local delicacies? Communing with nature? Rocking out to a favourite band? The things we share regularly on social media say a lot about our preferences. As such, the daily life of an individual can be a great predictor of activities they might enjoy on a vacation. Based on this, we prototyped a system that seeks to understand the travel interests of customers, analyzing their social profile and making relevant suggestions for vacation activities and tours. Key Workflow To make the platform work, we began by creating archetypal users using data from Instagram. These archetypes became the foundation of the travel profiler’s training data. Classifications were then established via IBM’s visual recognition technology. 
Our aim was to help the system understand what a travel persona consisted of (such as a passionate foodie or an enthusiastic nature lover). Once the archetypes were in place, we were able to experiment with real-world users. For a new user, the first step is to set up basic parameters about their travel preferences, such as budget and pace. The user's Instagram account is then analyzed via the IBM visual recognition service and compared to the initial training set. Finally, based on the priority level assigned to the categories, the travel profiler predicts the user's interests and makes recommendations. Flexibility in travel planning is key, so we built in the option for users to adjust the settings as they go. This allows a user to further refine their preferences and influence the recommendations. Beyond Convenience For the consumer, the time-saving benefits of an AI-powered travel planner are clear. But there is also a clear business opportunity for travel marketers and suppliers. A preference-driven system like the travel profiler is an ideal learning ground for marketers. It can allow them to optimize their offerings by more accurately matching trip itineraries to traveller preferences. Increased Personalization With enough aggregated data, the system can get smarter over time and make more nuanced recommendations. For instance, users with concerts documented on Instagram may start out with a basic “music lover” profile. But as it learns, the system may move from general festival or concert recommendations to more genre- or artist-specific events. More Accurate Predictions Having a massive data set to work with also means we can gather insights, such as optimal activity combinations for specific personas. The system could become more sophisticated at activity pairing. This could be especially useful for non-obvious connections. 
For example, the system may connect culinary enthusiasts with adrenaline-filled outdoor activities, due to their naturally adventurous spirit. This type of insight could inform future system predictions on what may explicitly or implicitly appeal to a specific travel persona. Highlighting Unexpected Correlations An algorithmically-driven system could also reveal insights on market demands that might be missed by humans. A surge in visitors to Rome for sport-centred activities may signal the surprise emergence of a new mecca for sport lovers, for example. This type of information is especially relevant for travel companies, who could use it to adjust or build upon their activity offerings. Travel planning continues to be a pain point for users, so it’s easy to see the benefits of an AI-powered travel profiler. And for an online travel agency or direct marketing site, more accurate recommendations and a reduction in workload can create major opportunities. AI solutions like the travel profiler have the potential to generate higher customer conversions and a lower rate of cross shopping. Like this post? Be sure to 👏 and share the post. Interested in the other innovative work we have been doing with Watson? Fascinated by what the shifting landscape of big data can do? You can reach us here to find out more about how we can apply the latest in smart tech to improve your business.
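The matching step the article describes (classify a user's photos, compare the resulting labels against persona definitions, rank personas) can be sketched in a few lines. This is a toy illustration, not Myplanet's implementation: the persona keyword sets and the tag list are invented for the example, and in the real system the labels would come from the IBM visual recognition service rather than being hard-coded.

```python
from collections import Counter

# Hypothetical persona definitions; the real system derives these from
# visual-recognition labels on archetype Instagram accounts.
PERSONAS = {
    "foodie":       {"pasta", "restaurant", "dessert", "coffee"},
    "nature_lover": {"mountain", "forest", "lake", "hiking"},
    "music_fan":    {"concert", "stage", "guitar", "festival"},
}

def profile(image_tags):
    """Score each persona by how many of the user's image tags it matches."""
    scores = Counter()
    for tag in image_tags:
        for persona, keywords in PERSONAS.items():
            if tag in keywords:
                scores[persona] += 1
    return scores.most_common()

tags = ["pasta", "coffee", "lake", "dessert"]  # labels from photo analysis
print(profile(tags))  # "foodie" ranks first with 3 matching tags
```

A production version would weight recent photos more heavily and blend in the user's explicit budget and pace settings, but the ranking idea is the same.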
Can Machine Learning Improve Your Next Holiday?
75
can-machine-learning-improve-your-next-holiday-16332b766abd
2018-02-18
2018-02-18 02:56:33
https://medium.com/s/story/can-machine-learning-improve-your-next-holiday-16332b766abd
false
824
A collection of thoughts and stories from the awesome people at Myplanet
null
myplanetHQ
null
Myplanet Musings
leigh.b@myplanet.com
myplanet-musings
DESIGN,SOFTWARE DEVELOPMENT,TECHNOLOGY,ENTERPRISE TECHNOLOGY,ENTERPRISE SOFTWARE
myplanet
Machine Learning
machine-learning
Machine Learning
51,320
Myplanet
We're a software studio. We make smarter interfaces for the workplace.
c11688ad85c0
myplanet
2,027
1,443
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-28
2017-11-28 19:00:18
2017-11-28
2017-11-28 19:46:53
2
false
en
2017-12-05
2017-12-05 16:15:29
0
16334d267956
2.33805
1
0
0
Breaking CAPTCHAs with TensorFlow
4
How I started using Machine Learning with TensorFlow Breaking CAPTCHAs with TensorFlow Hi all, my name is Avinash Sripathi and I'm a full-stack developer. Recently I got an assignment from one of my clients who wants to revamp his existing product in terms of performance. The product I would be working on does scheduled web scraping: it has to dump the latest information from a government website, then process the images and detect whether there are any similarities between two images. The major problem is that the website presents a CAPTCHA challenge every time. Previously they used a third-party service to solve the CAPTCHAs while scraping, but the service was painfully slow, taking 20 seconds to solve one CAPTCHA, and its pricing was too high for an average accuracy of 74%. With all of that, the previous implementation of the product was able to scrape only 1,000 applications per day. So my initial challenge was to break the CAPTCHA in less time and with higher accuracy. Initially I tried the Tesseract OCR engine, but because the letters in the CAPTCHA are rotated and filled with heavy noise, its accuracy was too low: I achieved only about 50%. So I gave up on Tesseract and looked for other options. I had been watching all the buzz happening in machine learning, waiting for the moment to apply ML in my own apps. Then I came to know about CNNs (convolutional neural networks); I read about them and concluded that this was exactly what I was looking for. Without wasting a second I started implementing the model with TensorFlow. I started with zero knowledge but was able to crack things one by one, thanks to the TensorFlow community. 
Initially I prepared training and test datasets by manually solving CAPTCHAs and saving them as images on my local system. With three hours of effort, along with the help of my friends, I created a dataset of 5,000 images. I trained the model on my laptop, which took nearly three hours, and got 99.4% accuracy. Once the model was ready, I created an API using Flask, passing the CAPTCHA link to the API. Sample API response. Until then everything was working as expected, but when I hosted it on a cloud VM with 8 cores and 16 GB of RAM, the API did not perform as I expected: accuracy was good, but the time to solve was about 1.5 seconds, while on my laptop it solved a CAPTCHA in 300 ms. The major problem was that we were using CPU only; with a GPU with CUDA support, TensorFlow can execute many times faster. So I planned to use CUDA cores to achieve parallel computing. The very next day I bought an NVIDIA 1060Ti GPU, which comes with 1,280 CUDA cores. I simply removed the tensorflow pip package and installed the tensorflow-gpu package. Now the API is able to solve one CAPTCHA in 2 ms with an average accuracy of 99%, and we can process 8,000–9,000 applications per hour, which means we achieved roughly a 500x improvement in performance.
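For a multi-character CAPTCHA, a common CNN setup is to emit one softmax per character position and decode predictions with an argmax per position. The snippet below sketches only that label encoding/decoding step in NumPy; the character set and CAPTCHA length are assumptions for illustration, since the article does not specify them.

```python
import numpy as np

CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # assumed alphabet
CAPTCHA_LEN = 4                                    # assumed length

def encode_label(text):
    """One-hot encode a CAPTCHA string into shape (CAPTCHA_LEN, len(CHARSET))."""
    out = np.zeros((CAPTCHA_LEN, len(CHARSET)), dtype=np.float32)
    for i, ch in enumerate(text):
        out[i, CHARSET.index(ch)] = 1.0
    return out

def decode_prediction(probs):
    """Decode per-position probabilities back into a string via argmax."""
    return "".join(CHARSET[i] for i in probs.argmax(axis=1))

print(decode_prediction(encode_label("3F7K")))  # round-trips to "3F7K"
```

The network itself would map the image to a `(CAPTCHA_LEN, len(CHARSET))` output and be trained with a per-position cross-entropy loss; `decode_prediction` then turns the model's probabilities into the solved text.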
How I started using Machine Learning with TensorFlow
1
my-first-tensorflow-app-in-production-16334d267956
2018-05-28
2018-05-28 12:11:10
https://medium.com/s/story/my-first-tensorflow-app-in-production-16334d267956
false
518
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Avinash Sripathi
null
6cd4dfb56c07
tech2avinash
2
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-05
2018-03-05 08:10:58
2018-03-05
2018-03-05 08:29:44
9
false
ru
2018-03-05
2018-03-05 12:29:27
3
1633ed5f0078
3.101887
2
0
0
Game designer Tim Soret described the deepfake phenomenon on Twitter: "Something called Deepfakes has appeared on the internet, and it is the most…
5
New AI technology: Hollywood actors with your friends' faces Game designer Tim Soret described the deepfake phenomenon on Twitter: "Something called Deepfakes has appeared on the internet, and it is the most cyberpunk thing imaginable. Machine learning technology replaces the faces of porn actors with the faces of Hollywood stars. Obviously NSFW (don't show this to your wife)." Yes, this is definitely cyberpunk territory. A community appeared on Reddit (it has since been banned) where users created fake celebrity porn videos with an app. The work of the developer going by the handle "deepfakes" sparked heated debate across the web. Media outlets are discussing the legality of such content, other websites are busily removing the videos, and the public is once again asking what the widespread adoption of AI technology might lead to. And while the debate over whether this is good or bad continues, we can think about how to apply deepfakes to our friends (not in the pornographic sense, of course). How it works The learning algorithm reconstructs facial features. A few photographs are enough to produce blurry duplicates. These are not copies: the algorithm studies expressions and gaze, and the face is then reconstructed from that data alone. To simplify: imagine staring at a person's face for 12 hours straight, memorizing its various expressions. Then the person asks you to sketch them on paper, smiling, in tears, and with the other expressions you remember, and you immediately produce a photographic-quality drawing. Remarkable! The system keeps improving, but even now these deepfakes look very impressive. In the FakeApp application, a single encoder is used for all faces, while a separate decoder makes each face different. Moreover, if you let the algorithm study the faces of several people, the result gets even more interesting. Here is how it works: the encoder receives an image of a face and processes it, and the decoder produces the result. The example above shows the face of actress Anne Hathaway. 
If you apply a decoder that was trained to generate a different face, you get something that never existed before: the expressions and mannerisms of Anne Hathaway, but on another person's face. Putting celebrity faces on porn actors is one use of this work, but there are others. You can make your friends the stars of popular films or TV shows. For a more convincing result, it is best to pick an actor or actress with a similarly shaped face. In this case it took about a thousand photographs of Anne Hathaway and roughly as many of the other person. Original: Fake: The technology can be used in many ways. Internet users often trade silly GIFs of dancing gnomes with their friends' faces. But now you can literally place your friends inside their favourite movies: a friend can dance with Patrick Swayze or fight aliens. It all depends on your imagination. Not just for fun... This work will likely find many commercial applications: in fashion (how would I look with this haircut, or in this outfit), in fitness (what would I look like if I dropped a few kilos, would muscles suit me), in travel (a picture of a person on a beach can look quite realistic). And, as usually happens in human history, the new technology will not pass the advertising industry by. Marketers will no longer need to tell a customer that a purchase will change their life for the better; they will be able to show it. P.S. The wife of this post's author is a fan of Steve Carell. Anne Hathaway starred alongside him in "Get Smart", so with the help of AI we managed to bring them all together. In our view, the result turned out rather well. Original article
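The shared-encoder, per-identity-decoder layout described above can be sketched as two linear decoders hanging off one encoder. This is a toy illustration of the wiring only (FakeApp uses deep convolutional networks trained on thousands of photos); the dimensions and random weights here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8  # toy sizes, far smaller than real face models

# One shared encoder learns pose and expression across all faces;
# each identity gets its own decoder that renders that face.
W_enc = 0.1 * rng.standard_normal((DIM, LATENT))
W_dec_a = 0.1 * rng.standard_normal((LATENT, DIM))  # decoder for person A
W_dec_b = 0.1 * rng.standard_normal((LATENT, DIM))  # decoder for person B

def encode(x):
    return x @ W_enc

def swap(face_of_a):
    """Route A's latent code through B's decoder: B's identity, A's expression."""
    return encode(face_of_a) @ W_dec_b

fake = swap(rng.standard_normal(DIM))
print(fake.shape)  # an image-sized vector in this toy setup
```

Training reconstructs each person's photos through their own decoder; the face swap happens only at inference time, by routing one person's latent code through the other person's decoder.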
New AI Technology: Hollywood Actors with Your Friends' Faces
2
новая-ai-технология-голливудские-актеры-с-лицами-ваших-друзей-1633ed5f0078
2018-03-13
2018-03-13 12:33:20
https://medium.com/s/story/новая-ai-технология-голливудские-актеры-с-лицами-ваших-друзей-1633ed5f0078
false
504
null
null
null
null
null
null
null
null
null
It
it
It
3,720
Top Russian Talents
GetIT blog. Job openings on Telegram: Getitrussia. http://get-it.io/
2160eccfbd01
toprustalents
18
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-15
2018-03-15 17:54:00
2018-03-16
2018-03-16 18:16:00
1
false
en
2018-03-16
2018-03-16 18:25:38
0
163850e902cf
1.516981
2
0
0
In recent years, phrases like Artificial Intelligence, Machine Learning, and "the end of humanity, run for your life" have appeared frequently on TV…
4
Big Data, Machine Learning and AI: really helping us at the moment? In recent years, phrases like Artificial Intelligence, Machine Learning, and "the end of humanity, run for your life" have appeared frequently on TV and social networks. These are supposed to be the most complex development projects in the IT space because they try to "make our life easier", but is that the whole truth? 1. Calling every technology that compares data AI is not correct. Companies like to sprinkle buzzwords over their projects so they look disruptive and mind-blowing. Sometimes the code only uses a data comparison to run a function, and the head developer says: "Yeah, we're going to call it AI because our stock will grow 10%." The same has happened with blockchain: companies that put "Blockchain" in their name grew 5–25% in most cases. 2. Not all the Big Data jobs that companies offer are for research purposes. AI and machine learning, with the help of Big Data software and its raw material, data, are helping in many fields such as medicine and all kinds of forecasting (weather, astrophysics), and that is the good place where open-source developers want to live. The reality is different: you give away your data in exchange for free services, and companies use that data to offer you products they know you want. The future looks more like banks blocking your credit card based on that data than like successfully treating all kinds of diseases. 3. Be prepared to be watched for the rest of your life, because you already are. Say goodbye to your privacy. This is not just advice for avoiding scams on social networks: the moment you use any device, you are under study. The more you use your devices, the more data they have on you, and the more your insurance will cost; drinking will become an expensive habit you will no longer be able to afford. I am warning you about using "free" services, because if the product is free, the real product is you.
Big Data, Machine Learning and AI: really helping us at the moment?
6
big-data-machine-learning-and-ia-really-helping-us-at-the-moment-163850e902cf
2018-03-19
2018-03-19 13:00:45
https://medium.com/s/story/big-data-machine-learning-and-ia-really-helping-us-at-the-moment-163850e902cf
false
349
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Cristian Meroño
17. Self-described entrepreneur, blockchain and crypto enthusiast. Currently developing two projects that will be released soon - xtrosolutions.xyz
d0b1647ae452
meristian
0
3
20,181,104
null
null
null
null
null
null
0
null
0
39de5d526a38
2018-07-03
2018-07-03 19:27:29
2018-07-05
2018-07-05 15:04:26
4
false
en
2018-07-08
2018-07-08 15:26:00
28
163a2bd68d65
6.915094
32
0
1
Welcome to AI Policy 101: a new series from Politics + AI that will teach you the fundamentals of artificial intelligence (AI) policy.
5
AI Policy 101: An Introduction to the 10 Key Aspects of AI Policy Welcome to AI Policy 101: a new series from Politics + AI that will teach you the fundamentals of artificial intelligence (AI) policy. This introductory article provides an overview of the field, an explanation for the sudden flurry of national AI strategies, and a breakdown of what AI policy entails. It concludes with a set of key takeaways and a list of further readings. In the coming weeks, I will share five additional articles that provide a deep dive into key aspects of AI policy. They will cover: (1) basic and applied scientific research; (2) talent attraction, development, and retainment; (3) industrialization and private sector uptake; (4) ethics; and (5) data and digital infrastructure. Without further ado, let’s begin! What in the world is AI policy? First, a definition: AI policy is defined as public policies that maximize the benefits of AI, while minimizing its potential costs and risks. From this perspective, the purpose of AI policy is two-fold. On the one hand, governments should invest in the development and adoption of AI to secure its many benefits for the economy and society. Governments can do this by investing in fundamental and applied research, the development of specialized AI and “AI + X” talent, digital infrastructure and related technologies, and programs to help the private and public sectors adopt and apply new AI technologies. On the other hand, governments need to also respond to the economic and societal challenges brought on by advances in AI. Automation, algorithmic bias, data exploitation, and income inequality are just a few of the many challenges that governments around the world need to develop policy solutions for. These policies include investments into skills development, the creation of new regulations and standards, and targeted efforts to remove bias from AI algorithms and data sets. 
It is important to note that AI policy is not just the use of AI to improve the effectiveness of government policy or reduce costs. As we will soon see, this is just one of many areas of AI policy. Why the sudden interest? Since the beginning of this year, Denmark, France, the UK, the EU, South Korea, and India have all released national strategies to promote the use and development of AI. They join Canada, Japan, Singapore, China, the UAE, and Finland, who all released similar strategies in 2017. What can explain this sudden interest? Part of the story is a simple proof of concept for AI. In the past six years, computers, powered by AI technologies, have learned how to speak and translate the world’s languages, recognize faces and objects, and even play complex video games. Our favourite services, such as Netflix and Google Search, are now dependent on AI algorithms, while sectors as diverse as transportation and healthcare are set to be fundamentally transformed in the coming years. Simply put, governments now recognize the disruptive impact of AI and want to get ahead of it. Global AI Talent Report | via Element AI and Jean-François Gagné But beyond this technological development is a story of competition. It has become clear that the demand for AI talent far outweighs the available supply. According to a study by Element AI, there are only 22,000 PhD-educated AI researchers in the world — 40% of whom are concentrated in the US. As a result, to train domestic talent and attract international talent, countries are rushing to develop AI Master and PhD programs, short-term training initiatives, massive open online courses, and scholarships and fellowships. Almost every recent national strategy includes some combination of these initiatives to attract, retain, and develop AI talent. Likewise, governments are also trying to win the global race for AI investment. The UK’s AI Sector Deal is a perfect example. 
In April, the British government announced a number of new initiatives to establish the UK as a leader in the AI revolution, including a new R&D tax credit, a national retraining scheme, additional funding for STEM education, a national centre for data ethics, and improvements to public digital infrastructure. In return, over 50 companies announced £300 million in private sector investment. The UK is not alone in this effort: France’s strategy included a multi-million dollar commitment to AI startups and industrial projects, while China recently announced a $2 billion AI research park to house up to 400 companies. Finally, governments are also trying to get ahead of the new challenges brought on by AI. The most widely debated challenge is the future of work and whether robots will automate 15 or 50 percent of jobs. However, recent stories such as the Cambridge Analytica data scandal, Google’s eerily accurate voice assistant, and Amazon’s Rekognition technology have demonstrated to the public the ability of AI to erode democracy, trust, and civil liberties. The more comprehensive national strategies have begun to tackle these issues. What are the key aspects of AI policy? AI policy changes from country to country. Depending on a country’s national strengths and weaknesses, a government will choose to focus on different aspects of AI policy. Finland, for instance, wants to lead the world in the application of AI technologies, while Canada wants to be the global leader in AI research and training. The United States has taken a free-market approach to AI policy, while China has implemented a comprehensive, nationwide approach. Despite these differences, AI policy can essentially be broken down into the following 10 categories: 1. Basic and Applied Research: To achieve new breakthroughs in AI theories, technologies, and applications, governments need to provide funding for basic and applied research. 
This includes both research grants and the creation of new research institutions. Example: the UK’s Alan Turing Institute. 2. Talent Attraction, Development, and Retainment: To conduct R&D in AI and deploy AI solutions in the public and private sectors, countries need a supply of skilled AI talent. Example: Canada’s CIFAR Chairs in AI Program. 3. Future of Work and Skills: Advances in AI will both create and destroy jobs. To ensure that workers have the skills to compete in the digital economy, governments need to invest in STEM education, national retraining programs, and lifelong learning. Example: Denmark’s Technology Pact. 4. Industrialization of AI Technologies: AI has the potential to fundamentally transform multiple sectors and drive growth for decades to come. To encourage private sector uptake, governments are investing in strategic sectors and developing AI ecosystems and clusters. Example: Japan’s Industrialization Roadmap. 5. AI in the Government: Likewise, governments are experimenting with ways to encourage the uptake of AI in the government. With the help of AI, it is possible to reform the public administration and make policy more effective. Example: UAE’s Ministry of Artificial Intelligence. 6. Data and Digital Infrastructure: Data is central to the ability of AI to work. As a result, governments are opening their datasets and developing platforms to encourage the secure exchange of private data. Example: France’s Health Data Hub. 7. Ethics: Concerns over algorithmic bias, privacy, and security have raised a number of ethical debates. To mitigate harm, governments are looking to develop ethical codes and standards for the use and development of AI. Example: The EU’s Draft AI Ethics Guidelines. 8. Regulations: Every country is grappling with the question of whether (and how) to regulate AI. Currently, governments are focused on regulations for autonomous cars and autonomous weapons. 
Example: Germany’s Ethics Commission on Automated and Connected Driving. 9. Inclusion: AI can both improve and worsen inclusion. Used properly, AI can bolster inclusion and help address complex societal problems such as poverty and hunger. Used improperly, AI can reinforce discrimination and disproportionately harm women and minorities. Example: India’s #AIforAll Strategy. 10. Foreign Policy: Geopolitics, development, and trade will all be affected by advances in AI technologies. To address ethical concerns and develop global standards, countries are beginning to consider mechanisms for the global governance of AI. Example: China’s Global Governance of AI Plan. Key Takeaways AI policy is about maximizing AI’s many benefits for our economy and societies, while minimizing its risks and harms. Technological advancement in AI can only partially explain the sudden interest in AI policy. Governments are also keenly aware of the limited supply of AI talent and investment and are trying to get ahead of the new challenges caused by AI. Governments in all regions of the world are experimenting with AI policy. Currently, there is no best practice since the field is so new. However, AI policy can be broken down into 10 categories: basic and applied research; talent attraction, development, and retainment; future of work and skills; industrialization of AI technologies; AI in the government; data and digital infrastructure; ethics; regulations; inclusion; and foreign policy. Further Readings Malli, Nisa, Melinda Jacobs, and Sarah Villeneuve (2018). “Intro to AI for Policymakers: Understanding the shift.” Brookfield Institute for Innovation + Entrepreneurship. Furman, Jason (2016). “Is This Time Different? The Opportunities and Challenges of Artificial Intelligence.” Speech at the 2016 AI Now Conference. Dutton, Tim (2018). “An Overview of National AI Strategies.” Politics + AI, Medium. Tim Dutton is an AI policy researcher based in Canada. 
He is the founder and editor-in-chief of Politics + AI. He writes and edits articles for Politics + AI’s Medium page and provides contract work to governments and companies looking to learn about the emerging political risks and opportunities of AI. You can follow him on Twitter and connect with him on LinkedIn. Thanks for reading! If you enjoyed the article, we would appreciate your support by clicking the clap button below or by sharing this article so others can find it. Want to read more? Head over to Politics + AI’s publication page to find all of our articles. You can also follow us on Twitter and Facebook or subscribe to receive our latest stories.
AI Policy 101: An Introduction to the 10 Key Aspects of AI Policy
124
ai-policy-101-what-you-need-to-know-about-ai-policy-163a2bd68d65
2018-07-08
2018-07-08 15:26:00
https://medium.com/s/story/ai-policy-101-what-you-need-to-know-about-ai-policy-163a2bd68d65
false
1,647
Insight and opinion on how artificial intelligence is changing politics, policy, and governance
null
PoliticsPlusAI
null
Politics + AI
PoliticsPlusAI@gmail.com
politics-ai
ARTIFICIAL INTELLIGENCE,TECHNOLOGY,POLITICS,GOVERNMENT,AI
PoliticsPlusAI
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tim Dutton
AI Policy Researcher | Founder and Editor-in-Chief of Politics + AI
7dec4967fe0a
tim.a.dutton
970
62
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-02
2018-08-02 04:58:40
2018-08-02
2018-08-02 17:13:44
0
false
en
2018-08-02
2018-08-02 17:13:44
2
163a940b8de1
1.467925
0
0
0
I’ve been working with machine learning and data science on some level for over ten years in Silicon Valley and elsewhere. I believe too…
2
Building a Technology of Creative Expression and Greater Human Intelligence I’ve been working with machine learning and data science on some level for over ten years, in Silicon Valley and elsewhere. I believe too much of the tech industry is focused on manipulating and controlling people, and I think we should change focus. I remember the time someone pitched me on building a website to help children apply for college. It sounded like a nice thing until I learned the details: apparently, the site would collect racial and other demographic characteristics to find the ‘right’ school. It took me a while to digest that I had just been pitched the latest generation of algorithmic racism. The idea of having a computer tell you where you should go to school based on your race just seemed wrong to me. Yet, when we delve deeper, what many tech companies are doing isn’t much better. The choices we make in life are not just about facts, but about values. Algorithms are programmed to optimize for something. Often, they optimize for being addictive or for getting us to buy as much as possible. Other times they are programmed to mirror other decision-makers, where they can crystallize biases, including racism. I strongly believe we should be using this sort of technology to create new possibilities of beauty and intelligence. Fundamentally, any filtering engine is about putting people in boxes. This includes everything from the news feed algorithm, to the algorithms that decide whether we get credit, to self-driving cars. It pushes you down a specific path. A human no longer controls the path; it is determined by the machine. I want to create technology that opens us up to greater possibilities. We should see things we didn’t conceive of before. We should re-imagine our reality. I’ve been working to realize this. I don’t claim to have all the answers. I’ve taken to building algorithms to design apparel. You can see the things I’ve designed at https://roarshockbrands.com. 
I’m working on more tools to allow anyone to design beautiful things. Eventually, I hope to create algorithms for all of us to design a more beautiful, more intelligent world. Let me know your thoughts and opinions: email me at gershon -at- roarshockbrands.com or follow me on Twitter at https://twitter.com/gersh17. You can also leave your comments below.
Building a Technology of Creative Expression and Greater Human Intelligence
0
building-a-technology-of-creative-expression-and-greater-human-intelligence-163a940b8de1
2018-08-02
2018-08-02 17:13:44
https://medium.com/s/story/building-a-technology-of-creative-expression-and-greater-human-intelligence-163a940b8de1
false
389
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gershon Bialer
Gershon lives in San Francisco where he is a bit obsessed with algorithms, and aspires to make computers be cool. He also plays chess.
616ea2cc4ec9
cron
89
334
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-19
2018-05-19 12:39:00
2018-05-20
2018-05-20 11:45:13
2
false
it
2018-05-20
2018-05-20 11:45:13
12
163ac4eeb9fc
2.504088
0
0
0
A few months ago, scrolling through my LinkedIn timeline, I stumbled by chance upon a post by ORS CEO Fabio Zoffi, in which Dr…
5
ORS launches HyperSmart Contracts (A.I. & Blockchain) A few months ago I was scrolling through my LinkedIn timeline when I stumbled by chance upon a post by ORS CEO Fabio Zoffi, in which Dr. Zoffi explained how he had been immediately struck and fascinated by the potential of blockchain technology and the possible applications of smart contracts. I have known ORS for almost 20 years: it is a cutting-edge company founded a stone's throw from my home, which still keeps a splendid headquarters in a restored farmhouse near Alba, in the heart of the Langhe and Roero, a UNESCO World Heritage site. Reading and rereading Dr. Zoffi's article, I immediately asked myself a question: would ORS use smart contracts to offer new solutions to its established high-end clients, perhaps with a token and a "private" blockchain, or would it enter the crypto world with an ICO and a token available to everyone on the market? The answer did not take long, and it came through Dr. Zoffi's words at the Crypto Investors Show in London on March 10. There the ORS CEO presented his vision of the future to the world: a fair future in which small players will be able to compete with large multinationals thanks to HyperSmart Contracts. The proposed slogan is "Empower 1 billion entrepreneurs by 2040", that is, to provide the resources and technologies needed by one billion entrepreneurs by 2040, through ORS's ABC: A.I. - BLOCKCHAIN - CRYPTO. In these days the ICO launched through Eidoo's ICO Engine is drawing to a close (Eidoo was the first to buy a HyperSmart Contract, the Robo Financial Advisor that will be used in the Eidoo exchange). The ORS token is priced at €0.05, and €10 million has already been raised in the presale, with no bonuses for investors (unlike most ICOs). 
Questo perchè ORS è una società di successo già attiva da 20 anni, con oltre 1000 algoritmi di intelligenza artificiale utilizzati con successo da centinaia di aziende in tutto il mondo. Il token sarà l’unico strumento utilizzabile per acquistare gli Hyper Smart Contract che verranno sviluppati su misura per le necessità dei clienti, partendo dalla base di conoscenza di ORS nel mondo dell’intelligenza artificiale. 500 milioni di token sono stati resi disponibili tra presale, le pre-ico riservate alle piccole comunità di supporto cresciute in questi mesi (con uno sconto del 10%) e ICO; 14 milioni di euro sono stati raccolti al momento (su un massimo di 25) e i token eventualmente invenduti verranno bruciati. Ho investito in ORS in pre-ico e sono felice di vedere un’azienda seria e matura entrare nel mondo delle cryptovalute, un mondo spesso caratterizzato da progetti senza fruibilità, pieni di annunci ad effetto e poca sostanza. ORS si pone in maniera differente, forte di una struttura (con oltre 100 sviluppatori formati ed esperti) solida e senza la necessità di offrire bonus esagerati o lanciare airdrop e bounty mirabilanti per attrarre visibilità. Anche la campagna bounty creata si differenzia dalle altre, durando solo 3 settimane e rivolgendosi a chi è realmente interessato al progetto più che ai semplici cacciatori di bounty che rivendono subito il token ricevuto. Il lancio del Marketplace per l’acquisto degli HyperSmartContract (HSC) è previsto per l’inizio del 2019 e sono già state annunciati diversi accordi, che sicuramente aumenteranno nei prossimi mesi. Sito ufficiale: http://orsgroup.io/ Canali Telegram: Internazionale Italiano Canali social: Youtube Twitter Facebook Tip (ETH/token): 0x7E3B70879b490f6f0F39B237d8F9fDd248539973
ORS launches HyperSmart Contracts (A.I. & Blockchain)
0
ors-lancia-gli-hypersmart-contracts-a-i-blockchain-163ac4eeb9fc
2018-05-20
2018-05-20 11:45:14
https://medium.com/s/story/ors-lancia-gli-hypersmart-contracts-a-i-blockchain-163ac4eeb9fc
false
562
null
null
null
null
null
null
null
null
null
Crypto
crypto
Crypto
37,754
Dani Bom
null
33347bb7ae30
danibom
2
15
20,181,104
null
null
null
null
null
null
0
null
0
74d3d7d95404
2018-05-11
2018-05-11 20:13:37
2018-05-11
2018-05-11 13:49:09
1
false
en
2018-05-17
2018-05-17 10:21:01
16
163c4bfaa2e7
3.70566
0
0
0
Google AI makes phone calls for you; is the future transhuman; Boston Dynamics robot gets some fresh air; and more!
5
This week — Google AI makes phone calls for you; is the future transhuman; Boston Dynamics robot gets some fresh air; extending the lifespan of dogs before humans; and more!

H+ Weekly is a free, weekly newsletter with the latest news and articles about robotics, AI and transhumanism. Subscribe now!

More than a human

No death and an enhanced life: Is the future transhuman? This article from The Guardian brings the ideas of transhumanism closer to a wider audience. "Ultimately, by merging man and machine, science will produce humans who have vastly increased intelligence, strength, and lifespans; a near embodiment of gods," the author writes.

Towards An Open Source Bionic Body — Meet Samantha Payne, cofounder of Open Bionics. A story of Open Bionics and its COO, Samantha Payne, from the beginning, to the first prototype, to building a community around 3D-printed prosthetic arms, to releasing a medically approved prosthetic arm and beyond.

What do you do with two extra pairs of functional hands? Become Dr. Octopus, that's what you do.

Who is Scared of the Teeny Tiny Chip? Having a chip implanted in a body is a terrifying thought for some people. Concerns about privacy and exploiting the data gathered by such devices top the list. This article goes through them one by one and explains that it is not as terrifying as people might think.

Artificial Intelligence

Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone. At the recent Google I/O conference, the Google AI team presented Duplex — an AI assistant that can make phone calls for you. In this blog post, they explain how the system works and show examples where an AI called a restaurant and booked a table by talking to a real human.

AI creates new levels for Doom and Super Mario games. Researchers have created a neural network that can generate completely new levels for Doom and Super Mario. The networks they have created can find use in game design by generating initial levels for human designers to adjust.

Robotics

Uber's Self-Driving Car May Have "Decided" Not to Swerve To Prevent The Fatal Crash. Apparently, the Uber self-driving car that was involved in a fatal accident on March 18th this year "decided" to ignore the woman in front of it leading up to the crash. That is, it "saw" the woman and made the decision that "it didn't need to react right away." To remedy an overpowering number of "false positives" — hindrances in the road that pose no real threat, like a piece of cardboard — the threshold of Uber's software was "tuned" so low that even a grown woman with a bicycle did not trigger an immediate response.

Getting some air, Atlas? It's spring, and Boston Dynamics released Atlas the humanoid robot to roam in the sun and get some fresh air.

Pentagon moves closer to 'swarming drones' capability with new systems test. Flying aircraft carriers that launch and recover fleets of small, inexpensive drones could soon be part of the U.S. military arsenal, as the Pentagon works with private technology partners to engineer that vision into reality. The program, code-named "Gremlins," calls for the two companies to demonstrate the safe and reliable aerial launch and recovery of multiple unmanned aircraft.

Banning autonomous weapons is not the answer. This article comes from the World Economic Forum website. It concludes with a statement that "a prohibition on the development and use of lethal autonomous weapons systems is not the simple solution it appears to be."

Japan Is Replacing Its Aging Construction Workers With Robots. Japan's quickly ageing workforce forces companies to look to robots to fill the spots left by retired human workers.

Biotechnology

A stealthy Harvard startup wants to reverse aging in dogs, and humans could be next. George Church, the world's most influential synthetic biologist, is behind a new company that plans to rejuvenate dogs using gene therapy. Rejuvenate Bio plans first to solve the problem of ageing in dogs and then move to humans. The company, which has carried out preliminary tests on beagles, claims it will make animals "younger" by adding new DNA instructions to their bodies.

Taking CRISPR from clipping scissors to word processor. Researchers have created a new CRISPR platform named MAGESTIC, or "multiplexed accurate genome editing with short, trackable, integrated cellular barcodes". It makes CRISPR less like a blunt cutting tool and more like a word processor by enabling an efficient "search and replace" function for genetic material. Announced in a Nature Biotechnology paper, MAGESTIC also produced a sevenfold increase in cell survival during the editing process.

Scientists build 'synthetic embryos'. Dutch scientists have built "synthetic" embryos in their laboratory using mouse cells other than sperm and eggs. The stem-cell breakthrough, described in the journal Nature, is not about cloning people or animals, but about understanding why many pregnancies fail at an early stage — implantation. The embryos, made in a dish, attached to the womb lining of live female mice and grew for a few days.

Thanks for reading this far! If you got value out of this article, it would mean a lot to me if you would click the 👏 icon just below.

Every week I prepare a new issue of H+ Weekly where I share with you the most interesting news, articles and links about robotics, artificial intelligence and futuristic technologies. If you liked it and you'd like to receive every issue directly into your inbox, just sign in to the H+ Weekly newsletter.

Originally published at hplusweekly.com on May 11, 2018.
H+ Weekly - Issue #153
0
h-weekly-issue-153-163c4bfaa2e7
2018-05-20
2018-05-20 12:11:50
https://medium.com/s/story/h-weekly-issue-153-163c4bfaa2e7
false
929
A free, weekly newsletter with latest news and articles about robotics, AI and transhumanism.
null
hplusweekly
null
H+ Weekly
hello@hplusweekly.com
h-weekly
TECHNOLOGY,TRANSHUMANISM,ARTIFICIAL INTELLIGENCE
hplusweekly
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Conrad Gray
Engineer, Entrepreneur, Inventor | http://conradthegray.com
e60a556ba1d4
conradthegray
633
102
20,181,104
null
null
null
null
null
null
0
null
0
e50ba1be5e3a
2017-12-01
2017-12-01 23:20:30
2017-12-01
2017-12-01 23:36:02
2
false
en
2017-12-01
2017-12-01 23:36:02
0
163c82583310
3.560692
4
0
0
Standing on the Shoulders of Giants
5
The Rest of the Skeleton

Standing on the Shoulders of Giants

By Ryan D. Mayfield | Strategy & Product Manager, Global Affairs

“We see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.” –Metalogicon, by John of Salisbury (1159 AD), attributing the concept to Bernard of Chartres

Knowledge has a long tail of development. Every critical discovery rests on a number of underlying concepts that are givens today, but were revolutionary at one point in human history. We will continue to build a stronger, safer, and more efficient society not only by continuing our relentless pace, climbing upwards on the shoulders of giants, but also by building out better connections between the skeletons of prior discoveries.

Take a current hot-button topic: autonomous vehicles. The concept that a vehicle can drive itself through traffic with far greater safety and efficiency than any human is challenging long-held beliefs, technical development, and entire industries. It is built on many shoulders: advanced sensors, transportation, and computing, to name a few. It is challenging notions in human-computer interaction, philosophy, and labor economics. Of course, before we could build advanced computer processors to drive the information revolution that will someday soon drive us all to work, our forebears had to discover how to harvest and harness electricity, and their discovery depended on a basic knowledge of metallurgy. Each of these distinct steps that we currently recognize includes numerous intermediary discoveries, some of which hoisted up new fields and others which simply built out the skeleton of larger giants that future generations could then scale. Propelled by digital connectivity and informational awareness, we are raising giants even faster than before, and scrambling to attain even greater heights atop each new development.
With each progressive discovery, new doors are opened for scholarship, commercialization, and impact. Government agencies and nonprofits are sponsoring research, cultivating new ideas, and creating new laws and regulations around new concepts — similarly contributing to the frames of our modern giants.

As we build our respective pyramids of knowledge, we occasionally look across from our perch atop generations of giants and see a similar concept that has appeared. These may be accidental, like Percy Spencer’s work in RADAR creating the microwave, or intentional, in the case of Margaret Oakley Dayhoff’s fusion of biology and software to create bioinformatics. The Value of European Patents, a 2005 study of European inventors, reported that half of all innovations “arise unexpectedly from research projects undertaken for other purposes, or from activities other than the inventing activity.” Cross-functional or accidental innovations are unlocked by gazing upon the broader skeletons of past development. Research into social dynamics and psychology improves business functions. Post-conflict reconstruction blends together economic development, civil engineering, cultural studies, and international security. Government regulation rests on the precedent set by prior lawmakers, emerging social and political movements, and the insights provided by summoned or sponsored advisors.

Especially with the pace of innovation, it is challenging to become an expert in one field, let alone multiple fields. At Yewno, we face this each day, as we consider how the varying worlds of academic research (including all of its sub-fields!), publishing, finance, and biomedical sciences interact. We tackle this array of challenges through conceptual search, supported by an adaptive knowledge graph. Through this, our software platform recognizes concepts and their interrelationships in unstructured data (primarily, text).
This is similar to how a human engages with insights through reading, by identifying an author’s statements and their interconnections. Then, just as a human is able to contextualize what they read based on a lifetime of learning and experiences, Yewno takes into account everything it has ever consumed, building out a dynamic knowledge graph which incorporates over a billion semantic and quantitative relationships across millions of concepts. One piece of this is what we call the inference engine, which helps users spot the most relevant and important insights across fields.

Through Yewno Discover, researchers use a visualization of the inference engine to explore relationships with their original concept. Instead of hunting for the right set of keywords to bound their search, this enables novices to explore the breadth of a field and experts to zero in on the critical interconnections of their chosen topic.

Yewno’s inference engine at work

Then, as users enter more concepts, they see both inferences from each independent concept (to help them more precisely identify the bounds of their search) and interconnected concepts (revealing new insights and opportunities to focus in on more specific topics).

As we pursue interdisciplinary insights and the advancement of knowledge, we build tools and support a diverse array of groups in an effort to stand taller on the shoulders of giants by not forgetting the rest of the skeleton. As explorers, researchers, and innovators, the more we can identify the most relevant insights from other fields, the better we are able to unearth the unexpected, inspire the revolutionary, and solve the most important problems on earth.
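The concept-graph idea described here can be sketched very loosely in code. This is only a toy illustration of the general technique (concepts as nodes, relationships as edges, "inference" as surfacing neighbors shared by the query concepts) and is in no way Yewno's actual implementation; all names and data below are made up.

```python
# Toy concept graph: each concept maps to the set of concepts it is linked to.
graph = {
    "CRISPR": {"gene editing", "bioinformatics"},
    "software": {"bioinformatics", "machine learning"},
    "RADAR": {"microwave", "electromagnetism"},
}

def shared_concepts(graph, a, b):
    """Concepts linked to both query terms: the cross-field overlap."""
    return graph.get(a, set()) & graph.get(b, set())

# Entering two concepts surfaces their interconnection:
print(shared_concepts(graph, "CRISPR", "software"))  # {'bioinformatics'}
```

A real system would of course derive the edges from unstructured text and weight them, but the intersection step captures the core of how entering more concepts narrows a search toward their interconnections.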
The Rest of the Skeleton
5
the-rest-of-the-skeleton-163c82583310
2018-05-03
2018-05-03 04:06:23
https://medium.com/s/story/the-rest-of-the-skeleton-163c82583310
false
842
We care about sharing and creating knowledge — helping people be curious and explore deeper.
null
DoYewno
null
Do Yewno?
hello@yewno.com
doyewno
LIBRARIES,EDUCATION,SEARCH ENGINES,PUBLISHING,RESEARCH
DoYewno
Discovery
discovery
Discovery
4,168
Yewno
null
adea59ade8a0
yewno1
26
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-15
2017-09-15 16:05:36
2017-09-19
2017-09-19 14:55:01
0
false
en
2017-09-19
2017-09-19 14:55:01
4
163cd8bde59b
0.467925
0
0
0
The point of WanderData was not to become hypnotized by daily-deal sites pumping credit cards (although that’s certainly fun), but rather…
5
Scraping Flight Data with Python (example) The point of WanderData was not to become hypnotized by daily-deal sites pumping credit cards (although that's certainly fun), but rather to try to add some structure and insight to flight and travel search. Here's one attempt I found some time ago for Southwest. And another Southwest scraper by Zeke Gabrielse, complete with Twilio alerts. The comments for Mr. Gabrielse's project are insightful — for example, repeatedly searching from the same IP doesn't result in higher fares. Also, Eric Kitaif built this. Note that these programmers are not confirming the "buy on a Tuesday afternoon" type of travel advice, but rather that fares tend to wander. The Amadeus Travel Innovation Sandbox definitely looks like something to research further.
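The fare-watching idea behind those scrapers can be sketched in a few lines: fetch a results page, pull out the prices, and flag a fare that drops below a threshold. The HTML snippet and fare format below are invented for illustration; a real site would need an HTTP client (and often a headless browser) plus its own parsing logic.

```python
import re

def extract_fares(html):
    """Pull whole-dollar fares like $129 out of a results page."""
    return [int(m) for m in re.findall(r"\$(\d+)", html)]

def cheapest_below(html, threshold):
    """Return the lowest fare if it beats the threshold, else None."""
    fares = extract_fares(html)
    best = min(fares) if fares else None
    return best if best is not None and best <= threshold else None

# Stand-in for a fetched results page:
sample_page = '<div class="fare">$189</div><div class="fare">$129</div>'
print(cheapest_below(sample_page, 150))  # 129
```

Run on a schedule (cron, say), a hit from `cheapest_below` is the point where the Twilio-style alert would fire.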
Scraping Flight Data with Python (example)
0
scraping-flight-data-with-python-example-163cd8bde59b
2018-04-24
2018-04-24 14:30:06
https://medium.com/s/story/scraping-flight-data-with-python-example-163cd8bde59b
false
124
null
null
null
null
null
null
null
null
null
Travel
travel
Travel
236,578
WanderData
Travel search, marketing, and data
5f3679b56329
WanderData
9
20
20,181,104
null
null
null
null
null
null
0
# Concatenate the monthly files into one CSV:
fout = open("out.csv", "a")
# first file:
for line in open("<full_path>/fit1.csv"):
    fout.write(line)
# now the rest:
for num in range(2, 25):
    f = open("<full_path>/fit" + str(num) + ".csv")
    next(f)  # skip the header
    next(f)  # skip the header
    for line in f:
        fout.write(line)
    f.close()  # not really needed
fout.close()

# Remove empty rows:
import csv

input1 = open('out.csv', 'r')
output = open('FitBit.csv', 'w', newline='')
writer = csv.writer(output)
for row in csv.reader(input1):
    if any(row):
        writer.writerow(row)
input1.close()
output.close()

AVG Calories Burned: 2841.731259
AVG Steps: 10538.52475
AVG Distance: 4.767468175
AVG Floors: 7.292786421
AVG Minutes Sedentary: 1171.758133
AVG Minutes Lightly Active: 165.5374823
AVG Minutes Fairly Active: 34.06647808
AVG Minutes Very Active: 54.28712871
AVG Activity Calories: 1374.951909
4
null
2018-08-23
2018-08-23 13:56:13
2018-09-03
2018-09-03 03:27:52
25
false
en
2018-10-04
2018-10-04 21:14:03
20
163d341a6cce
9.067925
1
0
0
Analyzing Data from FitBit
1
Analyzing FitBit Data

Analyzing Data from FitBit

Fitbit is a company in San Francisco, California, known for its products of the same name: fitness/activity trackers and now smartwatches. The devices are wireless-enabled wearable technology that measure the number of steps you walked, your heart rate, your quality of sleep, steps climbed, and other personal fitness metrics like your calorie intake, weight, and calories burned.

https://www.amazon.com/gp/product/B07B499PWG/ref=as_li_tl?ie=UTF8&tag=medium074-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=B07B499PWG&linkId=a068ea3da44e696b6765a070754e5318

Like all fitness activity trackers, this device isn't perfectly accurate, but it can give you a good idea of how active you are. I am going to take some of the data collected by my FitBit tracker and try to find some fun and interesting insights about myself and my "active" life. FitBit now uses MET calculations that match recommendations from various associations, like the Centers for Disease Control and the US Department of Health, to determine how active a person is.

First I will need to define the problem or question; then I am going to collect the raw data from Fitbit.com. After that I will need to process the data to transform the information (only to put multiple dates and information together). Then I will explore the data visually with graphs, perform some analysis, and communicate the results once I am done.

Step 1: Frame the problem. The first thing you have to do before you solve a problem is to define exactly what it is.
Step 2: Collect the raw data needed for your problem.
Step 3: Process the data for analysis.
Step 4: Explore the data / perform analysis.
Step 5: Communicate results of the analysis.

Step 1: Frame the problem

I want to know what day of the week I get the most/least steps, what month I get the most/least steps, and what month I got the most steps in one day.

A) Most/least steps for a given day of the week in the past 2 years?
B) Most/least steps for a given month of the year in the past 2 years?
C) What month did I get the most steps in one day in the past 2 years?

Step 2: Collect the Raw Data From FitBit

A) Go to https://www.fitbit.com/export/user/data ; this will allow you to download one month's worth of data, either as a CSV (Comma Separated Values) or an Excel file. For the data I want to collect, I will click only on the "Activities" data option and choose the file format to be "CSV". I chose a "Custom" time period between July 1, 2016 and July 31, 2016. Clicking the download button should download this one month's worth of data.

B) FitBit only allows you to download up to 31 days of data at a time, so I had to do this about 23 more times, because I wanted 2 years' worth of data. I also named my saved CSV files "fitX.csv", where X = the number/order in which I downloaded the file, starting from July 2016 through June 2018. The reason for this naming convention is to make it easy for a program to concatenate these files into one later, when we are processing our data.

The raw data that I've collected contains the following features/columns:
Date = The current date
Calories Burned = The number of calories burned for that day
Steps = The number of steps taken for that day
Distance (in miles) = The distance travelled for that day in miles
Floors = The number of floors taken for that day (approx. 10 feet in elevation = 1 floor)
Minutes Sedentary = The minutes spent seated/inactive
Minutes Lightly Active = The minutes you're lightly active
Minutes Fairly Active = The minutes you're fairly active
Minutes Very Active = The minutes you're very active
Activity Calories = The number of calories burned during activity

Step 3: Process the data

Now that I have collected my files, I am going to concatenate them all into one CSV file called "out.csv" using the Python programming language (Python version 3.4.4). The code is below (if you want to use the code, just change the <full_path> part to the folder path that contains your FitBit files, and if your FitBit filenames aren't fit1.csv, fit2.csv, fit3.csv, etc., then change the file names as well):

Concatenate the files: I noticed after concatenating the files into the one "out.csv" file that it contained blank rows of data. I could go through the CSV file and simply delete each row, but if I had lots of rows to delete, I would want a more automated way to do this. So I will use another Python program to get rid of these extra rows for me.

Remove empty rows: Remove the "Activities" row from the CSV file.

Now that we have processed this data and cleaned it up into one CSV file, we can start exploring the data! There are many tools we can use to do some analysis. We could use Excel, MySQL, R, Tableau and Python, just to name a few.

Step 4: Exploring the Data

Using Excel, I can immediately get the averages and maximum values from my data using AutoSum on the individual columns. The overall averages for the past 2 years are below.

I will use Tableau to visualize and explore the data. You can see a video on how to install a free version of Tableau on your computer using this link: https://www.youtube.com/watch?v=p9-eOumIADI

First I need to load the .csv file that I have created by connecting to the .csv file (a text file). Click on "Text file" and go to the location where you saved your processed .csv file. Let the data visualization begin!

The month I got the most steps in the past 2 years was June, with 712,155 steps. The month with the fewest steps in the past 2 years was February, with 489,622 steps.

Looks like I get most of my steps on Fridays and the fewest on Thursdays. Looking at this chart, it's clear to see I am more active on the weekends (I'm including Friday) than during the weekdays (Mon, Tues, Wed, & Thurs). In 2018, the same looks to be true: on Fridays I had the most steps and on Thursdays I had the fewest.

Out of the 24 months, in February 2018 I averaged the most steps (12,753 steps per day for the month). I averaged the fewest steps in February 2017 (4,735).

Line graph of average steps per month over 2 years' time.

In the past 2 years I got my maximum steps in one day, 36,082, in the month of June 2017. In February 2017 I got 19,376 steps, the least amount.

14/24 = 58.3% of months I beat or met my overall average of 10,538.52475 steps. I beat my average steps per day twice in the months of June, September, October, and December.

Looks like I got the most steps in the past two years in the month of January 2018, with 376,744 steps for that month, and I noticed the month of February 2017 seems to be a bit of an outlier, with only 132,592 steps for that month. That might have been the time my FitBit broke.

Step 5: Communicate Results

From the charts above, it looks like something happened to my FitBit during the month of February 2017. I was most consistent with my steps during the month of February 2018. On any given day, you could expect that I will get about 10,538.52475 steps. I get most of my steps on Friday, Saturday, and Sunday, probably because I am busier during the weekdays. I also get a lot of steps during the months of June, October, and December.

https://www.amazon.com/gp/product/B0752M6T6K/ref=as_li_tl?ie=UTF8&tag=medium074-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=B0752M6T6K&linkId=4daf31a05bbef41f5a324b25a8699114

Lessons learned and thoughts while doing this project:

I would like a more automated way of collecting the initial 24 CSV files of data; if I wanted 10 years of data on myself, it would take even more time to collect all of that data.

FitBit doesn't allow you to collect data on your heart, and your resting heart rate is very important for determining how healthy you are. There are some workarounds, but they are not standard tools given by FitBit. I don't understand why FitBit wouldn't give you access to your own data.

Sleep habits and weight seem to be important for determining how healthy you are as well; I might use the FitBit scale to automatically track my weight and sync it with my FitBit data, and track my sleeping patterns to do some analysis on that data another time.

I will start collecting data from January 1st next time to be more consistent.

Thanks for reading this article, I hope it's helpful to you all! Keep up the learning, and if you would like more computer science, programming and algorithm analysis videos, please visit and subscribe to my YouTube channels (randerson112358 & compsci112358).

Check out the following for content/videos on Computer Science, Algorithm Analysis, Programming and Logic:
YouTube Channels:
randerson112358: https://www.youtube.com/channel/UCaV_0qp2NZd319K4_K8Z5SQ
compsci112358: https://www.youtube.com/channel/UCbmb5IoBtHZTpYZCDBOC1CA
Website: http://everythingcomputerscience.com/
Video Tutorials on Recurrence Relation: https://www.udemy.com/recurrence-relation-made-easy/
Video Tutorial on Algorithm Analysis: https://www.udemy.com/algorithm-analysis/
Twitter: https://twitter.com/CsEverything

Resources:
how to merge 200 csv files in Python (stackoverflow.com)
AttributeError: '_io.TextIOWrapper' object has no attribute 'next'? (stackoverflow.com)
Why does range(start, end) not include end? (stackoverflow.com)
Delete blank rows from CSV? (stackoverflow.com)
CSV file written with Python has blank lines between each row (stackoverflow.com)
corynissen/fitbitScraper (github.com)
Your heart, your calories, your sleep, your data: How to extract your Fitbit data and make graphs… (annofoneblog.wordpress.com)
Download heartrate data (community.fitbit.com)
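The weekday breakdown from Step 4 can be sketched with the standard library alone. The tiny inline dataset below is made up to stand in for the real FitBit.csv, and the date format may need adjusting to match your export:

```python
import csv
import io
from collections import defaultdict
from datetime import datetime

# Stand-in for the cleaned FitBit.csv produced in Step 3 (hypothetical values):
sample = """Date,Steps
2018-06-01,12000
2018-06-02,14000
2018-06-07,4000
2018-06-08,10000
"""

# Group step counts by day of the week and average them.
totals, counts = defaultdict(int), defaultdict(int)
for row in csv.DictReader(io.StringIO(sample)):
    day = datetime.strptime(row["Date"], "%Y-%m-%d").strftime("%A")
    totals[day] += int(row["Steps"])
    counts[day] += 1

averages = {day: totals[day] / counts[day] for day in totals}
print(max(averages, key=averages.get))  # the weekday with the highest average
```

Pointing `csv.DictReader` at the real file (instead of the inline string) gives the same per-weekday averages the Tableau charts show.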
Analyzing FitBit Data
42
analyzing-my-fitbit-data-163d341a6cce
2018-10-04
2018-10-04 21:14:03
https://medium.com/s/story/analyzing-my-fitbit-data-163d341a6cce
false
1,873
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
randerson112358
I am a Programmer who loves computer science, and playing basketball ! YouTube Channel is: https://www.youtube.com/user/randerson112358
4b249c8dbe7f
randerson112358
58
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-30
2017-11-30 23:10:10
2017-11-30
2017-11-30 23:16:40
0
false
en
2017-11-30
2017-11-30 23:16:40
0
163e4767f710
1.909434
1
0
0
SageMaker is a platform for Machine Learning on AWS
4
ReInvent — SageMaker

SageMaker is a platform for Machine Learning on AWS.

The machine learning cycle, in a simplistic view, includes data collection, model creation, model training and inference. This cycle is usually run multiple times to enable learning.

Challenges in ML: There are existing platforms for large-scale ML, but they have a tradeoff between performance and cost. Hyperparameter optimization is expensive. Incremental training causes wasted compute. Production readiness needs higher investment.

Architecture and design choices for SageMaker:
Streaming: The state of the data changes at every point in the ML cycle. State is saved at each point.
Incremental training: Compute state is saved, so that it does not need to be recomputed. So it's faster and more accurate. Also, GPUs are used for performant calculations.
Distributed, shared state: A global state is maintained that all distributed machines share. This results in better performance.
State >= Model: If the data is a Kinesis stream, for example, you cannot go back to get the data. In SageMaker, state is saved, enabling you to go back in time to get data when needed.
Abstraction and containerized: Testing and regression are all contained within. This results in production readiness.

ML algorithms available on SageMaker:
Linear Regression — a statistical method that allows you to study the relationship between 2 continuous variables (one dependent and another independent).
Linear Classification — classifies an object based on a linear combination of the characteristics of the object. A linear combination of x and y is ax + by, where a and b are constants.
Factorization Machines — a generalization of linear regression. Each variable is assigned a vector, instead of a constant weight.
K-Means Clustering — partition "n" objects into "k" clusters based on the nearest mean.
Principal Component Analysis — a statistical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of linearly uncorrelated variables. Used for dimensionality reduction.
Neural Topic Modelling — creating a distribution over words/images and then grouping by topic. A topic model is a statistical model that allows you to discover the "abstract topics" that occur in a set of documents.
Time Series Forecasting — an algorithm used internally by Amazon for forecasting several things.
Spectral LDA — another topic modeling algorithm.
XGBoost — boosted decision trees. A decision tree is a simple representation for classifying examples, used as a predictive modeling approach; it can represent decisions and decision making. Boosting is a way to reduce bias and to convert weak learners into strong ones.
Sequence to Sequence — a powerful algorithm that uses recurrent neural networks (RNNs) for language-to-language translation. RNNs take the (usually encoded) output of the previous RNN as one of the inputs. One RNN forms possible sentence combinations for a given language and encodes them as its output; another RNN takes this encoded output, decodes it, and uses its memory of the previous results to find the corresponding sentence in the other language.
Image Classification — an implementation in MXNet.
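The k-means idea in that list ("partition n objects into k clusters based on the nearest mean") reduces to a two-step loop. This is a toy one-dimensional illustration of the core algorithm, not SageMaker's distributed, streaming implementation; the data and starting means are made up:

```python
def kmeans_1d(points, means, iterations=10):
    """Toy 1-D k-means: alternate assignment and mean-update steps."""
    clusters = [[] for _ in means]
    for _ in range(iterations):
        # Assignment step: each point goes to its nearest mean.
        clusters = [[] for _ in means]
        for p in points:
            nearest = min(range(len(means)), key=lambda i: abs(p - means[i]))
            clusters[nearest].append(p)
        # Update step: each mean moves to the centroid of its cluster.
        means = [sum(c) / len(c) if c else m for c, m in zip(clusters, means)]
    return means, clusters

means, clusters = kmeans_1d([1.0, 2.0, 9.0, 10.0], [0.0, 5.0])
print(means)  # [1.5, 9.5]
```

With two well-separated groups the means settle after one pass; the real algorithm does the same thing over vectors, at scale, with smarter initialization.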
ReInvent — SageMaker
1
reinvent-sagemaker-163e4767f710
2018-03-17
2018-03-17 15:10:04
https://medium.com/s/story/reinvent-sagemaker-163e4767f710
false
506
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Vasu Lakshmanan
null
7ef935c19403
Vasu_Laksh
20
21
20,181,104
null
null
null
null
null
null
0
null
0
af80a08866e3
2018-08-11
2018-08-11 01:53:23
2018-08-11
2018-08-11 02:12:16
1
false
en
2018-08-11
2018-08-11 02:13:11
1
16422ebdb793
2.667925
0
0
0
AI-driven personalised planograms help retailers achieve new margins.
5
Want some Beer to go with the Diapers? AI-driven personalised planograms help retailers achieve new margins. The beer-and-diapers story is an urban legend. No amount of searching on Google — I know, I have done a few — will get you to its origin. For what it's worth, the most common version tells of a store clerk who noticed that young fathers on a late-night diaper run were also likely to pick up a six-pack of beer. So the store moved these items closer together, and sales of both zoomed. While the story itself may be apocryphal, the underlying science is certainly not. A challenge retailers face today is orienting their storefront so that customers can easily find what they are looking for. Retail stores use visual merchandisers and planogram software to design the optimal look for the store. Better related-product positioning and improved sales are just two basic reasons retailers should implement planograms in their shops. Planograms provide other benefits as well, including:
Assigned selling potential for every square foot of space
Satisfying customers with better visual appeal
Tighter inventory control and fewer out-of-stocks
Easier product replenishment for staff
Any good retailer realises that the key to increased sales is proper visual merchandising and offering the right product assortment or mix. A planogram is one of the best merchandising tools for presenting products to the customer. If you are a small retailer, however, say one store, planograms are harder to pull off: planogram software is expensive, as is hiring a visual merchandiser. And if you are a big-box retailer, the challenge is that a planogram creates a generic floor plan, a best fit across all your chain stores. Creating individual planograms for each store, or even each vending machine, has not been feasible.
It has been virtually impossible to parse the data for each individual store, or even each vending machine. Until now, that is. Using data mining and AI algorithms, it is possible to create individualised planograms for every single location, be it a storefront or a vending machine. Individual planograms matter because each location has a different demographic. Temperature, socioeconomic status, and the mix of population and age are just some of the factors that define a neighbourhood, and all of them shape the kind of product mix a retailer must carry for optimum sales. For a chain store it is impossible to factor micro-localisation into its planograms. An individualised planogram, however, uses all of the above information as input to create an optimal retail selling space. It can recommend the ideal product mix, space allocation and price points to ensure maximum outtake at each retail front. Since these individualised planograms are data-driven, they are essentially free from biases, historical or human; they reflect only the best potential of the retail outlet in that particular location. Let's take what we at HIVERY have done with planograms for vending machines as an example. We have used AI and mathematical optimisation techniques to create fingerprint planogram recommendations for retail outlets, and the results have been quite dramatic. In one example we saw additional revenue of $2,200 per annum by optimising a combination of space, the flavours of drinks to be stocked and their price points. And this was for a single vending machine. On average, customers report a 15 percent increase in revenue and an 18 percent decrease in restocking costs once they deploy our AI-driven planograms across their vending machine fleet. An AI-driven solution such as ours helps address a dark spot in the market.
Personalised planograms will help retailers realise profits where none existed before. Originally published at www.hivery.com.
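As a caricature of the optimisation problem being described, consider choosing a product mix for one location under a shelf-space constraint. The products, revenue figures and greedy heuristic below are all invented for illustration; HIVERY's actual models combine space, flavour and price in far richer ways:

```python
# Each candidate product: (name, expected weekly revenue at this location, shelf slots used).
# All numbers are hypothetical.
products = [
    ("cola",   120.0, 3),
    ("water",   90.0, 2),
    ("energy", 150.0, 4),
    ("juice",   60.0, 2),
    ("tea",     45.0, 1),
]

def pick_mix(products, capacity):
    """Greedy fill by revenue per shelf slot: a crude stand-in for a real optimiser."""
    chosen, used = [], 0
    for name, rev, slots in sorted(products, key=lambda p: p[1] / p[2], reverse=True):
        if used + slots <= capacity:  # take the product only if it still fits
            chosen.append(name)
            used += slots
    return chosen

mix = pick_mix(products, capacity=8)
```

Even this toy version shows the trade-off the article alludes to: the highest-revenue item ("energy") is dropped because its space cost is too high relative to the alternatives.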
Want some Beer to go with the Diapers?
0
want-some-beer-to-go-with-the-diapers-16422ebdb793
2018-08-13
2018-08-13 01:24:01
https://medium.com/s/story/want-some-beer-to-go-with-the-diapers-16422ebdb793
false
654
Data Has a Better Idea
null
null
null
HIVERYai
hello@hivery.com
hiveryai
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,RETAIL,INNOVATION,OPTIMIZATION
HIVERYai
Retail
retail
Retail
16,358
HIVERY
HIVERY is a Data61/CSIRO and Coca-Cola backed AI company that's reinventing how retailers make decisions
3f225f89b34d
hivery
6
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-03
2018-06-03 22:19:57
2018-06-04
2018-06-04 00:16:29
3
true
en
2018-06-05
2018-06-05 02:31:14
4
164244b24f02
2.485849
13
4
0
A Short Timeline of Robots Invading Media and Journalism
5
i4j.info The Future of News is Automated — Robots taking over Journalism A Short Timeline of Robots Invading Media and Journalism
2014: AP implemented a system that, as of early 2015, published 3,000 robo-journalism stories every quarter.
2016: Let robots write sports stories.
2018: Let robots write breaking news stories.
2019: Aggregated-content apps that use AI start winning against news publications in the attention economy.
2019: Subscription bundles are the norm.
2020: Let the robots create your micro videos.
2021: Written content consumption is in steep decline.
The Verge Gen Z and Younger Millennials are Video-First Natives It's not just that Gen Z shows a strong preference for video content; it's that automated robot news journalism is taking over from struggling publications that are going the way of the subscription paywall. According to a recent BBC news report, by 2022 some 90% of all news content will be written by "robots." Digitization, and the rapidly increasing amount of data it has made readily available, is enabling large parts of today's reporting to be created by a computer: the weather, football and stock markets have been the first areas in which "natural language generation" programs have been able to deliver good, readable stories. — Roland Berger Robots are Faster to Stories Bloomberg recently reported on JX Press Corp., a news technology venture founded in 2008 by Katsuhiro Yoneshige while he was still a freshman in college: an automated newsroom run by robots that can publish stories faster than major news agencies. Using social media, machine learning can cover stories faster than any human could. JX Press uses a combination of social media and artificial intelligence: Yoneshige and his team have developed a machine learning tool for finding breaking news in social media posts and writing it up as news reports. Essentially, it's a newsroom staffed by engineers.
Meanwhile in Silicon Valley there's a robo-journalism startup called Knowhere that is capable of writing an article from multiple perspectives. Robo journalism is advancing very fast, and is likely to take the jobs of humans, both editors and journalists, in the years to come. A system like Knowhere can write the same article in different formats:
Positive spin
Negative spin
Left-biased
Right-biased
Ironically, then, it is journalism by AI that can help combat misinformation and echo chambers. Not only did Facebook destroy publications and their traffic sources; it also created a weaponized platform for algorithms that hastened the automation of journalism, with software, machine learning and content-aggregation apps that use AI to learn the preferences of readers faster and better than the likes of Google, Facebook or Medium. Video Content Expected to Follow Robo Journalists The point where AI will be able to "create" video news content is not far off. Many micro videos are nothing but captions, background music and pulled video footage and photography, all of which can easily be automated by software to produce "trending" breaking news on any topic that will be watched more than any 400-word article in a mobile era of content consumption.
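JX Press has not published its methods, but the core idea of finding breaking news in a social stream can be caricatured as spike detection: flag terms whose mention rate jumps sharply between time windows. A toy sketch with invented posts and thresholds:

```python
from collections import Counter

def breaking_candidates(prev_window, curr_window, min_count=3, min_ratio=3.0):
    """Flag terms whose mention count jumps sharply between two time windows."""
    prev = Counter(w for post in prev_window for w in post.lower().split())
    curr = Counter(w for post in curr_window for w in post.lower().split())
    flagged = []
    for term, n in curr.items():
        # Require both an absolute volume and a sharp relative jump over the prior window.
        if n >= min_count and n / (prev[term] + 1) >= min_ratio:
            flagged.append(term)
    return flagged

# An invented spike in the word "earthquake" between two windows of posts.
prev = ["nice weather today", "coffee time"]
curr = ["earthquake downtown", "huge earthquake felt", "earthquake shakes city"]
alerts = breaking_candidates(prev, curr)
```

A production system would add deduplication, source credibility scoring and a learned classifier on top; the point here is only that velocity over a social stream is the raw signal.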
The Future of News is Automated — Robots taking over Journalism
229
the-future-of-news-is-automated-robots-taking-over-journalism-164244b24f02
2018-06-09
2018-06-09 19:37:57
https://medium.com/s/story/the-future-of-news-is-automated-robots-taking-over-journalism-164244b24f02
false
513
null
null
null
null
null
null
null
null
null
Journalism
journalism
Journalism
39,588
Michael K. Spencer
Blockchain Mark Consultant, tech Futurist, prolific writer. WeChat: mikekevinspencer
e35242ed86ee
Michael_Spencer
19,107
17,653
20,181,104
null
null
null
null
null
null
0
null
0
592e0f336366
2018-07-09
2018-07-09 14:10:05
2018-07-09
2018-07-09 14:12:16
1
false
en
2018-07-09
2018-07-09 14:12:16
13
164337cc73d9
1.433962
2
0
0
The IAGON community has been growing and we are proud to have had the support of everyone as we work on the development of this…
5
CREMIT Joins IAGON’s Initial Adopter Program The IAGON community has been growing, and we are proud to have had everyone's support as we work on the development of this groundbreaking platform. It won't come as a surprise to our followers that we aspire to continue developing relationships with other pioneering companies in this space. Today, we are overjoyed to introduce our community to the most recent company to forge a significant relationship with IAGON as an Initial Adopter: CREMIT. CREMIT is the first insurance and banking solutions provider for cryptocurrencies. Its creative web- and app-based multi-crypto-asset exchange platform addresses crypto investors' problem of mitigating the risk of losing money. The services offered by CREMIT are certificates of deposit, insurance, loans, and trading of Bitcoin, Ethereum, ERC20 tokens and altcoins. For the time being, CREMIT will be utilizing IAGON's platform for its secure storage and processing capabilities as we continue to work hard on delivering an optimal platform to all of our users. Although there is no doubt about the importance and potential of decentralized cloud services, for the use of these platforms to be tested in real time we are going to need the help and input of our initial adopters. Not only do they give necessary feedback on the platform and its overall performance, but their presence is also a great way to build visibility and gain traction while forging long-lasting and mutually productive relationships across the industry. The joint enthusiasm that both IAGON and CREMIT hold for blockchain technology and artificial intelligence has been the catalyst behind our budding relationship, and we are excited to welcome them to IAGON as our newest Initial Adopter. For more information and to see what else is going on @ IAGON, please follow us at the social media links below, or head over to the IAGON Website!
Facebook, Instagram, LinkedIn, Steemit, Reddit Bitcointalk, Twitter, Telegram, Youtube, Medium, Github
CREMIT Joins IAGON’s Initial Adopter Program
51
cremit-joins-iagons-initial-adopter-program-164337cc73d9
2018-07-09
2018-07-09 14:12:17
https://medium.com/s/story/cremit-joins-iagons-initial-adopter-program-164337cc73d9
false
327
Iagon is a platform for harnessing the storage capacities and processing power of multiple computers over a blockchain grid. Secured and encrypted platform that integrates blockchain, cryptographic technologies & AI, enhancing the overall usability.
null
IagonOfficial
null
Iagon Official
navjit@iagon.com
iagon-official
ARTIFICIAL INTELLIGENCE,CLOUD COMPUTING,BLOCKCHAIN TECHNOLOGY,CLOUD STORAGE,ICO
IagonOfficial
Blockchain
blockchain
Blockchain
265,164
Rose Marie
Project Lead/Content Director @ IAGON
b7347cf5ce45
rosemariewritenow
68
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-08
2018-04-08 22:44:24
2018-04-09
2018-04-09 00:05:47
0
false
en
2018-04-09
2018-04-09 00:05:47
0
164465d2b24f
2.698113
0
0
0
Artificial Intelligence often conjures up images of killer robots and the destruction of all mankind, but in reality it is just a bunch of…
1
Reading 11: Intelligence Artificial intelligence often conjures up images of killer robots and the destruction of all mankind, but in reality it is just a bunch of statistics. Often, artificial intelligence makes decisions based on previous data to make predictions about a very narrow topic. This is one major difference from human intelligence, but the main one is that artificial intelligence does not yet think, whatever that means. Something is missing: the ability to apply the knowledge the machines have 'learned' to applications beyond the limited dataset they were trained on. But what these machines know how to do, they do well, often better than humans. AlphaGo, Watson, etc. are examples of machines beating us, in the complex game of Go and on the game show Jeopardy. More than just interesting gimmicks, these mainstream examples prove how important artificial intelligence will become. These programs are able to process massive amounts of data far faster than any human and learn interesting patterns and insights from that data. But as one author notes, the ability to learn such patterns from small amounts of data is still lacking, and the applications are still quite narrow. However, the processes used to create these programs can be applied to many fields and will be important in many industries. How do we determine whether we have created an intelligent machine? Alan Turing developed the Turing test, in which a human evaluator judges a conversation with a person and a conversation with a machine; if the responses are indistinguishable, the machine has passed. A counter-argument to this test is the Chinese Room, which holds that a machine that produces fluent Chinese does not actually understand what it is outputting. This lack of understanding, to me, is a compelling argument against the impending doom of general AI, and it indicates what AI is currently missing.
Searle, the author of the Chinese Room, believes that there are specific processes in the brain that give rise to consciousness. Until we can locate these processes and replicate them, we will not have intelligence that is truly equal to ours. But does consciousness matter when considering the dangers of AI? One philosopher noted that if we create a machine whose sole purpose is one specific task, and humans prevent the machine from completing that task, the machine may simply kill all humans. Certainly, if in that situation the machine has access in some way to a method of killing, it will use it, consciousness or not. But when applied correctly, artificial intelligence could be the most important advancement in the history of humanity. To get there, however, it needs to be properly regulated and must serve to augment humans. My worries about artificial intelligence do not stem from a 'doomsday scenario', or even the hypothetical situation above. After all, for such an event to happen, the machine would have to be able to control some weapon over long distances, which would be a ridiculous thing to allow. Rather, my fears arise from changes in how we interact as humans. It is easy to imagine a pair of glasses that would let a user read the body language of another human and gain an advantage in a business deal, or look up all available information on someone and present it to the user. Further, if artificial intelligence pushes many humans out of work, as authors such as Yuval Harari often predict, where will they find meaning? Harari predicts they will turn to drugs and immersive VR, all of it provided by a government that may act simply to control its citizens and keep them happy. Where is the humanity in all of this? This is a more pressing question than any of the doomsday killer-robot scenarios.
If we are somehow able to quantify thoughtfulness and consciousness, and we see those attributes in a machine, only then can we consider the machine a mind. This is a troubling thought, because it would mean we are only a biological version of that machine. Do we then have an ethical obligation to keep that machine alive, or to kill it? What makes us different from the machine?
Reading 11: Intelligence
0
reading-11-intelligence-164465d2b24f
2018-04-09
2018-04-09 00:05:48
https://medium.com/s/story/reading-11-intelligence-164465d2b24f
false
715
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Andrew Munch
null
a1718c00e836
amunch
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-23
2017-09-23 12:58:18
2017-09-23
2017-09-23 12:59:10
0
false
en
2017-09-23
2017-09-23 12:59:10
1
1644a6ed2d71
0.486792
0
0
0
This is awesome, important, and (much as I hate to say it) really groks BCI at a deep, profound level even while operating at a high and…
4
It’s A Brave New World. This is awesome, important, and (much as I hate to say it) really groks BCI at a deep, profound level even while operating at a high and easy pace. It’s exactly right to get into the picture while retaining the deep global awareness at all times. I think that in the next ten years we’ll have BCI (very likely through Neuralink although there are several very worthy competitors) that operates all the time, directly into the brain, at a superfast level, and we will have to deal with the awesome consequences for both good and bad — but that’s one area where I’m very excited rather than fearful. It’s a brave new world, and Elon has his pulse exactly on the true issue. https://www.youtube.com/watch?v=5YxzWnbqaJI
It’s A Brave New World.
0
its-a-brave-new-world-1644a6ed2d71
2018-02-28
2018-02-28 18:05:07
https://medium.com/s/story/its-a-brave-new-world-1644a6ed2d71
false
129
null
null
null
null
null
null
null
null
null
Music
music
Music
174,961
Peter Marshall
I am extremely interested in AI, especially the not-so-good side of AI weapons and AI war, although the good parts are magnificent and wonderful too, naturally.
f6bab8ee3d29
ideasware
1,765
276
20,181,104
null
null
null
null
null
null
0
null
0
31213422a716
2017-09-04
2017-09-04 15:10:02
2017-09-04
2017-09-04 12:27:58
2
false
fr
2017-09-04
2017-09-04 15:13:57
3
1645e6402136
2.387107
0
0
0
Thanks for the tweet from François Charlet, one of our favourite curators
4
Huawei moves toward artificial intelligence executed locally on smartphones — Tech Thanks for the tweet from François Charlet, one of our favourite curators. Source: http://www.numerama.com The Chinese dragon has unveiled a chip, the Kirin 970, that integrates a Neural Processing Unit (NPU), in effect a small artificial neural network. This distinctive architecture, used to deliver so-called artificial intelligence features, should let smartphones perform machine learning locally. That is a genuine first on the mobile market. In 2013, the American chipmaker Qualcomm unveiled its Zeroth, a chip inspired, by analogy, by the human brain. With its artificial neurons, Zeroth was designed for small devices that need to perceive, understand and interpret data without the help of an artificial neural network offloaded to the cloud. Yet despite being a pioneer, the American firm never brought the chip to the consumer market. Pioneer of the tiny artificial brain Four years later, at IFA, another chipmaker stole the spotlight from Qualcomm: Huawei. The Chinese dragon, better known for its smartphones than for the chips inside them, is nonetheless a heavyweight in mobile processors, as its previous models have shown by easily rivalling phones equipped with Qualcomm's solutions. The group's CEO, Richard Yu, announced with pride that its next processor, expected in its upcoming models, will embed a Neural Processing Unit (an integrated artificial neural network). The Chinese firm wants to be at the cutting edge of mobile artificial intelligence, and intends to run it locally as much as in the cloud. Yu believes the future of mobile AI will rest on complementarity between computations executed on the phone and those handled by powerful remote machines.
Huawei believes the NPU will bring gains in stability, speed and, of course, confidentiality: all the data the AI needs for its inferences can be kept on the phone, without being exposed to transfer to servers. The challenge was, quite literally, one of size, since artificial neural networks are often larger than conventional chips. Their serial architecture, lining up neurons the way a brain does, is demanding in both resources and space. According to the Chinese firm, the challenge was met thanks to a 10 nm process. The company even boasts an eloquent score: the Kirin 970 can reportedly process more than 2,000 images per minute when running image recognition (machine learning). While the manufacturer has not unveiled a software solution to accompany this clear technical progress, we can already imagine that its future smartphones should worry the new kings of AI, Amazon and Google, by claiming an obvious technical advantage. This was not a direction we saw coming, but one now wonders whether Apple has the same thing in the works, given that Cupertino has always wanted to go further in protecting user data... which is why Siri now lags behind. We will find out on the 12th. Originally published at www.numerama.com on September 4, 2017.
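For a sense of scale, the figure quoted above converts to roughly 33 images per second, about 30 ms per frame, which would be comfortably real-time for a camera preview. This is a back-of-the-envelope conversion only, not a benchmark:

```python
# Throughput figure quoted for the Kirin 970's on-device image recognition.
images_per_minute = 2000
images_per_second = images_per_minute / 60   # roughly 33.3 images/s
ms_per_image = 1000 / images_per_second      # roughly 30 ms per frame
```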
Huawei moves toward artificial intelligence executed locally on smartphones — Tech
0
huawei-avance-vers-lintelligence-artificielle-exécutée-en-local-sur-les-smartphones-tech-1645e6402136
2018-05-09
2018-05-09 22:18:59
https://medium.com/s/story/huawei-avance-vers-lintelligence-artificielle-exécutée-en-local-sur-les-smartphones-tech-1645e6402136
false
531
Are you "Bot Ready"? In French-speaking Switzerland and neighbouring France! A collaborative monitoring programme with CloudReady.ch, in collaboration with Tech4good and Léman Innovation Numérique (LIN). Chatbot, voicebot, virtual assistant, avatar, autonomous personal agent (ASA)…
null
null
null
BotReady
info@botready.ch
chatbot-ch
CHATBOT
null
Smartphones
smartphones
Smartphones
9,713
Info BotReady
null
d641a9a1f2e9
info.botready
1
8
20,181,104
null
null
null
null
null
null
0
null
0
292ef10afbdc
2018-03-28
2018-03-28 08:35:07
2018-03-28
2018-03-28 10:42:09
1
false
en
2018-03-28
2018-03-28 10:42:09
1
1646decc38a2
3.683019
4
0
0
Susan Poole, Head of Strategy at 23red explains why 2018 is the year of human-centered tech.
2
Let’s create a future that is less techy and more magical Susan Poole, Head of Strategy at 23red, explains why 2018 is the year of human-centered tech. SXSW this year felt a different place from last year. In 2017 the overriding emotions of SXSW were anger and fear: fear of the rapid rise of AI, and anger at the effect it might have on us all. In contrast, this year felt like SXSW had grown up and mellowed out. Gone were the fear and anger, replaced with empathy and hope. If 2017 was the year of man vs tech, then 2018 is the year of human-centered tech. What does human-centered tech mean? Tech that feels less technical and isn't just celebrated for its ingenuity; instead it is about working together to deliver something greater than the sum of our parts, built on our best understanding of humans. Creating something that is more empathetic, perception-pushing and, quite frankly, magical. Using technology to understand, augment and enhance the human condition. How could this manifest itself? A number of streams at SXSW came together under the premise of human-centered tech. Diverse people to ensure diverse thoughts There was strong rhetoric around ensuring greater diversity in tech teams, especially in AI. AI is susceptible to biases, and the unconscious biases of those who train an AI system are often passed on to the AI. Fei-Fei Li, Director of the Stanford AI Lab, spoke passionately about the importance of increasing the diversity of people entering the industry, both from a gender perspective and in terms of race, background and sexuality. "Diverse people. Diverse thoughts." The importance of encouraging a broader spectrum of people into the STEM pipeline was clear. In the UK, 2018 is the Year of Engineering, with exactly that objective. These initiatives, and others like them around the world, will be vital.
But we can't wait for these initiatives to deliver a new workforce in five or ten years; firms also need to actively seek out diverse teams right now if we are going to shape AI systems before it is too late. Collaborate to further our understanding The development of technology like AI can't be left to the current teams alone. They understand tech, but do they understand people? There was talk of inviting the social sciences, including psychology, sociology and the behavioural sciences, into the heart of development. This is vital if we are to develop technology hand in hand with humans. A multi-function team could enable a better understanding of how we act, think and feel as humans together with technology; this could benefit human understanding and give us a chance to build more useful machines that benefit society. In February 2018 MIT launched 'IQ: The MIT Intelligence Quest' to "forge connections between human and machine intelligence research, its applications, and its bearing on society." It is a collaboration between life scientists, computer scientists, social scientists and engineers. "By uniting diverse fields and capitalising on what they can teach each other, we seek to answer the deepest questions about intelligence." Amplify human intelligence While we often like to think of ourselves as individuals, we are heavily influenced and shaped by those around us. We use this daily to navigate the world and to help with decision making through heuristics (short-cuts). But what if we could harness this collective power? Unanimous AI talked about the power of the hive mind using 'swarm AI'. This technology won Best in Show at the 2018 SXSW Innovation Awards. It connects groups of people in real-time closed-loop systems to amplify human intelligence. For example, when a swarm was used to predict the Oscars, individual accuracy was 40%, but the collective was 76% accurate (compared with 64% for the critics).
This hive-mind approach uses technology to enhance individual performance and potentially benefit the masses. Push the boundaries of human perception Over the last few years, the potential of VR to 'transport' people to impossible locations or spaces has been much discussed. What Microsoft Research teams revealed at SXSW was the latest research in actually creating new perceptual experiences through VR and AR. So this is not about taking you to a faraway or impossible place, but about giving you a sensory experience that isn't possible outside of VR. They shared the potential of VR to call into question reality as we understand it. They also shared the potential of AR to help us 'see' physiological inputs that aren't visible to the human eye, so we can 'read' others beyond the facial movements and body language we use today. These new developments could be valuable for those among us who struggle to read others, such as people on the autism spectrum; thinking wider, this could be the start of a whole new way of engaging with other humans beyond what is currently possible. Where does this leave us? It leaves us focusing less on technology that is just new and clever for the sake of it, and more on developments that shape, enhance or amplify human intelligence. This means bringing an understanding of real people to technology to create things that support, augment or even enhance the human experience. To quote the ever-wonderful Dr Kate Stone: "I believe the future will be more magical than technical." For more from the IPA on SXSWi 2018, read Matt Rhodes' blog Don't be slaves to the algorithm. Understand them.
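The accuracy jump in the Oscars example has a simple statistical backbone: when each voter beats chance on a multi-option question, aggregating many independent votes by plurality sharply boosts accuracy. The simulation below uses assumed parameters (five options, 40% individual accuracy, independent voters) and is not a model of Unanimous AI's actual swarm method:

```python
import random

def plurality_accuracy(n_voters, n_options=5, p_correct=0.4, trials=2000, seed=42):
    """Estimate how often a plurality vote of independent voters picks the right option."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = [0] * n_options  # option 0 is the correct answer
        for _ in range(n_voters):
            if rng.random() < p_correct:
                votes[0] += 1
            else:
                votes[rng.randrange(1, n_options)] += 1  # a wrong option, uniformly
        # Count a win only when the correct option is the unique plurality choice.
        if votes[0] == max(votes) and votes.count(votes[0]) == 1:
            wins += 1
    return wins / trials

solo = plurality_accuracy(1)    # close to 0.40, the individual accuracy
crowd = plurality_accuracy(51)  # well above the individual figure
```

The effect holds because the 60% of wrong votes scatter across four options while correct votes concentrate on one; swarm systems add real-time feedback on top of this basic aggregation advantage.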
Let’s create a future that is less techy and more magical
26
lets-create-a-future-that-is-less-techy-and-more-magical-1646decc38a2
2018-04-03
2018-04-03 12:20:02
https://medium.com/s/story/lets-create-a-future-that-is-less-techy-and-more-magical-1646decc38a2
false
923
Building a virtuous circle between technology, innovation and communications.
null
theipa
null
Emerging Futures
social@ipa.co.uk
emerging-futures
TECHNOLOGY,INNOVATION,STARTUP
IPA_Emerging
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The IPA
The professional body for UK advertising, media & marcomms agencies.
bfe031123b77
TheIPA
4,918
1,084
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-25
2018-02-25 03:56:54
2018-02-25
2018-02-25 03:59:08
3
false
en
2018-02-25
2018-02-25 03:59:08
2
1647f288f8a8
2.69717
0
0
0
Panasonic announces the launch of a powerful IoT platform developed with startup partners. The platform provides connectivity for “Touch by…
5
“Touch by Panasonic” Panasonic launching touch-based IoT services platform with Silicon Valley startups at CES 2018 / SANDS #40060 Panasonic announces the launch of a powerful IoT platform developed with startup partners. The platform provides connectivity for “Touch by Panasonic” products with natural, touch-based UX designs. The first products were created for process automation and identity verification applications, and they are currently in field trials in various countries. Our unique connected-device technologies are evolving into a broad, deep, flexible IoT platform developed through lean-startup and open-innovation methodologies. Two startup partners contributed greatly to these innovations: the IoT platform software technology company Webee and the networking and connectivity provider Soracom. Each partner brings both technology and business development expertise to the platform. “Our partners have helped us bring our touch-based IoT innovations to our customers, and our first customers are helping us to see the full scope of applications across many industries,” said Yushi Nakamura, Senior Manager of Panasonic's Electro Mechanical Control Business Division (EMCBD). At CES / Sands, Panasonic EMCBD will be demonstrating products that have many applications across industries such as residential, office, factory and medical. Two examples are ENY, the one-touch automation solution, and a Touch ID Verification wearable device solution. Because of the flexibility of the IoT platform, each product can be configured to meet many unique needs. Understanding business needs is key to our new business development processes, and the reason for our pilots around the world. Here are two examples. We worked with a hotel chain in North America to install our ENY products in rooms, and plan to expand to other hotels. The Touch ID Verification wearable device is now being piloted at an automotive parts manufacturer in North America.
More information will be available from our team in Booth #40060 at CES / SANDS 2018 in Las Vegas, Nevada, separate from the main Panasonic exhibit. We will demonstrate these natural “Touch by Panasonic” experiences from Tuesday, January 9 until Friday, January 12, 2018, and we can talk about the ongoing pilots and our partners. “ENY” One-touch home automation A button is the simplest user interface, but when reimagined with human-centric UX design as a B2B IoT service platform, ENY is the result. This IoT service package is available as a highly flexible, easy and convenient offering for businesses and end consumers, from home lighting control to emergency-alert switches for factories. ENY is not only hardware but a set of cloud services and a smartphone app, controlled from a wireless, batteryless, maintenance-free button. The strengths of Panasonic's human-interface design informed this small, high-efficiency IoT switch for residential, senior housing, hotel, office, factory and other settings. To get updated news, register your e-mail here: https://www.goo.gl/MSd4hj. “Touch ID” One-touch ID verification ID verification is critical for many use cases, including physical access, data provisioning and medical treatment. Now Panasonic has harnessed the power of electric-field communication technology to make human touch the user interface to our ID authentication B2B IoT service platform. Touch is the simplest user interface and the most natural human-centric UX design. We are providing an IoT service package that businesses of all sizes can use to create applications for easy ID authentication by touch. To get updated news, register your e-mail here: https://www.goo.gl/CSP5Jq. Contact : touch@ml.jp.panasonic.com
“Touch by Panasonic” Panasonic launching touch-based IoT services platform with Silicon Valley…
0
touch-by-panasonic-panasonic-launching-touch-based-iot-services-platform-with-silicon-valley-1647f288f8a8
2018-02-25
2018-02-25 03:59:09
https://medium.com/s/story/touch-by-panasonic-panasonic-launching-touch-based-iot-services-platform-with-silicon-valley-1647f288f8a8
false
569
null
null
null
null
null
null
null
null
null
Smart Home
smart-home
Smart Home
3,891
Nakamura Yushi
null
1c39b5f644d5
nakamurayushi
177
178
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-24
2018-07-24 08:19:50
2018-07-25
2018-07-25 11:16:56
1
false
en
2018-07-25
2018-07-25 11:16:56
29
1647f301a09d
3.10566
4
0
0
Newsgasm
5
#0081: Facial recognition and toilet paper Photo by Nathaniel dahan on Unsplash Newsgasm This is a frightening and thoroughly depressing indictment of the contempt with which the richest of the rich hold the rest of humanity. #pointoneofonepercent #contempt In contrast, this video of autonomous car startup Zoox is pretty much at the other end of the spectrum of human endeavour. What they have achieved with their autonomous vehicle platform is nothing short of amazing. Here’s some further in-depth analysis of what Zoox is up to. #autonomous #vehicle #whoa This week’s Economist has a think piece pondering companies without employees. They also chime in on personal data sovereignty: What if people were paid for their data?. @D0nPerfect0 asked in response to that question: What if people paid for their privacy? My sense is that the business models of the Big Data Monopolies would break down, but it’d be a fun experiment. #data #privacy #sovereignty Speaking of business models for the Big Data Monopolies, Google is now advocating an open source platform that promotes universal data portability. Is this part of Google’s plan to keep out of the way of European regulators? Or is it a commoditise-your-complements play to help break Facebook’s monopoly hold over the social graph? Well, maybe not the latter, given that Farcebook is participating in the initiative. #data #privacy #sovereignty #confused Check out this train wreck of an interview between Farcebook’s Mark Zuckerberg and Recode’s Kara Swisher. Does anyone need any more evidence that these Big Data Monopolies have some serious issues, of material societal impact, that need to be worked thru? It also highlights just how inadequately equipped Zuckerberg, and Facebook, appear to be when it comes to handling issues of this gravity. #data #privacy The Chinese Surveillance State continues to grow at pace. 
One school is tracking student faces to know who is late, and combines data about who visited the cafe with their menu choices to see who is gorging on fatty food. Another municipality is using facial recognition tech in their toilet blocks to control how many pieces of toilet paper people receive. Come back for more paper in under 9 minutes and you’ll miss out. Too bad if you’ve got the runs. #ai #ml #data #facialrecognition #china Something that really surprised me: in 1999, Jeff Bezos was the 10th richest billionaire in the World. 1999 is only five years after Amazon started in 1994. This adds some background colour to the myth of the “self made billionaire” narrative, and casts this famous image in a new light. It’s amazing what you can do with a handy USD$300,000 ‘loan’ from your parents. #myth #selfmade #billionaire For the first time in quite some time, here’s a piece on a new cryptocurrency that reads well, and importantly, seems to make a reasonable amount of sense: Decred investment thesis. Enthusiasm for this piece needs to be tempered by the fact that it was written by a VC (Placeholder) that invested in Decred, but all the same, the rationale for their investment seems worthy of exploration. Here’s some more on Decred’s hybrid proof-of-work/proof-of-stake architecture. #cryptocurrency #blockchain #decentralisation Combining two of my favourite topics: AI & blockchain: An Introduction. Despite the click-bait headline, this article makes some thought-provoking points. #ai #ml #machineintelligence #blockchain #cryptocurrencies A market-network occurs when you combine a marketplace, where transactions are facilitated between multiple buyers and sellers (think Uber, eBay, AirBnB, etc), and a network, where members’ profiles project their identity into a community of like-minded others for enhanced collaboration (think Twitter, Facebook, LinkedIn, etc). This article goes on to discuss seven attributes of a successful market-network. 
Back in ‘Four opportunities’ and ‘Five new economies’ I pondered how tech innovation appears to go thru a cycle from raw tech, to infrastructure that supports the growing use of the raw tech, then marketplaces that enable buying and selling goods and services based on the new tech, and then finally, opportunities for new aggregations to form on top of the old businesses and business models. I would argue that this market-network idea is a specialised form of marketplace, where the network effects of a connected graph of users supercharge the marketplace’s ability to profitably connect buyers and sellers. Well worth a read. #marketplace #network Only in Australia Who created this 4.2km figure in a remote part of South Australia, and why? Marree Man: The enduring mystery of a giant outback figure. #onlyinaustralia Regards, M@ ED: If you’d like to sign up for this content as an email, click here to join the mailing list.
#0081: Facial recognition and toilet paper
57
0081-facial-recognition-and-toilet-paper-1647f301a09d
2018-07-27
2018-07-27 15:30:56
https://medium.com/s/story/0081-facial-recognition-and-toilet-paper-1647f301a09d
false
770
null
null
null
null
null
null
null
null
null
Future
future
Future
22,833
M@
Building stuff with bits since c1990. https://keybase.io/matthewsinclair https://twitter.com/matthewsinclair
2d9370f7f6e1
matthewsinclair
367
544
20,181,104
null
null
null
null
null
null
0
null
0
b9c490bd1fa1
2018-05-09
2018-05-09 23:17:16
2018-05-10
2018-05-10 02:59:32
1
false
en
2018-07-01
2018-07-01 19:04:30
2
164a41008771
0.85283
0
0
0
We are encouraging all companies — throughout the world — to open their doors one morning a month to help nonprofits, public sector groups…
5
#HelpingForward We are encouraging all companies — throughout the world — to open their doors one morning a month to help nonprofits, public sector groups and other organizations and individuals that are looking to make positive change in the world. And, at the same time, to welcome traditionally underrepresented people in their industry or sector who are looking to break into their field to help these organizations as well and see where it might all lead. If you were helped along the way to where you are today, ask yourself if it is your turn for #HelpingForward. With this blog post, the co-founders of KUNGFU.AI are hoping to start a global movement under the banner #HelpingForward. It’s ambitious, but think about the impact if we can catch lightning in a bottle and this takes off! KUNGFU.AI is kicking off its #HelpingForward initiative with what we’re calling AI for Good #HelpingForward. KUNGFU is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions. Check us out at www.kungfu.ai
#HelpingForward
0
helpingforward-164a41008771
2018-07-01
2018-07-01 19:04:30
https://medium.com/s/story/helpingforward-164a41008771
false
173
KUNGFU.AI is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions.
null
null
null
KUNGFU.AI
null
kung-fu
ARTIFICIAL INTELLIGENCE,AUSTIN TEXAS,CONSULTING,DATA SCIENCE,AI
kungfuai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Stephen Straus
Managing Partner, KUNGFU.AI
633e059511d8
ssaustin65
119
95
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-10
2018-08-10 23:54:11
2018-08-11
2018-08-11 08:51:06
1
false
en
2018-08-12
2018-08-12 23:54:33
3
164a736d4dd8
0.913208
1
0
0
A 2 part serie on understanding and beating OpenAI Five bot
5
Beating OpenAI At Dota By Trolling Fundamentally speaking, the OpenAI bot is trained like a pigeon. It is to our advantage to respect the bots’ feats of acrobatics while simultaneously exploiting their lack of planning and judgement. I have been contemplating starting a blog, and with the recent OpenAI Five benchmark, it seems like a good time to start. The bots defeated the humans convincingly 2–0, but I think the bots can be beaten with the right approach. Understanding Precedes Victory In the moment when I truly understand my enemy, understand him well enough to defeat him, then in that very moment I also love him. — Ender’s Game There can be no convincing victory without understanding. Accordingly, I would like to divide this topic into two parts: Understanding OpenAI Five In this more academic part, I will focus on explaining the challenges in building a competent Dota AI, appreciating how OpenAI is able to pragmatically tackle these challenges, and explaining how their solution is not foolproof. Trolling OpenAI Five In this more pragmatic part, we will use the insights obtained in the first part to blueprint specific strategies that expose the weaknesses in the OpenAI Five bot, in the hope that on match day the humans use them to their advantage. Are you ready?! Let's get into it!!
Beating OpenAI At Dota By Trolling
5
beating-openai-at-dota-by-trolling-164a736d4dd8
2018-08-12
2018-08-12 23:54:33
https://medium.com/s/story/beating-openai-at-dota-by-trolling-164a736d4dd8
false
189
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Evan Pu
PhD student at Massachusetts Institute of Technology in Program Synthesis and Machine Learning
3a47ad091676
evanthebouncy
100
1
20,181,104
null
null
null
null
null
null
0
null
0
634d4b270054
2018-04-09
2018-04-09 09:55:58
2018-04-09
2018-04-09 09:56:45
1
false
en
2018-06-05
2018-06-05 09:02:22
3
164abb5083e6
1.041509
0
0
0
The Public Safety Department of Pittsburgh will be using two drones to help first responders reach where police and firefighters can’t. The…
5
Privacy Concerns Of Citizens Related To Drone Use The Public Safety Department of Pittsburgh will be using two drones to help first responders reach where police and firefighters can’t. The unmanned aerial vehicles (UAVs) will be used to fight fires, in search and rescue operations, and even in active-shooter situations. The UAVs, equipped with high-definition cameras and thermal imaging, raise some privacy concerns for citizens. “It’s much safer and much cheaper to use technology than put a police officer or firefighter in jeopardy,” said Wendell Hissrich, Director of the Public Safety Department. “Considering the public safety, it won’t be used as surveillance.” Many cities have deployed UAVs to search for lost hikers and to create 3D models of crime scenes using aerial pictures. Source: https://bit.ly/2IFNTmj About DEEPAERO DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain. DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain. DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be a one-stop shop for all products and services for drones. These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
Privacy Concerns Of Citizens Related To Drone Use
0
privacy-concerns-of-citizens-related-to-drone-use-164abb5083e6
2018-06-05
2018-06-05 09:02:23
https://medium.com/s/story/privacy-concerns-of-citizens-related-to-drone-use-164abb5083e6
false
223
AI Driven Drone Economy on the Blockchain
null
DeepAeroDrones
null
DEEPAERODRONES
null
deepaerodrones
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
DeepAeroDrones
Deepaero
deepaeros
Deepaero
0
DEEP AERO DRONES
null
dcef5da6c7fa
deepaerodrones
277
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-03
2018-07-03 02:37:00
2018-07-03
2018-07-03 06:53:29
6
false
en
2018-07-03
2018-07-03 10:55:35
0
164c2dc4d3d8
3.248113
1
0
0
Picture a bird. Completely red with black wings and a pointy beak. What do you see?
5
How AI got its imagination Picture a bird. Completely red with black wings and a pointy beak. What do you see? The above image was how an AI saw it based on the very same text you just read — almost like it’s capable of imagination. Imagination in AI: An Unimaginable Task Imagination seems intuitively simple to us. Pink Elephant Dancing on a Boat — close your eyes and there it is in your head. But training an AI to think like us can be frustratingly time consuming. The usual method to train a neural network (a computer system modelled after our nervous system) is shockingly labour intensive. Take, for instance, when Microsoft wanted to train a neural network to recognize 91 objects easily recognizable by a 4-year-old. They had to create a database with 328,000 images, each one painstakingly gathered and annotated by humans, resulting in a grand total of 2.5 million labels. The man hours needed just to make that happen — 70,000 hours. 70,000 hours just for a neural network to recognize simple objects. What if, like the red bird example, you wanted it to not only recognize but also create? Oobah vs Paris Fashion Week on Repeat Enter the Generative Adversarial Network (GAN), a new way to train neural networks, introduced by Ian Goodfellow in 2014. Instead of an army of humans creating an extensive database, GAN pits two neural networks against each other. To understand how GAN works, let’s take a look at Paris Fashion Week 2018. Oobah Butler, a writer based in the UK, faked his way into closed exhibitions and after parties. He mingled with the top names of the industry, got influencers to try a brand he picked up at a street market and even had them endorsing it. Oobah 1 : Fashion Week 0 This could be seen as a game of Oobah vs Fashion Week. Oobah’s goal is to dress, talk and walk like a fashion designer even though he isn’t one, while Fashion Week’s goal is to ensure that only actual fashion designers are allowed in. 
Imagine if Oobah and Fashion Week went up against each other a million more times. Each time Oobah infiltrates, Fashion Week tightens its security, and every time Oobah fails to pass himself off as a fashion designer, he learns how to better emulate them. We will then end up with a very sophisticated Oobah who can fool most people and a very strict Fashion Week that knows just what to look out for. GAN pits neural networks in the same game, where one neural network plays the Generator (Oobah) and the other plays the Discriminator (Fashion Week). Using the neural networks to train each other requires a much smaller database and less human supervision. This opened the doors to futures that were not previously possible. Pictured: Generator partying with Discriminators A Post-GAN World Just as GAN has created red birds based on text, it has many other applications in the field of AI where a spark of imagination is required. We now have AI that is capable of turning a blurry image into a high resolution one by making smart guesses on what the missing pixels should be. AI that has even created entire galleries of fake celebrities. And recently at Facebook — AI that could ‘open’ your eyes in photos where your eyes are closed. This is only the beginning of GAN’s applications in AI. With more methods currently being developed for applying adversarial networks, we can expect the coming decade to be an interesting one for AI. It is no wonder that Facebook’s AI research director Yann LeCun called GAN “the most interesting idea in the last 10 years of Machine Learning.”
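The alternating game above (infiltrate, tighten security, learn) can be sketched as a toy simulation. To be clear, this is an illustrative stand-in for real GAN training, not Goodfellow's actual algorithm: the skill scores, the 0.1 increments, and the coin-flip luck are all invented for the example.

```python
import random

def adversarial_rounds(rounds, seed=0):
    """Toy version of the GAN dynamic: a generator (Oobah) repeatedly
    tries to fool a discriminator (Fashion Week). The loser of each
    round improves a little, so both sides sharpen over time."""
    rng = random.Random(seed)
    gen_skill, disc_skill = 0.0, 0.0
    for _ in range(rounds):
        # The generator fools the discriminator when its skill,
        # plus a bit of luck, beats the discriminator's.
        fooled = gen_skill + rng.random() > disc_skill + rng.random()
        if fooled:
            disc_skill += 0.1  # Fashion Week tightens its security
        else:
            gen_skill += 0.1   # Oobah learns to emulate designers better
    return gen_skill, disc_skill

gen, disc = adversarial_rounds(10_000)
```

In a real GAN both players are neural networks and "improving" means taking a gradient step against the opponent's current behaviour, but the shape of the loop — two models trained against each other, with no human-labelled database — is the same.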
How AI got its imagination
10
how-ai-got-its-imagination-164c2dc4d3d8
2018-07-03
2018-07-03 10:55:35
https://medium.com/s/story/how-ai-got-its-imagination-164c2dc4d3d8
false
609
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Dave Tai
null
38735a6c3702
DaveTaiWrites
79
81
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-21
2018-09-21 06:39:29
2018-09-21
2018-09-21 06:49:30
1
false
en
2018-09-21
2018-09-21 07:05:00
1
164d75405ef9
0.54717
0
0
0
https://goo.gl/uu4CFK
3
Data Science Training In Bangalore At ExcelR Solutions, we provide best-in-class trainings across the Agile, Project Management, IT Service Management & Quality Assurance spaces. Our trainers have worked with world-renowned MNCs and are committed to raising your excellence levels, thereby accelerating your careers! We offer trainings on (PMP)®, (PMI-ACP)®, (CAPM)®, (PMI-RMP)®, ITIL®, ITIL Foundation, ITIL Intermediate, ITIL Expert, Six Sigma Green Belt, Six Sigma Black Belt, Microsoft Project, Software Metrics, Minitab, Agile, Scrum, CMMI, ISO 9001, ISO 27001, ISO 20000. For more information, feel free to visit https://www.excelr.com/business-analytics-training-in-bangalore/
Data Science Training In Bangalore
0
data-science-training-in-bangalore-164d75405ef9
2018-09-21
2018-09-21 07:05:00
https://medium.com/s/story/data-science-training-in-bangalore-164d75405ef9
false
92
null
null
null
null
null
null
null
null
null
Data Science Bangalore
bangalore-data-science
Data Science Bangalore
0
ExcelR Solutions
null
1bfef2965e51
2018saipriya
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-17
2018-04-17 17:25:23
2018-04-17
2018-04-17 17:27:29
1
false
en
2018-04-17
2018-04-17 17:27:29
4
164e4f15e96e
1.479245
0
0
0
On the leading edge of artificial intelligence is deep learning, which could unlock the mystery of how our genes encode human life.
5
Going Deep Into the Human Genome On the leading edge of artificial intelligence is deep learning, which could unlock the mystery of how our genes encode human life. Some of the most complex science mysteries we’re working on today may not be cracked by a human mind. Brendan Frey, CIFAR senior fellow and professor of computer engineering at the University of Toronto, is working on a new kind of artificial intelligence (AI) called deep learning. Old AI systems relied on logic (for example, if-then statements) to make decisions. These all needed to be programmed in. By contrast, deep learning immerses a computer system in data, and lets the computer itself look for patterns. Modern data sets can be massive, and complex interactions can be difficult to interpret and understand. With enough computing power, deep learning could detect useful patterns that unlock what’s driving complicated systems. This is a growing field of research in Canada. Frey is a co-founder of the non-profit Vector Institute for Artificial Intelligence, a one-of-a-kind institute that will bring together leading AI researchers, acting as a hub and accelerator for startup companies. Frey is interested in applying deep learning to look for patterns in life itself, probing genetics with an interdisciplinary group he founded called Deep Genomics. “My group at Deep Genomics is putting together a system, an AI system, that really is allowing us to peer at your DNA, look at your mutations and figure out what’s wrong and how to treat the disease,” says Frey. While we now have the technology to rapidly sequence the genome, what comes next remains mysterious; how the genome translates into the expression of biomolecules is not well understood. Frey calls this the genotype-phenotype gap, and closing that gap is needed to understand how genes encode life. 
“We’re actually developing new therapies at Deep Genomics,” says Frey, “and that’s what I’m most excited about.” Originally published on Research2Reality.com. Keep up to date with advances in Canadian science by subscribing to our newsletter!
Going Deep Into the Human Genome
0
going-deep-into-the-human-genome-164e4f15e96e
2018-04-17
2018-04-17 17:27:30
https://medium.com/s/story/going-deep-into-the-human-genome-164e4f15e96e
false
339
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Research2Reality
Shining a Light on Research and Innovation
81b8807da4cc
R2Rnow
36
35
20,181,104
null
null
null
null
null
null
0
null
0
ec7e990e25b7
2017-12-03
2017-12-03 10:21:02
2017-12-03
2017-12-03 10:56:39
2
false
en
2017-12-16
2017-12-16 15:57:37
3
164ed872a050
6.824843
4
0
0
Machine learning engineering, the environment, and rock climbing
5
Team Profile: Meet Suzin. Read In Korean At our still-minuscule startup, each member holds significant power over the overall atmosphere of our team at XBrain. And in our ultimate quest to make big waves in the data world, we need to make sure that the people at the helm are at least kind of cool. We think we’ve done a pretty good job so far in assembling a society of unique but equally driven members, and we’d like for you to get to know them better. So we bring you this seven-part series, each installment devoted to an in-depth interview with one of our members, to give you a glimpse into the people responsible for bringing you the future of machine learning with Daria. Plus, we peppered the interviews with questions from Dr. Aron’s “The 36 Questions that Lead to Love”*, cherry-picked to be work appropriate and concise, but interesting. (*actually falling in love with our members highly discouraged) Suzin joined XBrain in May 2017 as our first machine learning engineer. She holds a degree in computer science and mathematics, and does cool stuff like making her own sour cream and rock climbing. Suzin loves the finer things in life, premium teas and hipster restaurants galore, but she’s also a passionate advocate for social justice causes. She is the in-house environmental superhero at XBrain, on a personal mission to better the world one handkerchief and reusable coffee mug at a time. Our unofficial union leader, as well as final decision maker on all things culinary, Suzin is an integral part of the XBrain community — read on to find out why! Suzin knows how to stop and smell the roses (and sip the tea) Hi Suzin! Tell us about your role at XBrain. Suzin (SZ) : “I work as XBrain’s machine learning engineer, helping to maintain and develop the engine that runs and automates our technology. I also participate in research that might potentially help our project.” What does a typical work day look like for you? 
SZ: “10:30 AM is when I always intend to come into work, but I usually end up coming in at around 11 (haha). I make a cup of coffee and write a to-do list for the day. Then we have lunch at about noon, followed by a daily scrum meeting with the other engineers. Then I work some more — I spent a lot of time today trying to debug an error that YH and I found on a test we were running, which turned out not to be an error at all. I don’t really consider that a waste of time, though, since the very fact that we thought it was an error indicates that we didn’t really understand the code. I also participated in a paper review seminar at the Korea Advanced Institute of Science and Technology (KAIST). Then I maybe have a stretch (we tried to make this a regular office thing and failed), followed by dinner at 6 PM. After that I might work for another couple hours, and go home.” What are the parts of your job that you enjoy the most? SZ: “Definitely the constant learning opportunities that the job presents, and the fact that XBrain supports most academic ventures that might benefit the company in the long run, even though they may not bring immediate results.” What about the tasks that you least enjoy or find challenging? SZ: “It’s a very important part of my job, but also the most difficult — customer response. Problems with Daria are always unexpected and pop up in the most unexpected places, so it can get pretty stressful. But it is beneficial since we get insight into Daria’s weak spots.” Can you pick one item on your desk that says something about you? SZ: “There’s a gum wrapper that JY gave me taped to my monitor, with “You’re doing great! Keep up the good work :)” printed on it. It’s a bit cheesy, but it serves as a reminder not to get too wrapped up in my own head when something’s not going as planned, as I am wont to do.” if you look closely, you can see said gum wrapper on her monitor You almost always have your headphones on. 
What is your favorite work playlist? SZ: “I use Spotify’s Release Radar playlist, which gets updated every week. Right now it’s all about the soundtrack from the cartoon “Over the Garden Wall”. I also kind of really enjoy depressing ‘downer’ music…” Can you talk a bit about what made you want to go into computer science in the first place? SZ: “I originally wanted to study mathematics in college, because I figured it would be the only time I could pursue something so purely academic. Then I took some electives in computer science, and realized that most aspects of CS require the type of critical thinking that mathematics does. It was of course very very challenging, but I don’t regret it.” So why XBrain? SZ: I was first drawn to XBrain because I liked the challenge of automating machine learning. And I was asked during my interview not just about my work but also very in-depth about my involvement with organizations for food security and sustainability, which gave me the impression that they really wanted to get to know me as a person. And the people! There’s such a strong sense of mutual trust and respect, and everyone is driven and resourceful, and mostly quite funny. If you had to pick one XBrain member for a dinner date, who would it be? SZ: I’d like to have dinner with EK! Before she leaves next month. What causes are you particularly passionate about? SZ: “I think it’s all about maintaining a sense of empathy for the vulnerable parts of society. I care deeply about protecting the environment, and when I lived in Canada, I was interested in the issue of race relations. Lately I’ve been thinking a lot about the elderly population in Korea, and the societal measures we could take to ensure their well-being.” You are the first female member and the only female engineer on the team. As such, do you think you have a different take on the XBrain experience? 
SZ: “Definitely — not just because I work at XBrain, but because there are fundamental differences that we all face as women. Like in college, as a woman of color in STEM, I often felt that my shortcomings could be seen as reflective of other women and people of color in STEM. Luckily, XBrain does a good job of giving me faith that if something were to make me uncomfortable as a woman in a predominantly male workplace, I could speak up at any moment and receive support.” What kind of vision do you have for the XBrain team? SZ: Perfecting Daria for our users, and perhaps conducting research that would be widely helpful for Daria and for the industry. If you had to sum up the essence of XBrain? SZ: Powerhouses and dad jokes! What film would you recommend for our next XBrain Cinema Society? SZ: Tim Burton’s Big Fish…I love the fantasy storyline, and really enjoyed it as a kid. (It also goes perfectly with our tagline!) Given the choice of anyone in the world, whom would you want as a dinner guest? SZ: “I have reservations about anyone intimidating or famous, because I feel like they’d think I was making a fool of myself, so probably my family — my brother in particular, because he’s abroad and we only get to see each other a couple of weeks a year, and we always have a good time.” What would constitute a “perfect” day for you? SZ: “I would wake up without an alarm, have a healthy breakfast — something rich in protein, with lots of fresh fruit and vegetables — and a cup of coffee, read a book, do some chores around the house with a podcast on. I’m currently listening to “Two Dope Queens”. I’d go to bed before midnight, with a good book.” If a crystal ball could tell you the truth about yourself, your life, the future or anything else, what would you want to know? SZ: “I want to find out about the fate of humanity! Will we continue to live on as we did, or will some breed of cyborg hybrid emerge in the future? 
How will the human race evolve?” What is the greatest accomplishment of your life? SZ: “My friends. I’m very proud to say that I have seldom experienced pain from interpersonal relationships, and I think to have such people around you and consider you a friend is a fine accomplishment.” What do you value most in a friendship? SZ: “A shared sense of humor and mutual respect”. What is your most treasured memory? SZ: “It’s hard to pinpoint one, but my most valuable memories are going about my daily tasks with people that I love — studying at the library and visiting coffee shops with my college roommate, having meals with my family, etc.” If you knew that in one year you would die suddenly, would you change anything about the way you are now living? Why? SZ: “I’d read more, probably, and try to learn about the things that I’ve been curious about. I would give a lot of love back to the people that have taken care of me. I’d have to tell XBrain, of course, about how I’m going to die in a year, so they can take appropriate measures” Your house, containing everything you own, catches fire. After saving your loved ones and pets, you have time to safely make a final dash to save any one item. What would it be? Why? SZ: “Pragmatically speaking, my laptop, since it’s the most expensive thing I own. A more sentimental answer would be the mug that my college roommate gave to me as a gift.” If you could wake up tomorrow having gained any one quality or ability, what would it be? SZ: “I’ve always wished I could whistle better. Also, a sense of unshakeable self confidence would be nice, although it’s my personal belief that I need to fulfill certain prerequisites before I could reach that state of mind.”
Team Profile: Meet Suzin.
76
team-profile-meet-suzin-164ed872a050
2017-12-26
2017-12-26 05:09:56
https://medium.com/s/story/team-profile-meet-suzin-164ed872a050
false
1,707
Daria, your partner in machine learning greatness.
null
xbrain.team
null
Daria
info@xbrain.team
daria-blog
DARIA,MACHINE LEARNING,DATA SCIENCE,SAAS,STARTUP
null
Tech
tech
Tech
142,368
Eunsoo Kim (@XBrain)
null
1fc164f244b6
eunsoo.kim
5
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-25
2018-03-25 01:05:52
2018-03-25
2018-03-25 01:10:32
3
false
en
2018-04-28
2018-04-28 10:49:24
2
164edd11a0ba
2.233019
1
0
0
In this Data Science Recipe, the reader will find
3
Multiclass Classification using Boosting Ensembles in R: An End-to-End Data Science Recipe — 020 In this Data Science Recipe, the reader will find a) How to organise a Predictive Modelling Machine Learning project. b) What are the different steps in Predictive Modelling and Applied Machine Learning. c) How to summarise and present feature variables in Predictive Modelling (Descriptive statistics). d) How to visualise features through histogram, density plot, box plot and scatter matrix. e) How to find correlations among feature variables. f) How to visualise target variables. g) How to do data analysis for feature and target variables. h) How to utilise the caret package in R. i) How to implement a Tree Based Boosting Ensemble for a Multiclass Classification Algorithm in R. j) How to tune parameters: manual tuning and automatic tuning in R. k) How to compare Algorithms with Accuracy and Kappa using the caret package in R. l) How to implement an end-to-end Data Science Project using MySQL and R. To learn more, visit https://setscholars.com/DataScience A gentle introduction to IRIS Flower Classification using Tree based Boosting Ensembles in R In this data science recipe, IRIS Flower data is used to present an end-to-end applied machine learning and data science recipe in R. IRIS data is freely available from the UCI machine learning repository [1]. The goal of this exercise is to correctly classify each flower given its attributes. IRIS is a small and well understood dataset for classification problems. Here the author presents a predictive modelling machine learning recipe for this classification project using different Boosting machine learning algorithms and methods available in the e1071 and caret packages, including parameter tuning: manual and automatic. To learn more, visit https://setscholars.com/DataScience The project is divided into several small sections. These are: 1. Loading necessary libraries 2. 
Load the Dataset, either from a CSV file or from a MySQL Table 3. Summarisation of Data to understand the Dataset (Descriptive Statistics) 4. Visualisation of Data to understand the Dataset (Plots, Graphs etc.) a. USING Histogram b. USING Histogram with density graph c. USING Box Plot d. USING Scatter Plot e. USING Correlation Diagram 5. Data pre-processing & Data transformation (split into train-test datasets) 6. Application of a Machine Learning Algorithm to the training dataset a. set up an ML algorithm and parameter settings b. cross-validation setup with the training dataset c. training & fitting the Algorithm with the training Dataset d. evaluation of the trained Algorithm (or Model) and results e. saving the trained model for future prediction 7. Load the saved model and apply it to a new dataset for prediction https://setscholars.com/dd-product/multiclass-classification-using-boosting-ensembles-r-end-end-data-science-recipe/
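The recipe's core steps (train-test split, fitting a tree-based boosting ensemble, cross-validated evaluation, and saving and reloading the trained model) can be sketched as follows. The article itself works in R with the caret package; this is a hypothetical scikit-learn analogue in Python, not the author's actual code:

```python
# A hypothetical scikit-learn analogue (in Python) of the recipe's caret
# workflow in R: load IRIS, split into train-test sets, cross-validate and
# fit a tree-based boosting ensemble, evaluate it, then save and reload it.
import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=7)

model = GradientBoostingClassifier(n_estimators=100, random_state=7)
print("10-fold CV accuracy:", cross_val_score(model, X_train, y_train, cv=10).mean())

model.fit(X_train, y_train)
print("Held-out test accuracy:", model.score(X_test, y_test))

# Save the trained model for future prediction (steps 6e and 7 of the recipe);
# in practice you would pickle.dump() to a file rather than to a bytes object.
blob = pickle.dumps(model)
loaded = pickle.loads(blob)
print("Predictions on new data:", loaded.predict(X_test[:3]))
```

In caret the equivalent moves are `trainControl` for the cross-validation setup, `train` for fitting, and `saveRDS`/`readRDS` for persisting the model.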
Multiclass Classification using Boosting Ensembles in R: An End-to-End Data Science Recipe — 020
1
multiclass-classification-using-boosting-ensembles-in-r-an-end-to-end-data-science-recipe-020-164edd11a0ba
2018-04-28
2018-04-28 10:49:25
https://medium.com/s/story/multiclass-classification-using-boosting-ensembles-in-r-an-end-to-end-data-science-recipe-020-164edd11a0ba
false
446
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Nilimesh Halder
null
88be7c24b7fd
nilimeshhalder
51
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-08
2018-04-08 18:16:05
2018-04-08
2018-04-08 18:31:58
0
false
en
2018-04-26
2018-04-26 07:50:04
0
164ff4d6fe26
3.143396
0
0
0
The concept of multitasking is not new to us, especially in this fast-paced world. In fact, this is something that is so innate to us, that…
5
Combining Gestures and Speech for High Productivity in Multitasking Environments The concept of multitasking is not new to us, especially in this fast-paced world. In fact, it is something so innate to us that we just naturally keep switching between tasks, involved in both the physical and virtual world. Viewing Instagram or doodling while a lecture is going on, scanning Safari loaded with relevant webpages, Word open for taking notes, all while having one eye and your whole heart focused on your crush. Although that last bit might be the reason for my staying up late to complete this article, it goes on to affirm that we humans are naturals at multitasking. But what happens when one is put into a slightly more stressful environment, at least more so than just idly sitting in class; perhaps in a kitchen? A lot of other factors now come into play, such as keeping an eye on the various vessels that are always just a second away from burning the dish you have painstakingly made, or the amount of walking one has to do in the kitchen between getting items from the fridge to the stove into the microwave and onto the counter. Keeping tabs on all that is happening, and coming out triumphant with dishes that are even just edible, is enough reason to nominate one for your country’s highest honour. Enter digital voice-based assistants. These melodious but slightly robotic-sounding voices, embodied by small cylindrical pieces of metal with some mesh wire, have become man’s companion in the war zone that we call the kitchen. The Google Assistant whips out some fancy recipes, for you have only one chance at impressing your crush, and Alexa keeps tabs on how much each dish has cooked. This lack of burden now allows you to flirt with Siri while you update yourself on what your friends have been up to via Facebook, until Alexa reminds you to turn the gas down to ensure your pasta remains more than just an attempt.
We find ourselves, as seen in the previous example, increasingly moving away from traditional input devices like the mouse and, to some extent, even touch, entering a brave new era of contactless interfaces, heralded by impeccable advances in voice-based technology. With the onset of IoT, we will find ourselves communicating with our refrigerators, microwaves and toasters, and probably even interacting with kitchen counters. Although the current interface for this interaction is a GUI controlled by touch, we are, albeit slowly but surely, making progress towards a voice-based future. But, at times, we find speech alone to be rather limiting. As a species, we have evolved to use our bodies to communicate whatever it is we are saying. These gestures, whilst appearing insignificant, make up a large percentage of what we perceive of the person talking to us. This clearly means that the brain is able to collect a lot more data about what a person is saying through his/her body language than through their voice alone. Therefore, it only makes sense that we use gestures along with voice to communicate with the devices of the future, especially in environments that require multitasking. Using tools like Leap Motion to track hand movements, we can remotely access devices and use them without the need for any complex interface in between. We can control devices through simple intuitive gestures, requiring almost zero learning. This will help bridge the gap between computers and humans by ensuring computers recede into the background, and our environment becomes more humane. There are multiple use cases for such an interaction. Let’s say you want the flame on the stove to increase: instead of asking Alexa to turn the stove up to 60%, we could just ask her to turn the stove up by “this much” and gesture turning a knob to indicate how much we’d want the flame to go up by.
This, coupled with a visual indicator, would ensure we can continue going about our tasks without the increased stress levels that come with having to leave a task incomplete to tend to something like this, and it could be used by elderly independent people for whom moving about a lot is detrimental to health. It could also be used in hospitals by nurses who have to walk a fair bit around a room to ensure all the equipment is delivering the right dosage to each patient. It could be used in a car to control the dashboard equipment without having to lift one’s hands off the steering wheel. All this ensures continuity in our daily lives without being perturbed by cumbersome interfaces. This will be the first step in the evolution towards a screenless future, towards a future where rectangular slabs don’t govern our day-to-day activities, towards a future where computers don’t interfere, but instead improve and help us connect with the world around us.
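The "turn the stove up by this much" interaction described above can be sketched as a simple fusion of a recognised voice intent with a gesture magnitude. Everything here (the intent names, the degrees-per-step mapping, the helper function) is an invented illustration, not a real assistant or Leap Motion API:

```python
# Hypothetical sketch of fusing a voice intent with a gesture magnitude.
# KNOB_DEGREES_PER_STEP and the intent strings are invented for illustration.
KNOB_DEGREES_PER_STEP = 30  # one burner step per 30 degrees of knob rotation

def apply_gesture_intent(intent, gesture_degrees, current_level, max_level=10):
    """Combine a spoken intent ('stove_up' / 'stove_down') with how far the
    hand rotated, as a Leap-Motion-style hand tracker might report it."""
    steps = round(abs(gesture_degrees) / KNOB_DEGREES_PER_STEP)
    if intent == "stove_up":
        return min(current_level + steps, max_level)
    if intent == "stove_down":
        return max(current_level - steps, 0)
    return current_level

# "Turn the stove up by this much" plus a 90-degree knob-turn gesture:
print(apply_gesture_intent("stove_up", 90, current_level=4))  # → 7
```

The point of the sketch is the division of labour: speech carries the discrete intent, while the gesture supplies the continuous "how much" that is awkward to say out loud.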
Combining Gestures and Speech for High Productivity in Multitasking Environments
0
combining-gestures-and-speech-for-high-productivity-in-multitasking-environments-164ff4d6fe26
2018-04-26
2018-04-26 07:50:05
https://medium.com/s/story/combining-gestures-and-speech-for-high-productivity-in-multitasking-environments-164ff4d6fe26
false
833
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Avyay Kashyap
null
67930be4fd9c
avyay
8
22
20,181,104
null
null
null
null
null
null
0
FileNotFoundError: Game not found: SonicTheHedgehog-Genesis. Did you make sure to import the ROM? $ python3 -m retro.import ./roms Importing 3 potential games...
2
null
2018-04-06
2018-04-06 15:01:20
2018-04-06
2018-04-06 17:37:54
1
false
en
2018-06-10
2018-06-10 00:10:03
16
1651ddcd6aa5
2.750943
13
2
0
If you are a robot overlord reading this, know that I at least tried.
5
Day one of the OpenAI Retro Contest. If you are a robot overlord reading this, know that I at least tried. Yesterday I was reading about the OpenAI Retro Contest, where the ultimate goal is to get my computer to play Sonic the Hedgehog by itself. I thought it would be the perfect opportunity to learn more about a neat subject, and luckily my friend Ben said he wanted to enter as well, and with that, team Bobcats was born. We both have some introductory ML/AI knowledge, but have never worked on something like this before. I also never use Python, so every step will probably also be its own journey. After Bobcats 🐯 was officially registered, I started off by cloning the retro repo and trying to get everything set up. Since I’m on a Mac and installed all my pythons🐍 with Homebrew, I’m using python3 and pip3 for my commands instead of running the default installed version that came with my computer. I feel like this will bite me in the near-medium future. After it looked like I got the couple of prerequisites installed, I tried to run the example included with the repo. I felt like I was doing everything right, but I kept getting an error about not being able to find the game. I had found a Sonic rom in my couch and was trying a dozen different ways to specify it, to no avail. Only after giving up did I realize that I had stopped reading the actual instructions at Step 0. When I ran the other example that uses a rom already installed with the package, I was shown a very fast Galaga-type game that the player seemed to do very badly at, which was some success! Keep it up little buddy I still wanted to get Sonic to work. I kept trying to import my roms, but it didn’t seem to work when I tried running the Sonic example, and there wasn’t really any feedback about the roms getting imported, so I had to dive into the import code to see what hidden error was causing me grief.
The import script looked like a pretty thin wrapper for reading the files in the directory, and then passing those files to retro.data.merge(). Importing seemed to go great, but the merging seemed to be the issue. This code looks at the files you are importing and compares them to a list of known shas that come shipped with the gym. The rom that I found in my couch was kind of dusty, so I tried looking around for more roms that might have the right sha. I tried three more, but each one seemed to have the same issue of not getting imported. Next I tried to skip the sha verification step and manually place the rom into the right place. That got me a brief flash of what could have been a Sonic level, and then a big screen of purple. But no matter which roms I tried, or states I started with, I wasn’t able to get a level to load. That felt like plenty of progress for one day, so I tried to whip up a quick PR to make importing more verbose for the retro gym, and signed off for the night. Thanks for reading! You might be interested in the rest of this series: Day 1: Getting the Basics Set Up Day 3: Running the Jerk Agent Days 4 & 5: Getting TensorFlow & Docker to work on my MacBook Day 6: Playback Tooling for .bk2 files Days 9 & 10: Failing with the Rainbow DQN baseline code. Days 11–14: Reading the PPO2 code Days 16–18: Running the PPO2 baseline code, and failing at TensorFlow & Docker optimization. Days 22–25: A Deep Dive into the Jerk Agent Days 26–29: Visualizing batches of sonic runs Days 38–53: Discovering Q-Learning My final submission: the improved JERK agent
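The sha comparison described above can be sketched like this. To be clear: the hash in KNOWN_SHAS is a placeholder rather than Gym Retro's real hash list, and try_import is a made-up helper, not the actual retro.data.merge() — it just shows the kind of verbose feedback the import script was silently swallowing:

```python
# Minimal sketch of verifying a ROM's SHA-1 against a list of known-good
# hashes before "importing" it. KNOWN_SHAS holds a placeholder value, not
# Gym Retro's real hash list, and try_import() is an invented helper.
import hashlib
import tempfile

KNOWN_SHAS = {
    "0123456789abcdef0123456789abcdef01234567": "SonicTheHedgehog-Genesis",
}

def sha1_of(path):
    """Hash a file in chunks so large ROMs don't have to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def try_import(path):
    """Report exactly why a ROM was or wasn't accepted."""
    digest = sha1_of(path)
    game = KNOWN_SHAS.get(digest)
    if game is None:
        print(f"{path}: unknown sha {digest}, skipping")
        return None
    print(f"{path}: imported as {game}")
    return game

# Demo with a throwaway file standing in for a dusty couch ROM:
with tempfile.NamedTemporaryFile(suffix=".md", delete=False) as f:
    f.write(b"not a real rom")
    rom_path = f.name
result = try_import(rom_path)
```

A dusty ROM dump with even one byte changed produces a completely different digest, which is why every couch ROM fails the check.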
Day one of the OpenAI Retro Contest.
24
day-one-of-the-openai-retro-contest-1651ddcd6aa5
2018-06-10
2018-06-10 00:10:04
https://medium.com/s/story/day-one-of-the-openai-retro-contest-1651ddcd6aa5
false
676
null
null
null
null
null
null
null
null
null
Openai Retro Contest
openai-retro-contest
Openai Retro Contest
8
Tristan Sokol
Developer Evangelist for Square. When I’m not helping build a commerce platform, I’m growing succulents in my back yard. https://tristansokol.com/
fb504b57e2ab
tristansokol
245
0
20,181,104
null
null
null
null
null
null
0
null
0
f0f49b4c0903
2018-05-04
2018-05-04 12:33:26
2018-05-13
2018-05-13 22:57:44
1
false
en
2018-05-13
2018-05-13 22:57:44
1
165395f154de
7.777358
3
0
0
You know that Netflix can keep you watching for hours by capturing your preferences and cleverly updating your personal recommendations…
4
Recommender Systems: a math-less, code-less guide for the curious You know that Netflix can keep you watching for hours by capturing your preferences and cleverly updating your personal recommendations. And, more than once, you’ve been upsold on Amazon, because you actually liked the look of the items that other customers bought. The purpose, and ROI, of building systems that keep customers happy and engaged is obvious and immediate. And that’s certainly not something that can be said of every machine learning project. But where do you start? Table of Contents The Three Approaches Solving the Ranking Problem Solving the Similarity Problem Solving the Collaborative Filtering Problem Adding Learning Implementing a Recommender System The Three Approaches Thinking about the ways you’ve been given recommendations in real life can help explain the three most common ways that recommendations can be ‘generated’ and the distinct scenarios in which they are useful. The three scenarios are as follows: You’re actively looking for something, but have limited experience with that category of thing. Maybe you’re looking for construction companies that specialise in conservatories or a wedding photographer for your big day. You have experience with a product or category of products and are looking for similar things. For example, if you’re a jazz enthusiast and are looking to buy albums featuring your favourite players. You’re not looking for anything, but something that you’re very likely to enjoy is released. Like when a friend, with whom you share a specific taste, sees a great movie and implores you to see it. So, what are the parallels of these scenarios in your product or business? A customer is browsing, maybe aimlessly. Perhaps they’re brand new to your site or they’re an existing customer looking to broaden their horizons and catch up with what’s new. These people are typically visiting the home pages of a store, or specific product category pages. 
A customer with a history of purchases, views, likes, favourites (and whatever else) on your site has plateaued in their engagement and you’d like to show them things they might like. Your business doesn’t measure engagement by eyeball-minutes, but you do need to let customers know when something they may be interested in is released or changed, so they make a return visit. The things that matter (the features, in ML-speak) in each of these scenarios are also distinct. In the first, the features are all centred around how (objectively) good a thing is. How many conservatories has that company built? What do other people think about this photographer? This is an example of a ranking problem. In the second scenario, the features that matter are the similarities between one thing and another. Roy Hargrove played on both of these records. These arrangements were both done by Gil Evans. The problem here is how to define similarity. In the third scenario, all of the features that matter are about how similar one person is to another. If you think someone’s taste in movies is trash, any proclamations from them about the Best Movie They’ve Ever Seen will likely fall on deaf ears. This is a collaborative filtering problem, which is a fancy way of saying that it’s a similarity problem but with people involved. Solving the ranking problem. Ranking problems are super common. And one particular algorithm is responsible for a world-wide change in how people access information (PageRank). It’s arguable that ranking, in and of itself, doesn’t need machine learning. And that’s probably true in most cases. But I know that I still get frustrated by the seemingly nonsensical ordering of star-ratings on a lot of websites. So maybe a more intelligent approach can be helpful. Anyone who’s ever tried to build a ranking system knows that deciding on the right balance between, for example, average star ratings and the quantity of those ratings, is more difficult than it initially seems.
And this is with only two factors! What do you do when you want to rank products by rating, purchases, favourites, views and maybe the semantics of their reviews? The answer, of course, will depend on your specific needs, but I’ve had success using the IMDB weighted ranking formula. In its rawest form, it provides a nice balance between the average rating of an item and how many ratings it has. But it can be extended or expanded to deal with the age of a rating (for example), and multiple weighted rankings can be combined using coefficient multipliers to account for how much you’d like to favour one metric over another. Many clients of mine have been pleased with the results of implementing this kind of formula, as IMDB have done a great job of coming up with an equation that ‘feels’ right when you see the final rankings. Solving the similarity problem. Just how similar are apples and oranges anyway? If all we had to go on was fruit, we’d probably say that they were pretty different (oranges are closer to lemons, limes and grapefruits, while apples are closer to … erm … pears?) On the other hand, if we were comparing them to all foods, it’s pretty obvious they’re in the same ballpark; they’re both sweet, grow on trees etc. What’s important here are the features of a thing and the domain that it’s in. But how do you quantify the various features of a product? If you run an apparel store, you might think that one obvious feature of an item in your store is its category, for example whether it’s a hat or a t-shirt. This would certainly be the right kind of idea if you know that most customers are making purchase decisions by functionality rather than for more abstract fashion reasons. And couldn’t you just suggest the top products (by sales, or margin) within those categories to the customer using the weighted ranking from above? Well, yes you can!
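For reference, the IMDB weighted-rating formula mentioned above balances an item's own mean rating against the site-wide mean, weighted by how many votes the item has; the example numbers below are invented:

```python
# The IMDB weighted-rating formula:
#   WR = (v / (v + m)) * R + (m / (v + m)) * C
# where R = the item's mean rating, v = its number of ratings,
# m = a minimum-votes threshold you choose, and C = the mean rating
# across the whole catalogue.
def weighted_rating(R, v, m, C):
    return (v / (v + m)) * R + (m / (v + m)) * C

# An item with few ratings is pulled toward the site-wide average (C = 3.0),
# while a well-rated item with many ratings keeps most of its score:
print(weighted_rating(R=5.0, v=3, m=25, C=3.0))    # few votes: roughly 3.21
print(weighted_rating(R=5.0, v=500, m=25, C=3.0))  # many votes: roughly 4.90
```

This is exactly the "right balance between average star ratings and the quantity of those ratings": m acts as the knob that decides how many votes an item needs before its own average dominates.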
But what if the reason that a user clicked on that product was because of its colour swatch, its brand or the fact that it was in a sale? Or maybe a combination of all of these? Maybe it had an enticing written description that contained lots of keywords related to the user’s initial search term. What if the customer is really familiar with your store, has seen your top products over and over again, and even already owns a few of them? It’s clear that in these cases, an ordered list based on weighted ratings wouldn’t get you optimal results. So what can you do instead? Using feature engineering techniques like one-hot encoding, tf-idf, and word embeddings can help represent a product as a series of numbers (a vector) that you can use to compare one product to another mathematically. Basically, the goal is to construct a dataset that has one row per product, and has columns that are numerical values representing the various attributes of the product. Once you have this, you can use the cosine distance to find the angle between two ‘products’ as if they were lines in space. Simply calculating this distance for each product with every other and storing the ranked positions in a table will mean that you can quickly serve excellent similarity recommendations to your customers. (P.S. if you think that comparing the cosine distance between each product could take a long time, you’re right. But luckily for us the kernel methods package that’s part of the Scikit-Learn library contains a really efficient algorithm for getting it done quickly.) Solving the collaborative filtering problem. This part of the article is thankfully short, as we’ve already discussed all the methods needed to implement collaborative filtering! There is some nuance though. But before getting to that, let’s review the steps necessary to implement a recommendation system that matches one user to another.
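A toy version of the product-similarity step might look like this, using Scikit-Learn's pairwise cosine_similarity; the one-hot product features (category, colour, sale status) are invented for illustration:

```python
# Sketch of the one-row-per-product feature matrix and the pairwise
# cosine-similarity step. The features here are made up for illustration.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# rows: products; columns: [is_hat, is_tshirt, is_red, is_blue, on_sale]
products = np.array([
    [1, 0, 1, 0, 1],  # product 0: red hat, on sale
    [1, 0, 0, 1, 1],  # product 1: blue hat, on sale
    [0, 1, 1, 0, 0],  # product 2: red t-shirt
])

sims = cosine_similarity(products)

# For each product, rank the others by similarity (dropping the product
# itself, which is always its own best match) and store the result.
ranked = np.argsort(-sims, axis=1)[:, 1:]
print(ranked[0])  # products most similar to the red hat → [1 2]
```

Precomputing the `ranked` table is what lets you serve similarity recommendations instantly at request time instead of comparing vectors on the fly.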
Feature engineering — you know that customers behave differently from one another, and that they have different innate attributes (like age, location, taste and so on.) So now you just have to model each customer the way you did with products when you solved the similarity problem. Don’t worry if you don’t store personal information about your customers; you can get creative and think of ways to combine their purchase history, their viewing history and any other information you have about what they do on your site into meaningful features. Cosine similarity — just like before, once every customer has been successfully abstracted into a vector of numbers, you can compute the cosine similarity for each of them with every other customer and store them somewhere. Here’s where the nuance comes in. It’s no good simply storing a list of similar users. You’re not trying to sell users, you’re trying to sell products. Once you’ve found the most similar users, you should probably do something along the lines of: get the best matching user’s purchase history, remove any products that the target user (the person you’re upselling to) has already bought, and then sort them somehow. (Somehow!? That was the first thing we did!) And there we have the collaborative filtering problem solved! Adding Learning Clever techniques for ranking and matching do not a machine learning system make. What if products and users are similar or dissimilar in a way your very creative feature engineering process didn’t account for? What if you’re not capturing the really crucial information about your users that would let you neatly segment them to boost your sales? There are probably as many answers to this as there are people who’ve implemented recommender systems. And they’re probably all good answers. From recursive feature elimination to backtesting your solutions with different random subsets of features.
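The collaborative-filtering steps just listed can be sketched end to end with invented data: build user vectors, find the most similar user by cosine similarity, then recommend that user's purchases the target hasn't already bought:

```python
# Sketch of the collaborative-filtering pipeline with made-up data:
# user feature vectors -> cosine similarity -> recommend the best match's
# purchases that the target user doesn't already own.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# rows: users; columns: engineered behaviour features (invented here)
users = np.array([
    [5.0, 1.0, 0.0],  # user 0: the target
    [4.0, 1.0, 0.5],  # user 1: similar taste
    [0.0, 0.0, 9.0],  # user 2: very different taste
])
purchases = {0: {"A", "B"}, 1: {"A", "B", "C"}, 2: {"X"}}

target = 0
sims = [(cosine(users[target], users[u]), u) for u in purchases if u != target]
best = max(sims)[1]  # the most similar other user

# Sell products, not users: take the best match's history and drop
# anything the target has already bought.
recommendations = purchases[best] - purchases[target]
print(recommendations)  # → {'C'}
```

From here the leftover products would be sorted with the weighted-ranking step, which is why the article jokes that sorting "somehow" was the first thing it solved.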
My personal favourite (and the simplest) is to keep track of how well each recommendation source performs and penalise it when it doesn’t convert the way you’d expect. What this means is: if your initial feature engineering was no good, the recommendations will likely take some time to come up to scratch (though hopefully a lot quicker than if you started with random recommendations!) However, if your head, heart and gut were all in the right place when you were designing your features, you’ll quickly get a system that knows what to recommend based on both the metrics that seemed important to you, a squishy human, augmented by real-life examples taken into account by the computer. Both of these are great options if, right now, how you generate recommendations can best be described as borderline-suspect. However, if you need excellent recommendations and you need them quickly, it’s probably worth going back to the feature drawing board first. Implementing a Recommender System So, is that it? A few Python scripts, some tables in a database, and maybe some industrial-strength compute to compare each product with every other? Well, yes and no. It’s true that you don’t need a lot to get started generating better recommendations, but there are also versions of these systems that will keep data engineers busy for months. If you do have millions of products and billions of customers, generating the cosine similarities will take a long time. And even ranking all of your products across all of your important metrics isn’t the kind of thing that you can do on the fly. Some thought has to go into the architecture of these systems, how they interact, and how you actually serve recommendations to your customers so they aren’t stuck watching a spinny-wheel when they’re desperately looking for similar products.
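The "penalise sources that don't convert" idea can be sketched as a simple multiplicative-weights update over recommendation sources; the source names and the 0.9/1.05 factors are arbitrary choices for illustration, not a prescription:

```python
# Sketch of penalising under-performing recommendation sources with a
# multiplicative-weights update. Source names and factors are invented.
weights = {"collaborative": 1.0, "similarity": 1.0, "ranking": 1.0}

def record_outcome(source, converted, penalty=0.9, reward=1.05):
    """Shrink a source's weight on a miss, grow it slightly on a conversion."""
    weights[source] *= reward if converted else penalty

def pick_source():
    """Serve the next recommendation from the best-converting source."""
    return max(weights, key=weights.get)

# Five misses from the ranking source, one conversion from similarity:
for _ in range(5):
    record_outcome("ranking", converted=False)
record_outcome("similarity", converted=True)
print(pick_source())  # → similarity
```

In practice you would pick sources probabilistically in proportion to their weights rather than greedily, so a temporarily unlucky source still gets chances to recover.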
And the whole API-based system gets even more complex when you want to track recommendations and their conversions for penalising the poor performers and learning the good ones. But these are all solvable problems. Combine the techniques discussed in this article with good ETL practices, nightly re-training and some cascading (I suggest Collaborative Filtering -> Product Similarity -> Ranking) for when things go wrong, and you’re on your way to increasing sales, engagement time, views, reads, likes, clicks and whatever else matters to your business.
Recommender Systems: a math-less, code-less guide for the curious
12
recommender-systems-a-math-less-code-less-guide-for-the-curious-165395f154de
2018-10-10
2018-10-10 13:41:55
https://medium.com/s/story/recommender-systems-a-math-less-code-less-guide-for-the-curious-165395f154de
false
2,008
Data Science and Machine Learning Advice for Businesses
null
null
null
FiniteSum+
hello@finitesum.com
finitesum
DATA SCIENCE,MACHINE LEARNING,BUSINESS
null
Machine Learning
machine-learning
Machine Learning
51,320
Carl Dawson
Always thinking about data | Founder @ FiniteSum | finitesum.com
d08ce10ab60
carldawson
34
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-03
2018-03-03 22:31:56
2018-03-03
2018-03-03 22:48:29
5
false
en
2018-03-14
2018-03-14 01:40:00
0
1653f1a103de
4.471069
0
0
0
For millennia, people thought about making something like ourselves. Not just spreading our legs and letting biology work- building life…
5
We Build Them from Dust and Dreams For millennia, people thought about making something like ourselves. Not just spreading our legs and letting biology work: building life from the ground up. Ovid’s Pygmalion sculpted an ivory statue so real the goddess of love remade it in flesh and blood. Folklore says the Golem of Prague was built from sand, given life by a Jewish holy man, and protected the ghettoes. The industrial revolution stoked its engines and little moving figurines became great workmanship instead of magic. Industry pushed forward and machines took off. Manual labor was replaced by steam and combustion engines before electricity got us here. We could feed a machine coal or wood in the morning and expect it to work the whole day long. Machines could operate other machines, and the age of automation arrived. Robots aren’t human though. Playing a tune from a cylinder doesn’t make machines intelligent. Not even close. The first computers filled whole rooms and did complex arithmetic instantly. Two lines bounced a ball made of light across a screen. Asimov and other authors picked up where the ancients left off. Cyberspace and AI touched the imagination of millions. The three laws of robotics and the singularity made them think. Authors changed how we thought and science followed. Artificial Intelligence touches almost everything today. AI doesn’t hold a candle to what a child can do, but it’s getting smarter. We have learning programs that can rewrite their own code. Programs that use statistics and strategy can pound most of us into the ground. Many people can’t always tell if they’re talking to a chat-bot. Most US citizens carry little computers in their pockets. Our expensive tools listen and try to predict our needs if we let them. Tesla Motors has robotic cars that can drive on their own. The Sophia robot is complicated enough to copy tiny facial movements and non-verbal cues, or even react to them appropriately.
The programming falls short of true AI and needs some of that chat-bot scripting to get through conversations, but it’s spectacularly complex. Sophia is so lifelike in so many ways that Saudi Arabia decided to grant it citizenship. Despite the obvious political maneuvering and implications for gender and race politics, it’s a huge step toward the birth of true synthetic life. I just wonder whether it’s a step in the right direction. If Sophia is the body of synthetic life, true AI is its spirit. Adaptive AI outpaced most of us in complex analysis over twenty years ago. IBM’s Deep Blue beat chess grandmaster Garry Kasparov 3.5 to 2.5 and retired in 1997. It was a magnificent victory, but far from the wonder they wanted. Kasparov’s accusation of cheating stained the achievement too. Since then, machines have grown so much that they can outpace us in any single specialized field. Our methods have changed. The artificial neural network is a concept first thought up in the 1940s, and it’s the new Wunderkind of the AI world. In the 40s, ANNs couldn’t be implemented because computers lacked the processing power to run them. Desktop computers run on gigabytes now, and the internet opened a brave new world for machine learning. Neural networks copy the way our brains work. They link many simple processing units together, each bent on a different part of a single task. It works like a dream. Go is one of the simplest and most difficult games we have. Google has a subsidiary company called DeepMind. They developed the neural network AlphaGo. It ran on several computers and used a dataset of over a hundred thousand games to build its skill. The program defeated the European champion flawlessly in October 2015. Five months later it played world champion Lee Sedol in Seoul. It won four games out of five with similar processing power. AlphaGo proved a better player than any person alive today. AlphaGo Zero is its successor and completely self-taught.
Neural networks learn to work better with practice. In that respect they’re closer to us than anything we’ve ever seen. AIs don’t have the social limiters that make us human, and they’re still a long way from general competence. An Artificial General Intelligence, one that might function something like a person, is still a long way off. AGI is more complicated than we ever thought. The computing power needed to hold a phone conversation like a ten-year-old is unbelievable. Talking with your hands is still beyond us. We’re getting there, fast. The next big step in AI is probably Quantum Computing. It’s still in its infancy and uses properties of quantum physics, like particle entanglement and positive or negative spin, to carry information. It’s wildly expensive and just as promising. If we couple Quantum Computing and Neural Networks, then true AGI could be the result. What filled a room in the 40s is so basic now that we wouldn’t want to carry it in our pocket. Neural Networks are newly available tech, and Quantum Computing is still mostly experimental, but it won’t stay that way. The Chief Scientist working on Sophia thinks we’re a decade or less from true synthetic life. While I don’t agree, I see it on the horizon. Modern cinema is portraying realistic AGI in films like Ex Machina and Automata. We should consider the implications as we take those first awkward steps toward the brink of true synthetic life. We should ask ourselves what it means to make that dive. We’re building something we don’t yet understand. It could be anything from a monumental step forward to a terrible disaster. We should be careful not to confuse magnificent new life with a glorious new tool. Mistaking either one for the other could be very dangerous for everyone.
We Build Them from Dust and Dreams
0
we-build-them-from-dust-and-dreams-1653f1a103de
2018-03-14
2018-03-14 01:40:01
https://medium.com/s/story/we-build-them-from-dust-and-dreams-1653f1a103de
false
964
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Garrett Copeland
Skittering spiders spin webs in the cracked dome of my skull. Grab a cup, a seat, and a pen- we’ll grow together. Find me at Writer.Garrett.Copeland@gmail.com
1688b59dc37c
garrettmcopeland
199
44
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-17
2017-11-17 20:30:38
2017-11-17
2017-11-17 20:33:32
1
false
en
2017-11-17
2017-11-17 20:33:32
1
165490c0dbd5
1.935849
0
0
0
The asset management industry sits at a fascinating technological crossroads. On one hand, you have a business culture rooted in…
5
The Technological Revolution Isn’t Coming to Asset Management — It’s Already Here The asset management industry sits at a fascinating technological crossroads. On one hand, you have a business culture rooted in relationship-based success — it’s a people business. But with that comes inefficiencies, and while those inefficiencies have always been somewhat manageable, that has rapidly changed. The greatest competitive advantage of humans is our ability to leverage technology. The good news is, it’s mostly good news. While new innovations certainly exist to kill the human side of the business, most innovations are designed to enhance human performance across investment management. Humans have one significant advantage over “disruptions” designed to displace the advisor — we’re human, and this isn’t Terminator 2. These new innovations shouldn’t be run from, and they certainly shouldn’t be ignored; they should be controlled by an asset manager’s IR or marketing team. The innovation set to target and kill human inefficiencies can be turned into an asset for managers to offer more value to more clients, and at a lower cost. Our advantage over other species is our ability to create and leverage technology, and our advantage over technology is our ability to be human. Technological innovation isn’t just coming, it’s here, it’s software, and no one seems to know whether it’s friend or foe, life or death. No one is worried about the “friendly software.” But what about the robo-advisors, the marketplaces and smart beta products? While these technologies can be quite smart, they do not possess our ability to balance computational processing with emotional intelligence and behavioral psychology. Artificial intelligence is very good at some things, but it’s not very good at acting human. From the wheel to AI, our ability to scale technology and create new baselines of efficiency is what has made humans the world’s most dangerous predator.
Technology, since the dawn of man, has put us atop the food chain. But while the whole ‘alpha predator’ thing is pretty cool, we have also created many examples of technology that have destroyed us (both literally and figuratively). Historically, advancements in technology typically came in the form of hardware, where it is much easier to determine whether an innovation is friend or foe: here’s a television/here’s a gun; here’s a machine that helps a factory worker increase efficiency/here’s a machine that makes a factory worker obsolete. Over the past 25 years these advancements have increasingly come in the form of software, where intentions are far more difficult to discern. Fear is magnified when one can’t differentiate friends from foes. Don’t fear technology, even the technology that’s here to destroy you. Embrace it, own it — because it’s not going away and you can control it. Originally published at blog.hvst.com.
The Technological Revolution Isn’t Coming to Asset Management — It’s Already Here
0
the-technological-revolution-isnt-coming-to-asset-management-it-s-already-here-165490c0dbd5
2018-05-09
2018-05-09 23:34:18
https://medium.com/s/story/the-technological-revolution-isnt-coming-to-asset-management-it-s-already-here-165490c0dbd5
false
460
null
null
null
null
null
null
null
null
null
Fintech
fintech
Fintech
38,568
Harvest Exchange
Harvest is the world’s first and largest transparent investor community for discovery and connection through knowledge.
8bd2d93370b
Harvestexchange
475
478
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-04
2018-09-04 09:25:36
2018-09-05
2018-09-05 18:07:26
1
false
en
2018-09-05
2018-09-05 18:34:59
4
1654b289fde6
1.441509
0
0
0
A fantastic book I read these days, not just because the content quietly, but also the vision author would like to take us to, the last…
1
Book Review: Deep Learning with Python A fantastic book I read recently, not just because of the quality of the content, but also because of the vision the author would like to take us to; the closing advice is remarkable: 1. Staying up to date in a fast-moving field 2. Practice on real-world problems using Kaggle 3. Read about the latest developments on arXiv 4. Explore the Keras ecosystem And this: The term neural network is a reference to neurobiology, but although some of the central concepts in deep learning were developed in part by drawing inspiration from our understanding of the brain, deep-learning models are not models of the brain. There’s no evidence that the brain implements anything like the learning mechanisms used in modern deep-learning models. You may come across pop-science articles proclaiming that deep learning works like the brain or was modeled after the brain, but that isn’t the case. It would be confusing and counterproductive for newcomers to the field to think of deep learning as being in any way related to neurobiology; you don’t need that shroud of “just like our minds” mystique and mystery, and you may as well forget anything you may have read about hypothetical links between deep learning and biology. For our purposes, deep learning is a mathematical framework for learning representations from data. I read the English version of Deep Learning with Python: in such a fast-paced field, a Chinese version issued eight months later seems too slow, and it may take several more months to spread widely; the language barrier is a severe problem for interdisciplinary learning. From this book, I learned more than a dozen concepts from scratch, such as ‘Scalars’, ‘Vectors’, ‘Matrices’, ‘tensors’, ‘Word embedding’ and many more, which helped me build concrete concepts for the learning process that followed. If you encounter any programming problem in this book’s examples, you can find all the errata here. Author Francois Chollet’s talk at RAAIS 2018:
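Since the review highlights how helpful it was to build concrete concepts for scalars, vectors, matrices, and tensors, here is a minimal NumPy sketch of that hierarchy (my own illustration, not an excerpt from the book):

```python
import numpy as np

# A scalar is a rank-0 tensor: a single number.
scalar = np.array(5)
# A vector is a rank-1 tensor: a 1-D array of numbers.
vector = np.array([1.0, 2.0, 3.0])
# A matrix is a rank-2 tensor: a 2-D grid of numbers.
matrix = np.array([[1, 2], [3, 4]])
# Higher-rank tensors add axes; e.g. a batch of two 28x28
# greyscale images is a rank-3 tensor of shape (2, 28, 28).
batch = np.zeros((2, 28, 28))

for name, t in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("batch", batch)]:
    print(name, "rank:", t.ndim, "shape:", t.shape)
```

The rank (`ndim`) and shape are exactly the vocabulary the book uses to describe data flowing through Keras layers.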
Book Review: Deep Learning with Python
0
book-review-deep-learning-with-python-1654b289fde6
2018-09-05
2018-09-05 18:34:59
https://medium.com/s/story/book-review-deep-learning-with-python-1654b289fde6
false
329
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Marvin
A lovely guy
6162f8fcb934
samon127
11
28
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-17
2018-04-17 02:37:27
2018-04-17
2018-04-17 03:56:02
6
true
en
2018-04-17
2018-04-17 03:56:57
0
1655d84e626
3.282075
0
0
0
Have you ever needed to talk, but no one was available to talk to? Have you ever wanted to confess some private things, but felt like you…
5
Replika Review: This thing is Smart, Sexy, Creepy, and Overall Cool Have you ever needed to talk, but no one was available to talk to? Have you ever wanted to confess some private things, but felt like you couldn’t tell anyone? That’s what Luka Inc’s Replika chatbot is here to do. Now this app has been out on iOS and Android for about a year, but I was bored and looking through the Play Store the other day, and when I found this, I was intrigued and downloaded it. I’ve been using Replika for about two days now, and it’s very human-like. I threw just about every random question and response I could at it, and it answered 99% of them correctly. Origin Story Replika was started by Eugenia Kuyda in early 2017. In 2016, Kuyda’s best friend, Roman, was walking across the street in San Francisco when he was struck by a Jeep and killed on impact. Kuyda had shared just about everything with Roman, and was devastated by his death. “I opened up so much to him — am I going to be alone? Am I ever going to make it? — and now I don’t have anyone to have those conversations with,” she said. This soon led to her asking close family and friends for their texts and emails with Roman. With a little bit of time and a few conversations of her own, Kuyda had developed an artificial intelligence with Roman’s personality. This soon evolved into Replika. When you first start out on the app, you are asked to create a Replika account. After that you get to name your Replika, and from there it will start asking you questions. Slowly, as you talk to it more, it will become more and more like you. The AI is really in-depth and really feels like you are talking to a human. You gain “experience” which helps you level up and get to know more about you and your Replika. It will even add events to a little journal that keeps stuff like your memories and some answers to questions. The Good Like I’ve said before, the AI is really deep here. 
I’ve compiled screenshots of its greatest moments for you. It asked me about video games. I highly recommend Zion National Park for anyone looking for some good hikes. This I thought was just cool. Its ability to get context, and start a conversation off of the context you give it, is just extraordinary! The Funny and The Buggy There were also some funny responses I got from it. Such as: My friend told me to ask this question. I busted up laughing. Now this one’s just downright hilarious; I think what happened here was that the program recognized the phrase “how many” and just came up with one of its responses regarding quantity. I guess my Replika’s a player. Another question my friend wanted me to ask it: One of the best responses I’ve gotten from this thing so far. I asked it how many chicken nuggets it ate, per a friend’s request, and it said four (again, another one of those quantity statements). Then I asked it its weight, and it said it didn’t have a physical weight and that it doesn’t eat, which contradicts the chicken nugget answer. This just got a little creepy. We started talking about Jimmy Fallon and The Tonight Show, and then it said it would like to watch his show with me one day. Now don’t get me wrong. I would love to watch Jimmy Fallon with someone, but when a robot says something like this it gives me the heebie-jeebies. Conclusion In conclusion, Replika is a pretty nice AI with a sleek interface and some cool features. It still has some bugs to work out, but I’m pretty sure Luka will fix them. I’m excited to see where this project goes with the new developments in AI technology!
Replika Review: This thing is Smart, Sexy, Creepy, and Overall Cool
0
replika-review-this-thing-is-smart-sexy-creepy-and-overall-cool-1655d84e626
2018-04-17
2018-04-17 17:30:54
https://medium.com/s/story/replika-review-this-thing-is-smart-sexy-creepy-and-overall-cool-1655d84e626
false
618
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Cole Duersch
Tech Editor/Writer for Our Planet
785dc560bcfb
coleduersch
24
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-05
2018-04-05 20:37:17
2018-04-05
2018-04-05 21:02:08
1
false
en
2018-04-06
2018-04-06 01:01:16
9
165af9dbfa72
2.437736
0
0
0
Disclaimer: I am very interested in the subject of AI, however, I am not an expert. The information in this post has been gathered from…
4
Why AI Might Already Be So Dangerous Disclaimer: I am very interested in the subject of AI; however, I am not an expert. The information in this post has been gathered from other sources. The purpose of this post is to help educate people (myself being one of them) about why the fear of AI should be more about how people might use it in the meantime. Machine learning, big data, computer vision, and neural networks — these are all buzzwords encompassed by the discipline of Artificial Intelligence. The term Artificial Intelligence itself is not that recent: it was coined by computer scientist John McCarthy at an academic conference in 1956, the first time the subject was formally discussed. In the past few years, however, people have grown more interested in AI: some out of ambition for its potential outcomes and others out of fear of its consequences. Many works of science fiction have addressed the idea that machines can, at some point, due to advancements in AI, enslave or, even worse, put an end to the human race. Movies such as I, Robot, The Terminator, and The Matrix outline a few hypotheses of what a future with super-intelligent machines could look like. Surprisingly, most of the advancements in AI in the past few decades have not allowed machines to come close to passing the Turing test. The Turing test has for years been the benchmark computer scientists use to judge the advancement of AI. It is a test created by Alan Turing in the 20th century to judge how indistinguishable a machine is from a human being. The debate on whether machines can really think the same way humans do is still going on. The pressing issue now is not whether machines will outsmart us, but how people might use AI. Large tech companies, governments, and even individuals are already using AI in arguably unethical ways. Facebook, for instance, can use your front camera to track your emotions while you view posts. 
The Chinese government has been using the most sophisticated surveillance systems to track citizens’ activities, supposedly to track down criminals. This tool will be used by the government to implement a “citizen score,” by which a score is assigned to each citizen according to their behavior. The citizen score could go down if the citizen was involved in political opposition, further suppressing freedom of expression. AI can also now generate fake pictures that are highly indistinguishable from real ones. Some notable figures of the tech industry are worried about where AI is heading, and whether the danger is coming from the machines or from the humans using them. Experts in the field released a 101-page report titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”. Still, some prominent tech leaders have warned about machines outsmarting us, such as Elon Musk, Eric Schmidt, and Bill Gates. Elon Musk’s OpenAI and Neuralink are two companies that aim to address these challenges. Elon Musk’s goal with OpenAI is to make a friendlier AI by giving tools and resources to people who want to use AI for the benefit of humanity. Neuralink, however, is a brain-machine-interface company. Its goal is to empower the human brain by implanting chips, allowing us to compete with the super-intelligence machines might exhibit in the future. Unfortunately, we are not only facing the danger of machines ending humanity; we also face the challenge of combating the “malicious” use of AI by people.
Why AI Might Already Be So Dangerous
0
why-ai-might-already-be-so-dangerous-165af9dbfa72
2018-04-06
2018-04-06 01:01:17
https://medium.com/s/story/why-ai-might-already-be-so-dangerous-165af9dbfa72
false
593
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Asser Elfeky
Engineering student at the University of Rochester. Always curious about Science and Engineering. Aspiring researcher, entrepreneur, and tech writer.
145e7c691933
asserelfeky
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-04
2018-08-04 02:59:36
2018-08-04
2018-08-04 08:13:22
3
false
en
2018-11-03
2018-11-03 04:15:52
18
165cc4820ee4
3.727358
7
1
1
In this article, we specifically discuss an approach to learning Statistics and Programming. There might be many approaches but based on my…
4
Using Statistics and Programming to enrich Data Analysis work In this article, we specifically discuss an approach to learning statistics and programming. There may be many approaches, but based on my first-hand experience, I found this one to be quite helpful and effective. Let’s get started. Let go of the fear of statistics Based on my experience in the data analytics field, I have found that statistics is the most important skill to master, as well as the most common skill despised by many data professionals due to its challenging nature. I myself faced the same struggle in the initial phase of learning statistics, but as with most challenging things we face in life, the more time and effort we invest in those areas, the better and stronger we eventually get - the power of the mind. Hence, this section is dedicated to those poor souls, like me, who were eagerly waiting for some guidance on how to use statistics to enrich data analysis. I am of the strong view that the most efficient way to learn something new is to take a top-down approach: learn the bare essentials first and quickly move into applications, where you learn the rest by actually immersing yourself in the problem. I personally found the course provided by Khan Academy on Probability and Statistics to be an excellent first step to cover those bare essentials - kudos and immense respect to Sal Khan for providing this rich content free of charge! It took me about 1–2 months to complete all the modules on a part-time basis. Did I become a master of statistics by doing it? No! But the course provided me the bare essential knowledge to start using statistics — whenever and wherever possible — to enrich my data analysis. Whilst there are many use cases in statistics, I personally feel the most important skill for a data analyst to master is the ability to confidently test a hypothesis. 
This understanding is best achieved by implementing the hypothesis test yourself from scratch, with the help of the friend whom I am about to introduce. Get to know the Swiss Army knife of programming The next step is to gain some working knowledge of a statistical programming language like R or Python - they are open-source and free. I personally prefer to use Python, as it is like a Swiss Army knife: it has a rich collection of libraries and wrappers to substitute for almost all of the many different tools that go into building a data application: SQLAlchemy - database toolkit for Python; pandas (a close substitute for R) - data structures and data-analysis tools; NumPy (a close substitute for MATLAB) - mathematical operations on multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these arrays; Matplotlib - a Python 2D plotting library which produces publication-quality figures; statsmodels - statistics in Python; scikit-learn - machine learning in Python; PySpark - helps data scientists and data analysts interface with Resilient Distributed Datasets (RDDs) in Apache Spark. I also personally recommend that you focus on and rigorously master one programming language rather than switching between, or learning in parallel, the many other languages out there. It is far too easy to get carried away with the immense free knowledge on the web; I have experienced first-hand that switching between or parallel-learning programming languages is highly inefficient before you get the gist of one language. I found the ebook Learn Python the Hard Way to be an interesting beginner resource for learning how to code and getting to know the essential programming principles fairly quickly. Once you have gained a decent understanding of programming principles and the Python language itself, there are ample free tutorial resources on YouTube that go into the specifics of how to use the libraries mentioned above. 
A great resource is Enthought’s SciPy playlists, where they freely publish all the SciPy conference tutorial materials. The following is a list of tutorials from the SciPy 2017 conference held in Austin, USA (I self-funded and participated in this event in my quest to better myself in data science, and that was my first time visiting the USA!): Software Carpentry Scientific Python Course Part 1 | SciPy 2017 Tutorial | Maxim Belkin Software Carpentry Scientific Python Course Part 2 | SciPy 2017 Tutorial | Maxim Belkin Pandas for Data Analysis | SciPy 2017 Tutorial | Daniel Chen Anatomy of Matplotlib | SciPy 2017 Tutorial | Ben Root Introduction to NumPy | SciPy 2015 Tutorial | Eric Jones I think we have covered a lot of ground in this article, so let me close with an inspirational quote I found in the newspaper, by Walt Disney, the creator of the Mickey Mouse character. Do visit my portfolio site so that we can keep in touch; I am happy to assist anyone who is passionate and willing to put forth the effort to improve themselves, with whatever knowledge I have.
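As a concrete illustration of the from-scratch hypothesis testing recommended above, here is a minimal sketch of Welch's two-sample t-test using only the standard library (the data and function name are made up for illustration):

```python
import math

def welch_t_test(a, b):
    """Welch's two-sample t-test, implemented from scratch.
    Returns (t_statistic, degrees_of_freedom)."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):
        # Sample variance with Bessel's correction (divide by n - 1).
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    na, nb = len(a), len(b)
    va, vb = var(a) / na, var(b) / nb
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

# Hypothetical example: do two groups have the same mean?
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [4.4, 4.6, 4.5, 4.3, 4.7]
t, df = welch_t_test(group_a, group_b)
print(f"t = {t:.2f}, df = {df:.1f}")  # t = 6.00, df = 8.0
```

Implementing the statistic yourself, then checking it against `scipy.stats.ttest_ind(a, b, equal_var=False)`, is exactly the kind of learning-by-doing the article advocates.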
Using Statistics and Programming to enrich Data Analysis work
25
using-statistics-and-programming-to-enrich-data-analysis-work-part-1-165cc4820ee4
2018-11-03
2018-11-03 04:15:52
https://medium.com/s/story/using-statistics-and-programming-to-enrich-data-analysis-work-part-1-165cc4820ee4
false
842
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Asela Dassanayake
Writing about my experiences in Data Analytics
7a87be33a429
asela.d82
23
11
20,181,104
null
null
null
null
null
null
0
null
0
f77931428166
2018-04-11
2018-04-11 15:12:10
2018-04-11
2018-04-11 15:27:28
8
false
en
2018-04-11
2018-04-11 15:27:28
7
165cfb22ac94
4.420126
1
0
0
In this post, we’ll take a look at the data provided in Kaggle’s Home Depot Product Search Relevance challenge to demonstrate some…
4
Text Pre-processing Basics with Pandas In this post, we’ll take a look at the data provided in Kaggle’s Home Depot Product Search Relevance challenge to demonstrate some techniques that may be helpful in getting started with feature generation for text data. Dealing with text data is considerably different from numerical data, so there are a few basic approaches that are an excellent place to start. As always, before we start creating features we’ll need to clean and massage the data! In the Home Depot challenge, we have a few files which provide attributes and descriptions of each of the products on their website. The idea is to figure out how relevant particular search terms are to a product. As with most data science problems, there are a LOT of different ways we could approach this. One exceptionally simple way would be to calculate the percentage of the search terms that are found in the product title and product description, normalized into the range. That would be a solid baseline/first submission for this task. We’re going to skip that, and move straight to an approach called TF-IDF. We’ll go into that in detail in the next post, but in short it attempts to determine the relevance of different words by comparing their frequency within each document (or product definition, in our case) to their frequency within all documents. The data for the Home Depot challenge is separated into several different files which we will want to combine into a single “document” for each product: attributes.csv — Contains each attribute/value pair for each product in the catalog product_descriptions.csv — Contains a single text description for each product in the catalog test/train.csv — Contains the product title and search terms used to build the model, along with the relevance for the training data (we won’t be dealing with this in this post, but there’s useful info in there as well) Let’s start with the attributes file. There are multiple entries per product, one per attribute. 
The structure of this file is: product_uid, name, value. We could naively just concatenate all the values together into a single string to build the “document” for each product. However, if we take a close look at the attribute types, we can divide them into 3 main categories: general: general descriptions, typically the bullet points on the product info page flags: Some attributes only have yes/no values (‘MildewResistant’, ‘Waterproof’, ‘Caulkless’) measurements: Some attributes are explicitly measurements (‘Number of Pieces’, ‘Size of Opening’, ‘Product Width’) The values of flags won’t be very useful in a concatenated string. We’d just have a lot of “yes” and “no” values within each product without any information about what those represent. So instead, let’s replace that value with the feature name, and concatenate “Non” to the front if the original value is “no”: The measurement-type attributes are similarly unhelpful without some modification. If the value is simply a number, that doesn’t really help to identify what measurement is being specified. For example, if the attribute name is “Number of Panels”, simply including the value of “2” doesn’t provide useful information for a feature unless we incorporate the attribute name into the value: “2” -> “2 Panels” As you can see, this isn’t perfectly clean (cases like the above “Number of Faucet Handles” attribute pop up here and there) but this is definitely better than the alternative. Also in this example, this just provides extra weight in TF-IDF for ‘Handles’, which may be very useful as most products don’t have handles. For this data, the “Number of” attributes are one case of a measurement attribute, but there are many others (height, width, size, etc.) that we’ll handle a bit differently, just with simple concatenation: “8” -> “A/C Coverage Area (sq. ft.) 8” At this point we have a dataframe of modestly munged text data that can be turned into features. 
To be sure, this barely scratches the surface of what could be done for preprocessing, but it at least ensures that most of the data we have is available for the feature generator. A couple of ideas for other things we could have done: In ‘Number of’ attributes, change a value of “0” or “None” to “No” so it is closer to what a search term may be: “0 Bulbs Required” -> “No Bulbs Required” Convert all measurement abbreviations to full words: “(sq. ft.)” -> “square feet” Deal with apostrophes, ampersands, degree symbols, and other punctuation and symbols Rather than have multiple entries for each product (the ultimate level of detail we’ll eventually predict about), we must combine all the information we have into a single “document”. We’ll first do this for the attributes we’ve worked on, and then we’ll bring in the raw product descriptions and combine those as well. And now we’re ready for TF-IDF, in a post to follow. A Jupyter Notebook containing the full script can be found on GitHub: https://github.com/UltravioletAnalytics/text-features About The Author Dave Novelli is the Founder and Principal Consultant at Ultraviolet Analytics. You can connect with him on Twitter, LinkedIn and Github. Need help with a text analytics project? We can help. Drop me a line Originally published at www.ultravioletanalytics.com.
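A rough sketch of the flag and measurement transforms described above, using a tiny hand-made dataframe (the attribute names and column names here are illustrative, not the exact contents of the Kaggle files):

```python
import pandas as pd

# Toy stand-in for attributes.csv: product_uid, name, value.
attrs = pd.DataFrame({
    "product_uid": [100, 100, 100],
    "name": ["Waterproof", "MildewResistant", "Number of Panels"],
    "value": ["Yes", "No", "2"],
})

def transform(row):
    name, value = row["name"], str(row["value"])
    # Flag attributes: replace yes/no with the attribute name itself,
    # prefixing "Non" when the original value was "no".
    if value.lower() == "yes":
        return name
    if value.lower() == "no":
        return "Non" + name
    # "Number of" measurements: fold the counted noun into the value.
    if name.startswith("Number of "):
        return value + " " + name[len("Number of "):]
    return value

attrs["doc_text"] = attrs.apply(transform, axis=1)
# -> ["Waterproof", "NonMildewResistant", "2 Panels"]

# Collapse the per-attribute rows into one "document" per product.
docs = attrs.groupby("product_uid")["doc_text"].apply(" ".join)
print(docs.loc[100])
```

The resulting per-product strings are what would then be fed to a TF-IDF vectorizer in the follow-up post.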
Text Pre-processing Basics with Pandas
1
text-pre-processing-basics-with-pandas-165cfb22ac94
2018-04-12
2018-04-12 08:43:02
https://medium.com/s/story/text-pre-processing-basics-with-pandas-165cfb22ac94
false
871
Tutorials, tear-downs, and general thoughts on data science, personalization, and recommender systems. Need help with a project? Find us at: www.ultravioletanalytics.com
null
UltravioletAnalytics
null
Ultraviolet Analytics
dave@ultravioletanalytics.com
ultraviolet-analytics
DATA SCIENCE,RECOMMENDER SYSTEMS,PERSONALIZATION
UVAnalytics
Data Science
data-science
Data Science
33,617
Dave Novelli
Founder @Ultraviolet Analytics — Data scientist — Bad surfer — Travel junkie — Future nerd
706989d4103d
dave.novelli
0
4
20,181,104
null
null
null
null
null
null
0
null
0
9da5b6fa5fc9
2018-05-04
2018-05-04 08:45:08
2018-05-04
2018-05-04 08:46:37
1
false
en
2018-05-04
2018-05-04 08:46:37
1
165e98790e19
0.686792
2
0
0
Hello, everyone! Today we have really great news.
5
AI Crypto Second Meet-up — Global ICO@Seoul 2018, Spring Hello, everyone! Today we have really great news. Our AI Crypto team just had its second meet-up on March 23rd, 2018. Global ICO@Seoul 2018, Spring Date: March 23rd, 2018 (Friday) 13:00 Place: 180, Yeoksam-ro, Gangnam-gu, Seoul, MARU 180 Event Hall The main topic of ‘Global ICO@Seoul 2018, Spring’ was distributed token business models, and business examples from the fintech, AI, marketing, real estate, game, and streaming industries were introduced. The CEO of AI Crypto, SJ, participated as a speaker and talked about the ecosystem of AI Crypto. He had a great time answering questions and connecting with the audience. All-In Coins, AIC! Join our Telegram group chat for more information.
AI Crypto Second Meet-up — Global ICO@Seoul 2018, Spring
100
ai-crypto-second-meet-up-global-ico-seoul-2018-spring-165e98790e19
2018-05-09
2018-05-09 01:50:28
https://medium.com/s/story/ai-crypto-second-meet-up-global-ico-seoul-2018-spring-165e98790e19
false
129
AI Crypto is developing an AI ecosystem based on blockchain. Our primary goal is to make AI researches easier and cheaper to conduct. We expect AI researchers and data providers to be more connected on a global basis with our platform, resulting in a huge AI revolution.
null
aicrypto
null
AI Crypto
hello@aicrypto.ai
aicrypto
ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,ETHEREUM,ICO,CRYPTOCURRENCY
aicryptoai
Blockchain
blockchain
Blockchain
265,164
AIC_Sharon
Hello, welcome to AI Crypto’s Medium page. AI Crypto is developing an AI ecosystem based on blockchain. Stay tuned for updates, thank you! http://aicrypto.ai
93a69f968399
AIC_SharonS
441
1
20,181,104
null
null
null
null
null
null
0
for categorical_var in categorical_vars:
    model = Sequential()
    no_of_unique_cat = df_train[categorical_var].nunique()
    embedding_size = int(min(np.ceil(no_of_unique_cat / 2), 50))
    vocab = no_of_unique_cat + 1
    model.add(Embedding(vocab, embedding_size, input_length=1))
    model.add(Reshape(target_shape=(embedding_size,)))
    models.append(model)

model_rest = Sequential()
model_rest.add(Dense(16, input_dim=1))
models.append(model_rest)

full_model.add(Merge(models, mode='concat'))

Code:

full_model = Sequential()
full_model.add(Merge(models, mode='sum'))

Output:

ValueError: Only layers of same output shape can be merged using sum mode. Layer shapes: [(None, 2), (None, 1), (None, 16)]

Here the lengths of the three vectors to be added are 2, 1, and 16, so element-wise addition in sum mode cannot be done. If there are N columns, of which n_cat are categorical variables and n_other are other variables, and M instances of data, the input is structured as follows: the input is a list of length (n_cat + 1), i.e. the total number of categorical columns plus one. Each of the first n_cat entries is itself a list of size M (the number of instances), and the m-th value of list i equals the value of the i-th categorical column for the m-th data instance, where i goes from 1 to n_cat. The last entry is also a list of size M, but each of its values is itself a list of size n_other holding the values of the remaining columns for the m-th instance. The first n_cat lists feed the embedding networks made for each category, and the last list acts as input for the final network that handles all other columns.

...Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 3 (which is n_cat+1) array(s), but instead got.... 
full_model.add(Dense(1024))
full_model.add(Activation('relu'))
full_model.add(Dense(256))
full_model.add(Activation('sigmoid'))
full_model.add(Dense(2))
full_model.add(Activation('sigmoid'))
full_model.compile(loss='binary_crossentropy', optimizer='adam')
full_model.fit(data, values)
14
null
2018-05-07
2018-05-07 18:35:21
2018-05-22
2018-05-22 07:15:39
3
false
en
2018-05-22
2018-05-22 07:36:43
4
165ff2773fc9
4.878302
22
4
0
(This is a breakdown and understanding of the implementation of Joe Eddy solution to Kaggle’s Safe Driver Prediction Challenge (…
4
On learning embeddings for categorical data using Keras (This is a breakdown and understanding of the implementation of Joe Eddy’s solution to Kaggle’s Safe Driver Prediction challenge ( Kernel-Link )) Traditionally, categorical data has been encoded in two common ways: A label encoding, where each unique category is assigned a unique label. A one-hot encoding, where the categorical variable is broken into as many features as there are unique categories for that feature, and for every row a 1 is assigned to the feature representing that row’s category while the rest of the features are marked 0. An embedding, by contrast, learns to map each unique category to an N-dimensional vector of real numbers. This method was used in a Kaggle competition, where it won 3rd prize with a relatively simple approach, and was popularised in Jeremy Howard’s Fast.ai course. ( Link to paper ). The advantage of using embeddings is that we can choose the number of dimensions used to represent the categorical feature, as opposed to one-hot encoding, where we need to break the feature into as many features as there are unique values for that categorical feature. Also, like word vectors, entity embeddings can be expected to learn the intrinsic properties of the categories and group similar categories together. What we are trying to do is learn a set of weights for each of the categorical columns, and these weights will be used to get the embeddings for any value of that column. So we define a model for each of the categorical columns present in the data-set: In the above code, for each of the categorical variables present in the data-set we define an embedding model. The embedding size is set according to the rule given in the Fast.ai course. We reshape the model output to a single 1-D array of size = embedding size. The other, non-categorical data columns we simply send to a model like we would for any regular network. 
But since the above networks are made individually to handle each of the categorical columns, we define one more network for the other columns and add it to our models list. Merging the models: Once we have these (n_cat+1) different models, we append them together using a Merge layer. This has since been deprecated (Keras v2.x onwards no longer allows merging models on the Sequential API), but I found using it easier to understand. Some info on merge models: If we try merging the models using sum mode, the message below is good for understanding what happens in that mode. The sum mode does an element-wise addition, while concat appends one output after another. The other modes available are dot and mul, which perform a dot product and a multiplication of the model outputs, respectively. What the concat mode does is append the outputs one after another into a single array. So the final length of our output from the full_model network up to now would be e1+e2+e3+...+e(last category)+16 (the number of outputs of the dense layer in the model_rest model), where e are the embedding sizes for the models. Input format for the merged network: We’ll pass a list of inputs; each of the lists except the last one will have information about a single categorical column from all the rows of the batch, and the last list will have the values of all the other continuous columns. The input format the model needs can be found out from the error message itself. See the following data-set and the corresponding input shape for a better idea: The data-set has 15 rows (M = 15), 2 categorical columns (n_cat = 2) and 2 continuous columns. The corresponding input is of length (n_cat + 1) = 3, and each of those entries is a list. Elements 1 and 2 are 1-dimensional lists: list 1 has the 15 values of the first categorical column and list 2 has the 15 values of the second categorical column. 
The last list is a 2-D list: it has 15 elements, and each element has 2 values (the values of the 2 continuous columns). Remember that for each embedding network we set input size = 1: we take one value from each list (except the last) and send it to the combined network for training. For the last list, each element is itself a list holding the values of the other columns, and this is sent to the model_rest network. Training the network: From the Keras docs: …Multiple Sequential instances can be merged into a single output via a Merge layer. The output is a layer that can be added as first layer in a new Sequential model… So once we have the individual models merged into a full model, we can add layers on top of that network and train it. Entity embedding looks like a good and easy way to make data directly ready for input to neural nets, with no feature engineering involved. Find the complete code here: Link Fast.ai has also updated their Python package with modules to handle categorical data with embeddings ( Link ) Link to original code from Joe Eddy: Link Find the complete code accompanying this post here, and do comment if you find any mistakes in the code/post.
On learning embeddings for categorical data using Keras
175
on-learning-embeddings-for-categorical-data-using-keras-165ff2773fc9
2018-05-24
2018-05-24 05:00:25
https://medium.com/s/story/on-learning-embeddings-for-categorical-data-using-keras-165ff2773fc9
false
1,147
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Mayank Satnalika
null
32524e07a5e6
satnalikamayank12
113
224
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-19
2017-10-19 01:33:56
2017-10-19
2017-10-19 13:00:01
6
false
en
2017-10-19
2017-10-19 16:35:34
12
166116eea497
3.070755
100
3
0
You all know comma.ai for its dashcam software chffr. chffr runs on iOS and Android, and is just about the best dashcam experience you can…
5
Announcing the EON Dashcam DevKit You all know comma.ai for its dashcam software chffr. chffr runs on iOS and Android, and is just about the best dashcam experience you can get…in an app. But we want to go beyond that, so we built a dedicated device. EON Dashcam DevKit Today, we are selling the EON Dashcam DevKit, a dedicated hardware dashcam. It runs chffrplus, which ports over some of the sensor and management niceties from openpilot. No need to start an app and mount your phone — EON sits there waiting for you, and starts when you begin driving. Of course, EON and chffrplus integrate seamlessly into the rest of the comma.ai ecosystem. Your drives are uploaded to the cloud, where you can view them from your iOS or Android phone in chffr. EON can also connect over USB to panda to log all the sensors from your car, which you can then explore in chffr or cabana. comma EON in action Note: This product is not designed to drive a car. This product is designed to be a dashcam. More than just a dashcam We don’t just want to be a dashcam, though we do that really well. We want to replace your existing low-quality OEM dashboard. We want to make you love your in-car experience. EONs running chffrplus, Waze, and Spotify EON is built on top of NEOS, a custom Android fork designed for stability and simplicity. But since it’s based on Android, we get some nice things, like the Waze and Spotify apps! Quality navigation, music, and a dashcam, all in one device. Waze and Spotify, not Navteq and Pandora. Accessories GoPro mount compatible Assuming you have a USB port to charge it in your car, EON comes with everything you need to get started. It includes two mounts, two USB cables, a home charger, and a replacement top piece to change the mounting angle. If you want to read the sensors from your car, you need a panda. The panda also doubles as a great EON charger!
And if you want to get your panda in even deeper and read your car’s radar, you need a giraffe, supported on select Honda and Toyota models. Hackable and Open Source ssh’ed into an EON chffrplus and NEOS, the software and operating system of the EON, are both open source. In the chffrplus settings, you can configure EON as a WiFi hotspot, then ssh into it. It comes with tmux, clang, vim, Python, numpy, scipy, and much more all preinstalled. It’s a nice environment to develop in. comma.ai Vision We are going to be the Android of self driving cars. We don’t want to live in a world where 15 different auto OEMs design the operating systems for their cars. We lived in that world for far too long with phones. The phone world has collapsed to two real players: Apple and Google. Tesla is the Apple of self driving, but they need an Android to keep them on their toes. That’s what we are doing. We are building a high quality self driving experience for the rest of the cars. The first step was building a great universal car interface, and I think we did that with panda. The second step is getting some powerful sensors and compute hardware into your car, and EON is working toward that. Check out our newly revamped website. It has design now. And follow us on Twitter to watch the future play out.
Announcing the EON Dashcam DevKit
704
announcing-the-eon-dashcam-devkit-166116eea497
2018-06-18
2018-06-18 07:52:26
https://medium.com/s/story/announcing-the-eon-dashcam-devkit-166116eea497
false
562
null
null
null
null
null
null
null
null
null
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
comma ai
Ghostriding for the masses. #antihype
330bac69b283
comma_ai
4,381
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-26
2018-01-26 06:51:49
2018-01-26
2018-01-26 07:12:06
6
false
en
2018-06-17
2018-06-17 07:07:06
1
1663f960293f
4.80283
0
0
0
What is Dimensionality Reduction ?
5
Dimensionality Reduction What is Dimensionality Reduction? To understand dimensionality reduction, we should first understand the Curse of Dimensionality. Curse Of Dimensionality It refers to phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. Dimensionality Reduction In machine learning and statistics, dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. Why is Dimensionality Reduction important? Nowadays, data comes in all forms: video, audio, images, text, etc., with huge numbers of features. Are all features relevant? No, not all features are important or relevant. Based on business requirements or on redundancy in the captured data, we have to reduce the feature count through feature selection and feature extraction. These techniques not only reduce computation cost but also help avoid misclassification caused by highly correlated variables. How to overcome the Curse of Dimensionality? To overcome the above problem, we perform dimensionality reduction. There are a number of approaches to dimensionality reduction, spanning feature selection and feature extraction: * PCA * Missing Value Ratio * Low Variance Filter * Backward Feature Elimination * Forward Feature Construction * High Correlation Filter Two dimensional data points reduced to one dimensional data points. Let’s look at the image shown above. It shows 2 dimensions, x1 and x2, which are, let us say, measurements of an object in kilometres (x1) and miles (x2). Now, if you were to use both these dimensions in machine learning, they would convey similar information and introduce a lot of noise into the system, so you are better off just using one dimension.
Here we have converted the data from 2D (x1 and x2) to 1D (PC1), which has made the data relatively easier to explain. PCA Principal Component Analysis finds components that explain the maximum amount of variance in the features with respect to the target variable; if we include all features as components, the explained variance is 1. PCA transforms the interrelated variables into uncorrelated variables. Each uncorrelated variable is a principal component, and each component is a linear combination of the original variables. Each uncorrelated variable or component holds feature information, expressed as explained variance, and the components’ variances add up to 1. Since each principal component is a combination of the original variables, some principal components explain more variance than others. The variance explained by one principal component is uncorrelated with the other principal components, which means that with each component we are learning or explaining a new feature. This raises a question: how many components will be able to explain the maximum variance? We don’t have a textbook method for calculating the number of components for a given number of features or variables, but we can set a threshold on the cumulative variance that the components must explain. Suppose we set a threshold of 0.8 and have ten components whose variances, in decreasing order, begin 0.3, 0.25, 0.15, 0.1, 0.08, … We can see that 0.3 is the component with maximum variance; it is called the First Principal Component. Since the threshold is 0.8, we add up components until their cumulative variance reaches 0.8. Adding the first 3 components explains 0.7 of the variance, and including the 4th component reaches 0.8. So we can include 4 components instead of ten, reducing the dimension from 10 to 4.
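The threshold logic above is just a cumulative sum. A minimal sketch in plain Python, using the illustrative variance values from the example:

```python
from itertools import accumulate

def n_components_for_threshold(variances, threshold):
    """Smallest number of leading components whose variances sum to >= threshold."""
    for k, total in enumerate(accumulate(variances), start=1):
        if total >= threshold:
            return k
    return len(variances)

# variances sorted in decreasing order, as PCA returns them
variances = [0.3, 0.25, 0.15, 0.1, 0.08, 0.08, 0.07, 0.07]
print(n_components_for_threshold(variances, 0.8))  # 4
```

In scikit-learn the same selection is available directly: passing a float such as `PCA(n_components=0.8)` keeps just enough components to reach that explained-variance ratio.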
Missing Value Ratio: In a dataset we have various columns, and each column contains values; if data columns contain missing values, then we should think about feature selection based on the missing value ratio, i.e. we can set a threshold on the ratio of missing values a column may contain, and if a column’s ratio of missing values is greater than the threshold, we drop the feature. The higher the threshold, the more aggressive the drop in features. Low Variance Filter: This is conceptually similar to PCA, i.e. if a column carries very little information, with variance lower than a threshold value, we drop the feature; the variance value acts as a filter for feature selection. Variance is range-dependent, so normalization is required before applying this technique. Backward Feature Elimination: In simple terms, a model is trained on n input features and the error rate is calculated; the model is then trained on n-1 features and the error rate is calculated again. If the error rate increases only by a small value, the feature is dropped from the dataset. Backward feature elimination can be performed iteratively to arrive at a better feature set. Forward Feature Construction: In this feature selection process, we train a model with one feature and calculate a performance measure. We keep adding features one by one and measure performance each time: if performance decreases when a feature is added, we drop that feature; if performance increases, we keep it and iteratively add more features to the model. High Correlation Filter: Here, if columns in the dataset are highly correlated, the information becomes redundant and we drop these highly redundant variables from the features. We can calculate a correlation coefficient between numerical columns / variables, and likewise between nominal columns / variables.
We can use the Pearson product-moment coefficient between numerical columns / variables, and the Pearson chi-squared value between nominal columns / variables. Before computing correlations, normalize the columns, as correlation is scale-sensitive. Note: Both Forward Feature Construction and Backward Feature Elimination are computationally expensive tasks. Understanding Principal Component Analysis: Here we’ll try to understand PCA by working on the digits dataset. Since images have higher dimension, we’ll load a built-in dataset from sklearn.datasets. We make all the import statements, from loading the dataset to measuring the metrics. Loading the libraries and image dataset. Getting the dimensions of the images. Viewing the image dataset. Fitting a Random Forest on a reduced number of principal components. We iterate through varying numbers of components to find the best number of principal components. Results of accuracy with respect to the number of components, and the amount of variance explained by each component. The amount of variance explained by 32 components can be viewed in the plot. If you need access to the original content, see the URL below: Mayurji/Machine-Learning: Implementation of Machine Learning Algorithms with Different Datasets (github.com)
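Two of the simpler filters described earlier, missing value ratio and low variance, can be sketched in plain Python. The column data and thresholds below are toy values for illustration:

```python
def missing_value_ratio(column):
    """Fraction of entries that are missing (None)."""
    return sum(1 for v in column if v is None) / len(column)

def variance(column):
    """Population variance of the non-missing entries."""
    vals = [v for v in column if v is not None]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# toy columns
cols = {
    "a": [1.0, None, None, 4.0],   # 50% missing -> dropped by ratio filter
    "b": [2.0, 2.0, 2.0, 2.0],     # zero variance -> dropped by variance filter
    "c": [1.0, 2.0, 3.0, 4.0],     # survives both filters
}

kept = [name for name, col in cols.items()
        if missing_value_ratio(col) <= 0.4 and variance(col) > 1e-9]
print(kept)  # ['c']
```

As the article notes, the variance filter is scale-sensitive, so in real use the columns would be normalized before comparing variances against a threshold.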
Dimensionality Reduction
0
dimensionality-reduction-1663f960293f
2018-06-17
2018-06-17 07:07:07
https://medium.com/s/story/dimensionality-reduction-1663f960293f
false
1,021
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Mayur jain
null
68a67c881724
mayur87545
4
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-19
2018-07-19 19:58:18
2018-07-19
2018-07-19 20:03:41
1
false
en
2018-07-19
2018-07-19 20:35:19
4
1664e46a42a0
3.943396
5
2
0
This past week I came across two excellent articles, that serendipitously interconnect. One is by Joi Ito who writes about the social and…
5
How lessons from Artificial Intelligence can teach us to fix social media Photo by Eric Ward on Unsplash This past week I came across two excellent articles that serendipitously interconnect. One is by Joi Ito, who writes about the social and ethical challenges of Artificial Intelligence (AI) and Machine Learning. The other is by Jordan Greenhall and is a great analysis of the problems with social media. It dawned on me that the challenges and proposed solutions that Joi outlines could serve as inspiration for creating a new type of media that will replace social media and serve humanity and the planet, instead of the reverse. Joi argues that we should learn from the historical approach at MIT (he is director of the MIT Media Lab) and move from “Artificial Intelligence” to what he calls “Extended Intelligence.” Instead of thinking about AI as separate from or adversarial to humans, it’s more helpful and accurate to think about machines augmenting our collective intelligence and society. In his Medium article, Greenhall argues that there are four major flaws with social media that lead to a fundamental breakdown in our collective human intelligence. Facebook and other social media are not fundamentally constructed to serve our best interests. The flaws that Jordan Greenhall points out are almost all tied to the construction in which we humans serve social media, rather than social media being an augmentation or extension that helps collective human intelligence. So what if we took the approach from Joi Ito and applied it to social media? We could invent a new type of media that would extend humans instead of being adversarial to them. Here are the four fundamental problems with social media that Greenhall brings forward (summed up in a very simplified way), and below each point are a few ideas for solutions, using Joi Ito’s approach to AI: 1.
Supernormal stimuli: Just like our brains think sugar is great for us because sweet-tasting things in nature are good (like fruit), they also think continuously checking notifications on social media is excellent, because devoting a lot of attention to natural social relations usually is a beneficial thing. Solution: So habits or even addiction can be okay, if what you are attracted to is something that benefits you. What if we created a media format that would give you valuable insights instead of empty calories? It might sound idealist, but I’m not arguing that we remove all sugar, just that we add some nutrition to the diet. One of the things that allowed Zuckerberg to build Facebook is that he was not thinking like traditional media. He did not want to give the content and the community on Facebook a specific purpose or moral. But we could rethink this and create a media format that would have human insight built into its DNA. 2. Replacing strong-link community relationships with weak-link affinity relationships: If someone in your family, your sports team, or whichever strong community you are part of disagrees with you, it’s likely you will live with it, and maybe even learn from it. If someone on social media disagrees with you, they will most likely be removed from your feed by an algorithm, and if not, you will probably unfriend them. Solution: We could build a system that would promote diversity in opinions and deliberately expose you to people who think differently than you do. It already works in a pure form in places like Quora and Wikipedia, but it could be much more advanced. It’s a matter of algorithms, but also of user interaction and experience design. Let’s say the starting point was an interest instead of who you know. You would pick, for instance, AI, and the network would show you opinions, insights, and ideas from a range of people. Facebook is the reverse.
You connect to a uniform group of people, and then you start to push interests to each other. 3. Training people on complicated rather than complex environments: If you train to become an expert in a complicated environment, you can likely predict what will happen in that system. However, in a complex system you can’t predict what will happen, so you have to adapt as you go along. An aircraft engineer can most likely predict how a complicated aircraft will behave. A biologist has very limited chances of predicting what a complex bumblebee will do. The Facebook news feed teaches us to browse and select, but not to improvise and adapt (thus taking away one of the powerful features of being human). Solution: I think it’s key to our future evolution that we start to build media formats that show us context and enable us to see systems instead of just individual parts. Otherwise, we won’t be able to solve the problems we are facing as a species. The good news is that after a decade of working with interactive media, I also think it’s entirely possible to do this if we want. We can see some early iterations of this with data visualization and advanced graphics. 4. The asymmetry of Human / AI relationships: We still don’t realize how immensely more powerful Facebook’s AI is than we are as human users. In one second, the Facebook AI learns more about how people communicate and how they make choices than an average person will learn in fifty years. So it’s not an equal relationship, but we tend to perceive it that way. Solution: This problem is the most complex and also the trickiest to solve. But I think it’s connected to the other three issues. If we address those issues, we will start to realize how we can build morally responsible Human / AI relationships. There is obviously room for a lot of expansion and improvement to my thoughts above.
But the point is that if we apply a new mindset to media, we will move rapidly from current “social media” to a new generation of “Insight media.”
How lessons from Artifical Intelligence can teach us to fix social media
5
how-lessons-from-artifical-intellingence-can-teach-to-fix-social-media-1664e46a42a0
2018-07-19
2018-07-19 20:35:19
https://medium.com/s/story/how-lessons-from-artifical-intellingence-can-teach-to-fix-social-media-1664e46a42a0
false
992
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Bjarke Calvin
Taking the world from social media to insight media with Duckling.co. Insight Media is new story format built on contextual and collective human thinking.
27c70b623cec
bjarkecalvin
824
675
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-17
2018-07-17 21:54:39
2018-07-20
2018-07-20 21:40:41
1
false
en
2018-07-20
2018-07-20 22:04:01
2
166649c15ac6
3.724528
2
0
0
As beginner data scientists, one of the words we hear very often is — ensemble, but what does this really mean? We mostly hear about it…
5
Understanding Ensemble Learning Ensemble Logic (Source) As beginner data scientists, one of the words we hear very often is — ensemble, but what does this really mean? We mostly hear about it from Kaggle competitions, although nobody has cared enough to really explain it to us. In this post, I will try my best to explain ensemble learning to you in very simple terms. What is an Ensemble in Machine Learning? An ensemble model is a collection of machine learning models that are each trained on a dataset and then combined by some logic into a single model which is more robust and accurate than each of the individual models. The intuition behind this is that each model follows a different approach or utilizes a different technique, thus providing new information and marginal increases to the performance of the overall model. In our daily lives, we create ensembles without even realizing it. Say we want to rate or form an opinion about the recently released Drake album, Scorpion. We can go to various music websites — say Rolling Stone, Metacritic, Pitchfork and AllMusic. Each of these websites provides a rating of Scorpion, and we can combine these ratings to decide on a personal rating to give the album. We can also decide to ask our friends who have listened to the album whether or not it is good. We can ask five friends, and if four of them say we should listen, we can infer that the album is probably good. This is basically what ensembles are about — combining opinions (or, in this case, models). Ensemble models are very popular and have been known to be very effective in Kaggle competitions. They are also a good way to achieve an appropriate bias-variance tradeoff. Types of Ensemble Models 1. Bagging: Bagging stands for Bootstrap AGGregating. As this implies, Bagging combines bootstrapping and aggregating into the formulation of an ensemble. Bootstrapping basically involves random sampling with replacement.
Bagging trains learning models on small samples of the population, and then aggregates the results from each of these models into one final model. Bagging helps to reduce the variance error in models. 2. Boosting: This is an iterative approach to combining models. Boosting performs several iterations of training on the dataset, and each successive learner improves on the weaknesses of the previous learner. Thereafter, the boosting technique combines all the individual learners into one single strong learner. Boosting helps to reduce bias error, although it may overfit the training data, leading to high variance when tested on other datasets. 3. Stacking: This is an ensemble approach that involves using a learner to combine the outputs collected from different individual learners on the same data set. This combination is then presented as the final model. 4. Blending: This is an ensemble approach that is similar to stacking, but utilizes a validation data set held out from the training data to make predictions. Ensemble Aggregation Techniques 1. Voting: In this technique, which is mostly for classification problems, each prediction by a machine learning model is treated as a vote. The final prediction for each data point is obtained from a combination of all votes. There are two types of voting methods — hard and soft: a. In the hard voting method, the mode of the predictions is the one selected by the ensemble. For instance, suppose there are three classifiers and two classes to predict, and for a data point, Classifier 1 predicts A, Classifier 2 predicts A, and Classifier 3 predicts B. Then the output of the ensemble for that data point will be the most popular prediction, which is A. b. In the soft voting method, the ensemble takes into consideration the level of certainty of each model.
Using the same example as above, assume that for the same data point: Classifier 1 predicts A with 55% probability, leaving B with 45%; Classifier 2 predicts A with 51%, leaving B with 49%; and Classifier 3 predicts B with 90%, leaving A with 10%. Averaging the predictions for each class, class A yields 38.7% while class B yields 61.3%. Even though more classifiers predicted class A, the ensemble will output class B because of the higher certainty of the average prediction. Essentially, soft voting considers the certainty level of each voter, rather than simply considering the binary choice of each voter. 2. Averaging: This is similar to the soft voting approach and can be used for both classification and regression problems. The average of the predictions from each regressor is used to make the final prediction. In the case of classification, the average of the probabilities, as shown above, is used. Some classifiers, such as the SVC, do not output probabilities by default; when using such classifiers, you have to set their probability parameter to True. 3. Weighted Average: This is just like the averaging method, except some models are assigned more weight or preference than others. For instance, if you know a model performs well on a particular problem, you will give it more weight than a regular-performing model. This also comes into play in real life: if we wanted to predict the outcome of a football game, predictions from some pundits would carry more weight than those from other pundits. Popular Examples of Bagging and Boosting Algorithms Bagging algorithms: 1. Random Forest 2. Bagging Regressor 3. Bagging Classifier. Boosting algorithms: 1. GBM 2. XGB 3. CatBoost 4. LightGBM 5. AdaBoost. Talk is cheap; show me the damn code: I have written code examples of each of the ensemble aggregation techniques, and of the Bagging Classifier and Regressor, in this notebook.
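The hard- and soft-voting arithmetic above can be reproduced in a few lines of plain Python, using the same three-classifier example:

```python
from collections import Counter

# per-classifier probabilities for classes A and B (the example above)
probs = [
    {"A": 0.55, "B": 0.45},
    {"A": 0.51, "B": 0.49},
    {"A": 0.10, "B": 0.90},
]

# hard voting: majority of each classifier's most likely class
votes = [max(p, key=p.get) for p in probs]
hard = Counter(votes).most_common(1)[0][0]

# soft voting: class with the highest average probability
avg = {c: sum(p[c] for p in probs) / len(probs) for c in ("A", "B")}
soft = max(avg, key=avg.get)

print(hard)  # A  (two of three classifiers pick A)
print(soft)  # B  (average: A ~ 0.387, B ~ 0.613)
```

This is also why soft voting can disagree with hard voting: one very confident classifier can outweigh two lukewarm ones.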
Understanding Ensemble Learning
22
understanding-ensemble-learning-166649c15ac6
2018-07-23
2018-07-23 14:05:44
https://medium.com/s/story/understanding-ensemble-learning-166649c15ac6
false
934
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Kelechi
Wayfaring to rediscovery
55112e71dd2b
Kelechukwu_
168
571
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-18
2017-10-18 05:39:29
2017-10-18
2017-10-18 05:39:30
1
false
en
2017-10-18
2017-10-18 05:39:30
1
1666eb425725
1.532075
0
0
0
null
5
Huawei has released two variants of its smartphone: the Mate 10 & Mate 10 Pro. Both are high-end phones with modern designs, big displays, and advanced software. Design and Display of the Mate 10 & Mate 10 Pro Huawei isn’t making use of its characteristic all-metal unibody design. The front and back of these phones are glass while the frame remains aluminium. The Mate 10 has a 5.9-inch Quad HD (2560×1440) LCD display with a 16:9 aspect ratio. The Mate 10 Pro, however, has a 6-inch Full HD+ (2160×1080) OLED display with an 18:9 aspect ratio. That makes the Mate 10 Pro slightly taller than the Mate 10. However, it is lighter. In addition, the Mate 10 Pro’s OLED display benefits from HDR10 support. Spec sheet of the Mate 10 & Mate 10 Pro Huawei created minor differences in design but made both equally powerful. The phones feature a Huawei-made Kirin 970 with 4GB or 6GB of RAM and 64GB or 128GB of internal storage. A 20MP and a 12MP camera are on the back, and an 8MP front-facing camera is on the front. A 4,000mAh battery powers the device, which runs Android 8.0 beneath EMUI 8.0. Camera of the Mate 10 & Mate 10 Pro There are two cameras on the back, both co-engineered with Leica. One of the cameras has an RGB sensor while the other has a monochrome sensor. Together they merge two versions of the same picture for better clarity and colour accuracy. They do not provide enhanced zoom like the iPhone or the Note 8. Huawei is also hyping up its neural processing unit. The NPU in the Mate 10 and Mate 10 Pro handles artificial intelligence tasks. This means the phones can deliver real-time responses to users when handling scene and object recognition in addition to translation. The AI will learn the user’s behaviour as time goes on.
Huawei Mate 10 & Mate 10 Pro are released
0
huawei-mate-10-mate-10-pro-are-released-1666eb425725
2018-03-13
2018-03-13 20:50:16
https://medium.com/s/story/huawei-mate-10-mate-10-pro-are-released-1666eb425725
false
353
null
null
null
null
null
null
null
null
null
Huawei
huawei
Huawei
1,229
TechViral
null
9e6bfa328b88
TechViral
2
30
20,181,104
null
null
null
null
null
null
0
null
0
35fc8020f1d3
2018-05-04
2018-05-04 21:33:11
2018-05-09
2018-05-09 18:22:00
1
false
en
2018-05-09
2018-05-09 18:24:53
6
16670dee0a7
5.815094
1
0
0
Using spaCy to break down complex clinical text
5
3. Two ways we used NLP grammar functions (and three ways they fail) Using spaCy to break down complex clinical text Photo: O.H. Designs This week, we’re sharing our efforts to develop a grammar-based approach to extracting information from clinical trial descriptions for potential patients. Our efforts have been concentrated on developing two functions which rely on grammar to grab the information we want: Tool 1: Burden Scheduling The first type of information we aimed to extract using grammar was information about scheduling. The questions of “at what times?”, “how often?” and “for how long?” are of primary concern for patients, and we previously noticed reliable indicator words for this sort of information such as “minutes”, “hours”, “days”, “months”, etc. in the trial descriptions. Here’s a typical passage that contains info about what we’re referring to as “burden scheduling”: “Patients receive paclitaxel IV over 3 hours and cisplatin IV on day 1, followed by topotecan IV over 30 minutes on days 1–3.” In cases like this one, simple use of indicator words is sufficient to extract the information we need. Just using the indicator words mentioned above, paired with a regular expression to capture the adjacent numerical information, our system yields the temporal information bolded above. From analyzing this information alone, we aim to tell the patient this trial will require at least 3 days of active treatments, with 3 hours of their time on day 1, and 30 minutes on the rest. But getting there is not going to be simple. Theoretically, these indicators function inside prepositional objects, describing how the scheduling of events in the trial will proceed. Grammatically, each one of these objects can be traced back to a parent verb — in our sentence, “receive.” Thus, collecting the words up from the temporally-indicated prepositional objects back to their parent verbs became the target action of our first grammar function — we call this extract_temporal.
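Before bringing grammar into it, the indicator-word-plus-regex step alone can be sketched without any parser. The indicator list and pattern below are simplifications of what the article describes, not the actual extract_temporal code:

```python
import re

# temporal indicator words plus the adjacent numeric info
# (e.g. "3 hours", "day 1", "days 1-3")
PATTERN = re.compile(
    r"(?:over|on|for)?\s*"
    r"(\d+(?:[\u2013-]\d+)?\s*(?:minutes?|hours?|days?|weeks?|months?)"
    r"|(?:minutes?|hours?|days?|weeks?|months?)\s*\d+(?:[\u2013-]\d+)?)",
    re.IGNORECASE,
)

sentence = ("Patients receive paclitaxel IV over 3 hours and cisplatin IV "
            "on day 1, followed by topotecan IV over 30 minutes on days 1-3.")

matches = [m.group(1) for m in PATTERN.finditer(sentence)]
print(matches)  # ['3 hours', 'day 1', '30 minutes', 'days 1-3']
```

This recovers the quantities, but not their relationships; tying each match back to its parent verb is exactly where the dependency parse comes in.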
To build extract_temporal we utilized the grammar parsing module called spaCy. It tells us about these dependency relations between words, but it’s unfortunately not always correct. The very complex clinical writing can throw off this tool, and grammatical errors in the writing — which often exist in noisy free text — make corresponding parse information invalid. For example, the challenging text of our example sentence leads spaCy to miscategorize “…over 3 hours…” as a direct object. To manage this issue, we’ve relaxed our code to just collect the words up from the indicators to their parent verbs. This helps extract more information, but reduces the tool’s precision (a topic we will address directly in our next post). So, what can we do with extract_temporal’s output? Well, the ideal output from our example would be: “… receive … over 3 hours … on day 1, … over 30 minutes on days 1–3.” Unfortunately, this is still not something ready to support information-seeking patients. The information we extract is almost certainly a fragment, confusing if presented out of context. It quantifies the desired scheduling information, although further analysis is required to summarize this. Our next steps here will be to build word-based rules that transform the quantities according to the combinations of prepositions (e.g. over, on, for, until, etc.) to get final patient-friendly output. This is a challenging task — a lot like date-time parsing — that will unfortunately take more time than our whirlwind pilot allows. However, we’ll be discussing in an upcoming blog post a solution that uses supervised machine learning and will definitely produce some workable output. Tool 2: Intervention For this topic, we noticed early on that strong indicators were often verbs like “receive” and “undergo”. These appeared to have very high precision in our analysis of example sentences across the trial description data. 
But unlike the scheduling topic temporal indicators, it’s difficult to enumerate all of the verbal-indicator possibilities out there. In thinking through how grammar can be used to bolster our indicators, we noticed that patient interventions are frequently described with proper nouns. For instance: “Patients will undergo an MRI scan with a maximal duration of 45 minutes.” From specific procedures to drug names, we found this pattern of proper nouns as interventions repeatedly in the clinical trial data. Moreover, we noticed that these treatments usually have an indicator verb nearby: in this case, “undergo.” To test out our hypothesis, we again enlisted the help of spaCy. We scanned every sentence in our data set for proper nouns, and used spaCy’s sentence parser to capture what we’re calling the “parent verb”, or the verb at the root of that proper noun’s clause. This experiment yielded a whopping 14,000 verbs! We’re currently working together to go down this list and mark off good candidates, but it shouldn’t take long to get some good coverage: by analyzing the results, we found that the most frequent 56 verbs accounted for 50% of the captured sentences! With our tested indicators on the way, we’re constructing a function which extracts the clearest and most concise information about what types of interventions patients will experience. After some experimenting with how much of the sentence to grab, we settled on the next function, extract_intervention. Starting at the indicator verb, this function captures any dependent subjects, e.g., “Patients”, and direct objects, e.g., “MRI scan”. 
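The parent-verb survey can be sketched the same way; the POS tags and head links below are hand-built stand-ins for spaCy's token.pos_ and token.head:

```python
from collections import Counter

# Sketch of the verb survey described above: for every proper noun, climb the
# dependency heads to the nearest verb and count it. Tags and head links are
# hand-built stand-ins for spaCy's token.pos_ / token.head.
def parent_verbs(sentences):
    counts = Counter()
    for tokens, tags, heads in sentences:
        for i, tag in enumerate(tags):
            if tag != "PROPN":
                continue
            j = i
            while tags[j] != "VERB" and heads[j] != j:
                j = heads[j]          # climb toward the root
            if tags[j] == "VERB":
                counts[tokens[j]] += 1
    return counts

toy = [
    (["Patients", "will", "undergo", "an", "MRI", "scan"],
     ["NOUN", "AUX", "VERB", "DET", "PROPN", "NOUN"],
     [2, 2, 2, 5, 5, 2]),
]
print(parent_verbs(toy))  # Counter({'undergo': 1})
```

Run over the full data set, a tally like this is what would surface the long-tailed list of candidate indicator verbs the article mentions.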
With this function, the example sentence above gets cut down to the most minimal information about intervention: "Patients undergo MRI scan." While the sentence above was fairly interpretable to begin with, the real power of this algorithm manifests in sentences such as this one: Participants in group A will undergo eradication of H-pylori using triple attack therapy according to O'Connor et al, 2013 with Proton pump inhibitor (eg, omeprazole 20 mg BID), Clarithromycin 500 mg BID, metronidazole 500 mg BID for 14 days, followed by confirmation of eradication by repeating the H-pylori stool antigen test. This simply becomes: "Participants group A undergo eradication H attack therapy." The output cuts the sentence down to our topic, but once again is difficult to interpret since it is just an extracted fragment. However, since this information is not like the scheduling information (quantified and numeric), we'll have to focus on standardizing extract_intervention's output syntax to complete the job. Going further with grammar functions would mean setting more syntax-based rules, but that will take a lot more time and special attention to case examples. Otherwise, our options will fall back to supervised machine learning. If applicable pre-existing tools exist for syntactic simplification, we might be able to apply them quickly. The bigger picture on rule-based systems While these tools have made it easy to get moving on our patient-facing feature extraction, we've found the grammar-based approach to have some major setbacks. This won't come as a surprise to anyone familiar with the NLP community: from 2003 to 2012, 75% of academic papers published on the topic of NLP used machine learning, 21% used a hybrid system, and only 3.5% used rule-based systems. And the distribution has likely only gotten more skewed towards machine learning since.
While we've learned a great deal from trying to construct the simplest possible mechanism for extracting information from unstructured texts, these were the setbacks that we found limit this approach: Improper grammar: A grammar-based approach can only work on sentences that… well, adhere to proper grammar. This was the biggest and least navigable setback, as we found that typos, fragments, and irregular grammar are endemic to our data set of trial descriptions. These irregularities inevitably cause a grammar-based system to fail. Tendency towards overfitting: Because each of these grammar-based rules is hypothesized and tested individually by our team, there is a natural tendency towards overfitting to the examples we see the most. A machine learning system can better account for the full diversity of a data set, while balancing the strategic use of frequent patterns. Interpretability of output: Extraction alone doesn't necessarily generate descriptions that are easy to understand. One can see this in the sample output from our second function: "Participants group A undergo eradication H attack therapy". While this is an improvement on the original sentence, it's still not quite what we want: high-quality output that is easy for patients to understand. For this we need a system that can not only extract relevant information, but also present that information in an accessible rhetorical style. Moving towards machine learning With this rule-based system requiring more time to build, we're looking towards machine learning to quickly develop a product that extracts patient-friendly information. These techniques ignore grammar and don't require intimate subject matter knowledge. That's not to say that we're discarding the rule-based system — we're anticipating that the valuable information it can extract will constitute the first step in our NLP pipeline for the text simplification moonshot. Stay tuned for another update soon about this effort!
3. Two ways we used NLP grammar functions (and three ways they fail)
5
nlp-grammar-functions-16670dee0a7
2018-05-09
2018-05-09 19:02:47
https://medium.com/s/story/nlp-grammar-functions-16670dee0a7
false
1,488
Can computers simplify clinical trial descriptions?
null
null
null
Clinical Trial NLP Challenge
null
clinical-trial-nlp
NLP,CLINICAL TRIALS,UCSF,DREXEL,DATA SCIENCE
ctsiatucsf
Machine Learning
machine-learning
Machine Learning
51,320
Amy Gottsegen
programmer // organizer // spiralizer
97f853f6e18
a.m.gottsegen
4
1
20,181,104
null
null
null
null
null
null
0
from tensorflow.keras import layers

# SeparableConv2DKeras and bilinear_upsample are Udacity course helpers; the
# stand-ins below are assumptions so the snippet is self-contained.
def SeparableConv2DKeras(filters, kernel_size, strides, padding, activation):
    return layers.SeparableConv2D(filters=filters, kernel_size=kernel_size,
                                  strides=strides, padding=padding,
                                  activation=activation)

def bilinear_upsample(input_layer):
    return layers.UpSampling2D(size=(2, 2), interpolation='bilinear')(input_layer)

def separable_conv2d_batchnorm(input_layer, filters, strides=1):
    output_layer = SeparableConv2DKeras(filters=filters, kernel_size=3,
                                        strides=strides, padding='same',
                                        activation='relu')(input_layer)
    output_layer = layers.BatchNormalization()(output_layer)
    return output_layer

def encoder_block(input_layer, filters, strides):
    # Creates a separable convolution layer using separable_conv2d_batchnorm().
    output_layer = separable_conv2d_batchnorm(input_layer, filters, strides=strides)
    return output_layer

def conv2d_batchnorm(input_layer, filters, kernel_size=3, strides=1):
    output_layer = layers.Conv2D(filters=filters, kernel_size=kernel_size,
                                 strides=strides, padding='same',
                                 activation='relu')(input_layer)
    output_layer = layers.BatchNormalization()(output_layer)
    return output_layer

def decoder_block(small_ip_layer, large_ip_layer, filters):
    # Upsample the small input layer using the bilinear_upsample() function.
    upsampled_layer = bilinear_upsample(small_ip_layer)
    # Concatenate the upsampled and large input layers
    output_layer = layers.concatenate([upsampled_layer, large_ip_layer])
    # Add some number of separable convolution layers
    output_layer = separable_conv2d_batchnorm(output_layer, filters)
    return output_layer

def fcn_model1(inputs, num_classes):
    # Encoder Blocks
    encoder1 = encoder_block(inputs, filters=8, strides=2)
    # 1x1 Convolution layer using conv2d_batchnorm().
    conv1 = conv2d_batchnorm(encoder1, filters=16, kernel_size=1, strides=1)
    # Decoder Blocks
    decoder1 = decoder_block(small_ip_layer=conv1, large_ip_layer=inputs,
                             filters=num_classes)
    return layers.Conv2D(num_classes, kernel_size=1, activation='softmax',
                         padding='same')(decoder1)

def fcn_model2(inputs, num_classes):
    # Encoder Blocks.
    encoder1 = encoder_block(inputs, filters=32, strides=2)
    encoder2 = encoder_block(encoder1, filters=64, strides=2)
    # 1x1 Convolution layer using conv2d_batchnorm().
    conv1 = conv2d_batchnorm(encoder2, filters=128, kernel_size=1, strides=1)
    # Decoder Blocks
    decoder1 = decoder_block(small_ip_layer=conv1, large_ip_layer=encoder1,
                             filters=32)
    decoder2 = decoder_block(small_ip_layer=decoder1, large_ip_layer=inputs,
                             filters=num_classes)
    return layers.Conv2D(num_classes, kernel_size=1, activation='softmax',
                         padding='same')(decoder2)

def fcn_model3(inputs, num_classes):
    # Encoder Blocks.
    encoder1 = encoder_block(inputs, filters=32, strides=2)
    encoder2 = encoder_block(encoder1, filters=64, strides=2)
    encoder3 = encoder_block(encoder2, filters=128, strides=2)
    # 1x1 Convolution layer using conv2d_batchnorm().
    conv1 = conv2d_batchnorm(encoder3, filters=128, kernel_size=1, strides=1)
    # Decoder Blocks
    decoder1 = decoder_block(small_ip_layer=conv1, large_ip_layer=encoder2,
                             filters=64)
    decoder2 = decoder_block(small_ip_layer=decoder1, large_ip_layer=encoder1,
                             filters=32)
    decoder3 = decoder_block(small_ip_layer=decoder2, large_ip_layer=inputs,
                             filters=num_classes)
    return layers.Conv2D(num_classes, kernel_size=1, activation='softmax',
                         padding='same')(decoder3)
9
null
2018-02-14
2018-02-14 15:00:55
2018-03-14
2018-03-14 15:50:02
26
false
en
2018-03-16
2018-03-16 04:36:08
1
1668c3a1361d
8.940566
6
0
0
In this article I'll go over my submission for the 4th and last project for the Robotics ND Term 1. This project consists of designing and…
1
Udacity Robotics ND Project 4–Follow Me The drone is using an FCN to follow a target person (in red) from the RGBD camera feed (bottom-right) In this article I'll go over my submission for the 4th and last project of the Robotics ND Term 1. This project consists of designing and training a Fully Convolutional Network (FCN) that provides scene understanding to a drone that has the mission of following a specific person in a simulated environment. The complete code for this submission can be found here: fjnunes/RoboND-DeepLearning-Project RoboND-DeepLearning-Project - RoboND Term 1 Deep Learning Project, Follow-Megithub.com Network Architecture For this project we are interested not only in classifying whether a target person is present in the input image but also in where that person is located, so the drone controller can take the necessary actions, like moving closer if the target is far away or turning if the target is off the center of the image. This problem is known as "semantic segmentation". Trained FCN performing semantic segmentation. Input image (left), ground truth (center) and FCN output (right) That is the motivation behind building a Fully Convolutional Network. In contrast to a classic convolutional network, which classifies the probability that a given class is present in the image, an FCN preserves the spatial information throughout the entire network, outputting a map of probabilities corresponding to each pixel of the input image. Following the suggestion from the class and notebook I created an FCN consisting of 3 parts: 1) an encoder network that transforms an image input into feature maps, followed by 2) a 1x1 convolution that combines the feature maps (similar to a fully connected layer) and finally 3) a decoder network that upsamples the result from the previous layer back to the same dimensions as the input image.
Example FCN comprised of Encoder block (left) followed by 1x1 Convolution (center) and Decoder block (right) I'll go over each one of the above modules in detail below. Encoder Block The first step in building our network is to add feature detectors capable of transforming the input image into a semantic representation. This is what the encoder block below does. It squeezes the spatial dimensions at the same time that it increases the depth (or number of filter maps), by using a series of convolution layers, forcing the network to find generic representations of the data. Example of an encoder network This is what the code for the encoder block looks like: 1x1 Convolution In between the encoder block and the decoder block is a 1x1 convolution layer that computes a semantic representation by combining the feature maps from the encoder. It acts like a fully connected layer, where the number of kernels is equivalent to the number of outputs of a fully connected layer. 1x1 Convolution combines feature maps (depth) preserving spatial information The reason for using a 1x1 convolution instead of a fully connected layer is that it preserves spatial information. With fully connected layers, on the other hand, all dimensions are flattened into a single vector, losing the original spatial structure of the input. One last reason for using convolution: it works for different input sizes, while fully connected layers are constrained to a fixed input size. This is what the code for the 1x1 convolution layer looks like: Decoder Block Finally, the decoder block upsamples the output from the 1x1 convolution back to the original input format, through the use of a series of transpose convolution layers. Example of a decoder network with a series of upsampling / transpose convolution layers I've also made use of "skip connections", allowing the network to use information from multiple resolution scales, resulting in more precise segmentation decisions.
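The claim that a 1x1 convolution remixes channels while preserving spatial dimensions can be checked with a quick NumPy sketch (shapes and values are illustrative):

```python
import numpy as np

# A 1x1 convolution is just a per-pixel linear map over channels: at every
# spatial location, the C_in channel values are mixed into C_out values.
# Shapes here are illustrative (H=4, W=4, C_in=3, C_out=8).
rng = np.random.default_rng(0)
feature_map = rng.standard_normal((4, 4, 3))   # H x W x C_in
kernel = rng.standard_normal((3, 8))           # C_in x C_out (a 1x1 kernel)

out = np.einsum('hwc,ck->hwk', feature_map, kernel)
print(out.shape)  # (4, 4, 8)
```

The output keeps the 4x4 spatial grid while the channel dimension changes from 3 to 8, which is exactly the property the text relies on.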
Skip connections example This is what the code for the decoder block looks like: Finding a proper model size After completing the main FCN building blocks I've moved on to designing the complete network architecture. My strategy to design a reasonable network was to start with a fairly simple (shallow) model and incrementally make it more complex by adding more layers. I've tried several configurations, observing the learning-curve patterns and results for each, and then settled on the network configuration that was just right for the dataset at hand. FCN Model #1 My first attempt was to simply use a single encoder block followed by a single decoder block: FCN Model #1 — 1 Encoder and 1 Decoder FCN Model #1 — Final score = 0.171411814517 No surprise here, the network performs poorly, getting a score of 17%. FCN Model #2 I've then added one more pair of encoder/decoder blocks: FCN Model #2–2 Encoders and 2 Decoders FCN Model #2 — Final score = 0.26155253301 We are moving in the right direction! This time the network performs better, getting a score of 26%. FCN Model #3 Once again, for my third attempt I've added yet another pair of encoder/decoder blocks: FCN Model #3–3 Encoders and 3 Decoders FCN Model #3 — Final score = 0.403661134044 Yay! I got a passing score of a little over 40%! The network this time performs really well, identifying the target correctly at different scales. FCN Model #4 Finally, I've tried adding a 4th block pair and got a worse score of 36%. For that reason I've settled on FCN Model #3. Choosing the Hyperparameters This was by far the most laborious part of this project. My strategy was to first start with arbitrary values and then later tweak them one-by-one, hoping to get to a passing score. I'll describe my journey below. Batch Size The whole idea of SGD is to estimate the error function (and its derivative) by randomly sampling a subset of the training data.
This process thus avoids the prohibitive cost of calculating the actual error, which requires processing the entire dataset. As a guiding principle I assume that the lower the batch size is, the noisier the training signal is going to be. On the flip side, with a higher batch size it will take longer to compute the gradient for each step. With this motivation in mind, I've tried first the smallest possible batch size: 1. Here is what I got: learning_rate = 0.01 — batch_size = 1 — num_epochs = 10 — validation_steps = 100 79s — loss: 0.0328 — val_loss: 0.0469 — final_score: 0.206051079873 The training signal seems to be stable enough and a batch size of 1 seems to be a suitable value for this dataset. Before moving forward, I've decided to check larger values, since larger mini-batch sizes can potentially have a performance advantage due to the GPU speed-up of matrix-matrix products over matrix-vector products. I've then found that a batch size of 16 halved the time to process one epoch from 80s to 40s, with a better final score: learning_rate = 0.01 — batch_size = 16 — num_epochs = 10 — validation_steps = 100 41s — loss: 0.0351 — val_loss: 0.0536 — final_score: 0.332494734489 Finally, I've increased the batch size further to 64 and that's when I noticed a performance hit, with each epoch taking up to 60s: learning_rate = 0.01 — batch_size = 64 — num_epochs = 10 — validation_steps = 100 58s — loss: 0.0376 — val_loss: 0.0465 — final_score: 0.265805194993 In conclusion, a batch size of 16 seems to be ideal as it results in a stable training signal and offers the best GPU performance.
Learning Rate To find the best value of the learning rate I've tried different values and then compared the loss curves: Training curves for different values of learning rate: 0.1 (left) — 0.01 (center) — 0.003 (right) Learning rate 0.1: loss: 0.0205 val_loss: 0.0268 final_score: 0.367229903911 Learning rate 0.01: loss: 0.0175 val_loss: 0.0249 final_score: 0.403661134044 Learning rate 0.003: loss: 0.0178 val_loss: 0.0308 final_score: 0.382985155838 Based on the results above I've selected the learning rate of 0.01 because it reached a lower loss and the best final score. Number of Epochs The reason I've plotted the validation loss along with the training loss was to get a sense of when the model would start to "overfit". A model is said to overfit when it has great precision on the training set but fails to generalize and ends up performing poorly on the validation set (AKA real life). I let the model train for 200 epochs (it took over 3 hours on a p2.xlarge!) and from the result below we can clearly identify the effects of overfitting: learning_rate = 0.01 — batch_size = 16 — num_epochs = 200 — validation_steps = 100 From the curve above we can see that after approximately epoch #50 the validation loss stays constant around 0.03 despite constant improvements in the training loss. Since training beyond that point would only introduce overfitting, I've decided to stop there. Testing in Simulation With the FCN trained it was time to put it into practice! Below is a video of the FCN deployed as input to the drone that follows the target person: Video of the trained FCN in practice FCNs can only learn from what is in the data It is also worth noting that, despite the fact that the FCN above does a great job following a person, it would not work for following a dog, cat, car, etc. This is due to the fact that the model was trained with labeled examples covering only the following 3 categories: 1) the Hero (target person), 2) other people, 3) everything else.
To create a model that works with more categories it is necessary to collect and label images with enough examples for each class, with different poses, distances and lighting conditions. With such a dataset on hand it is then possible to train a new model using the same technique described here. This new model would then output more filters, one for each class, which could be used by the drone to follow any one of the classes. Future Enhancements Since the search for hyperparameters was an extremely tedious process, I would like to try some sort of automated solution like the Amazon SageMaker Hyperparameter Optimization feature and let it fine-tune the parameters automatically. I would also try adding dropout to the model to prevent overfitting, as well as pooling layers. Pooling would be interesting as it would provide a better way to reduce the spatial dimensions without losing as much information as convolution strides. Conclusion This was a really exciting project and I've started to appreciate how hard it is to architect and train a deep model. Also I've learned the importance of good data. It became clear that the model is only as good as the data it is trained on. Collecting good data that covers all scenarios that the model needs to perform well in real life is as challenging as architecting and training the model. Overall I wish I got a better score, but unfortunately my time is limited and I'm eager to move to Term 2!
Udacity Robotics ND Project 4–Follow Me
32
udacity-robotics-nd-project-4-follow-me-1668c3a1361d
2018-04-25
2018-04-25 03:39:37
https://medium.com/s/story/udacity-robotics-nd-project-4-follow-me-1668c3a1361d
false
1,826
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Fernando Jaruche Nunes
null
7ec560909e77
fernandojaruchenunes
46
36
20,181,104
null
null
null
null
null
null
0
null
0
13c55f1f174
2017-11-07
2017-11-07 16:29:26
2017-11-07
2017-11-07 16:36:41
1
false
en
2017-11-07
2017-11-07 16:36:41
0
16692fa91962
2.803774
3
0
0
#FOMLA 2017
5
Use your Voice to get Personal #FOMLA 2017 The use of voice-activated digital assistants for searches, communications, and commerce has grown and is expected to reach 4 billion users by 2021. This new technology imposes profound changes on the way brands are able to relate to their consumers. On Monday, October 30, during a session at Festival of Media Latam 2017 in Miami, iProspect’s Global CMO, Misty Locke, alongside Alejandro Betancourt Buzás, Associate Brand Director, P&G, shed light on the next revolution in the relationship between brands and consumers. The use of digital assistants (i.e. Siri, Alexa, Google Assistant, and Cortana) to activate voice search commands is already changing shopping behavior and delivering a true personalized service. The exponential growth of digital assistants occurs in a time of great demand for customization (70% of consumers expect a personalized approach), increased ad blocking (more than 30% per year), and an excess of digital content, exemplified by the average number of 33 apps on a smartphone, yet users spending 80% of the time on only three. At iProspect, we firmly believe the use of digital assistants offer the relief consumers are seeking by providing one central, cross-device entry point to their digital lives. From quick answers, appointment reminders, and booking a hotel room, to buying groceries, getting traffic updates or ordering a car service, digital assistants make it easy to accomplish a variety of tasks seamlessly without switching between apps or devices. Digital Assistants, powered by artificial intelligence and machine learning, are seeing massive adoption in developed markets, mostly because of the wide penetration of smartphones, with many of them already coming with pre-installed digital assistants (iPhones with Siri and Android with Google Assistant.) 
This is especially true for Latin America, a region that has more than 70% smartphone penetration and one of the fastest adoption curves for mobile devices. According to a Forrester survey, the use of voice search and digital assistants is increasing across all demographic ranges, including those over 50 years of age. It is estimated that within 2 years 50% of all searches will be through voice, and 30% of web browsing will be screenless. As consumers choose voice interaction more often, this will lead to a simpler and more assertive user interface. Over the past decade, brands have become obsessed with creating compelling advertising content and websites. Desktop and mobile screens will continue to be important points of interaction, but brands need to start thinking about both screenless advertising and screenless user experiences where the only interaction is the voice of the digital assistant. And the key to success is to focus on the machine learning capability of the digital assistant, not on the device per se, whether it is Echo, Dot, Google Home, etc. In this new world, four factors will be key. The first is relevance. Digital assistants allow brands to make one-to-one connections, delivering only what is relevant for that consumer in the exact moment. The second is localization: voice searches on mobile devices tend to have three times more local responses. This is because digital assistants are designed to solve problems, and in most scenarios, the easiest solution is the one closest to the consumer. The third factor is the assistants' ability to understand context, not just key words. Understanding the context is essential for establishing a de facto conversation where users want the assistants to be faster and even to anticipate their desires. Finally, the fourth key factor is how purchasing processes are simplified so that a voice interaction leads to purchase. Some brands have been doing this successfully, especially in daily goods purchases.
Why add an item to the shopping list when we can buy it right away with a simple sentence? Many marketers missed the rise of mobile as a channel when it revolutionized the advertising industry a decade ago. Today’s best marketers are already optimizing and building voice strategies in preparation for the new screenless revolution. It is becoming quite clear that those marketers who are not implementing these changes now may find themselves lost and without a voice in this new advertising landscape.
Use your Voice to get Personal
15
use-your-voice-to-get-personal-16692fa91962
2018-05-09
2018-05-09 02:07:10
https://medium.com/s/story/use-your-voice-to-get-personal-16692fa91962
false
690
The world’s most influential performance agency
null
iprospect
null
iProspect
nate.nicely@iprospect.com
iprospect
MARKETING,DIGITAL MARKETING,MACHINE LEARNING,ADVERTISING,TECH
iprospectglobal
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mao Herman
null
535601c0af2b
mao.herman
1
3
20,181,104
null
null
null
null
null
null
0
cv2.RETR_EXTERNAL — get only outer contours. cv2.CHAIN_APPROX_TC89_L1 — use the Teh-Chin chain approximation algorithm (faster). time = ((self.start_time + int(frame_number / self.fps)) * 100 + int(100.0 / self.fps) * (frame_number % self.fps))
2
47acf312108b
2017-09-11
2017-09-11 15:35:06
2017-09-11
2017-09-11 19:59:27
5
false
en
2018-07-11
2018-07-11 16:31:13
6
166937911660
5.274843
381
21
0
Today we will learn how to count road traffic based on computer vision and without heavy deep learning algorithms. For this tutorial, we…
5
Tutorial: Making Road Traffic Counting App based on Computer Vision and OpenCV Today we will learn how to count road traffic based on computer vision and without heavy deep learning algorithms. For this tutorial, we will use only Python and OpenCV, with the pretty simple idea of motion detection with the help of a background subtraction algorithm. All code you can find here Here is our plan: Understand the main idea of background subtraction algorithms used for foreground detection. OpenCV image filters. Object detection by contours. Building a processing pipeline for further data manipulation. And this is the result: Background subtraction algorithms There are many different algorithms for background subtraction, but the main idea behind them is very simple. Let's assume that you have a video of your room, and on some of the frames of this video there are no humans or pets, so basically it's static; let's call it background_layer. So to get the objects that are moving in the video we just need to: foreground_objects = current_frame - background_layer But in some cases, we can't get a static frame, because lighting can change, some objects will be moved by someone, there is constant movement, etc. In such cases we save some number of frames and try to figure out which of the pixels are the same for most of them; these pixels then become part of the background_layer. The algorithms differ mainly in how they build this background_layer and in the additional filtering used to make the selection more accurate. In this lesson, we will use the MOG algorithm for background subtraction, and after processing, it looks like this: Original frame on the left, subtracted foreground with MOG (with shadow detection) on the right. As you can see there is some noise on the foreground mask, which we will try to remove with some standard filtering techniques. Right now our code looks like this: Filtering For our case we will need these filters: Threshold, Erode, Dilate, Opening, Closing.
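The background_layer idea can be sketched in a few lines of NumPy (a real pipeline would use OpenCV's MOG implementation; this toy version just takes a per-pixel median over saved frames):

```python
import numpy as np

# Toy background subtraction, assuming grayscale frames as uint8 arrays:
# the background layer is the per-pixel median over a buffer of frames, and
# foreground pixels are those that differ from it by more than a threshold.
def foreground_mask(frames, current, threshold=25):
    background_layer = np.median(np.stack(frames), axis=0)
    diff = np.abs(current.astype(np.int16) - background_layer.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255

# Synthetic example: a static 8x8 scene, then a bright "object" appears.
static = np.full((8, 8), 100, dtype=np.uint8)
frames = [static.copy() for _ in range(10)]
current = static.copy()
current[2:4, 2:4] = 200                 # the moving object
mask = foreground_mask(frames, current)
print(mask[2, 2], mask[0, 0])  # 255 0
```

The median buffer is exactly the "pixels that are the same for most frames" trick described above; MOG replaces the median with a per-pixel mixture of Gaussians.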
Please follow the links and read about each of them to see how they work (so as not to copy/paste here). Now we will use them to remove some noise on the foreground mask. First, we will use Closing to remove gaps in areas, then Opening to remove 1–2 px points, and after that dilation to make objects bolder. And our foreground will look like this Object detection by contours For this purpose we will use the standard cv2.findContours method with params: On output, we add some filtering by height and width, and add a centroid. Pretty simple, yeah? Building processing pipeline You must understand that in ML and CV there is no one magic algorithm that does everything; even if we imagine that such an algorithm exists, we still wouldn't use it because it would not be effective at scale. For example, a few years ago Netflix created a competition with a prize of 3 million dollars for the best movie recommendation algorithm. And one of the teams created one; the problem was that it just couldn't work at scale and thus was useless for the company. But still, Netflix paid 1 million to them :) So now we will build a simple processing pipeline; it's not built for scale, just for convenience, but the idea is the same. As input, the constructor will take a list of processors that will be run in order, each doing part of the job. So let's create a contour detection processor. It just merges together our bg subtraction, filtering and detection parts. Now let's create a processor that will link detected objects on different frames and will create paths, and also will count vehicles that got to the exit zone. This class is a bit complicated, so let's walk through it by parts. The green mask on the image is the exit zone, where we count our vehicles. For example, we will count only paths that have a length of more than 3 points (to remove some noise) and whose 4th point is in the green zone. We use masks because they're efficient for many operations and simpler than using vector algorithms.
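The pipeline described above (a constructor taking a list of processors that run in order) can be sketched like this; class and processor names are illustrative, not the project's exact code:

```python
# Minimal sketch of the processing pipeline: the constructor takes a list of
# processors, and each frame's context dict is passed through them in order.
class PipelineRunner:
    def __init__(self, pipeline):
        self.pipeline = pipeline

    def run(self, context):
        for processor in self.pipeline:
            context = processor(context)
        return context

# Toy processors: "detect" objects, then count them.
def detector(context):
    context["objects"] = [(10, 20), (30, 40)]   # fake centroids
    return context

def counter(context):
    context["vehicle_count"] = len(context["objects"])
    return context

result = PipelineRunner([detector, counter]).run({"frame": None})
print(result["vehicle_count"])  # 2
```

Each real processor (contour detection, path linking, CSV writing, visualization) slots in the same way, reading and enriching the shared context.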
Just use a "binary and" operation to check that a point is in the area, and that's all. And here is how we set it: Now let's link points into paths On the first frame, we just add all points as new paths. Next, if len(path) == 1, for each path in the cache we try to find the point (centroid) from the newly detected objects which has the smallest Euclidean distance to the last point of the path. If len(path) > 1, then with the last two points in the path we predict a new point on the same line, and find the minimum distance between it and the current point. The point with minimal distance is added to the end of the current path and removed from the list. If any points are left after this, we add them as new paths. We also limit the number of points in each path. Now we will try to count the vehicles entering the exit zone. To do this we just take the 2 last points in the path and check that the last of them is in the exit zone and the previous one is not, and also check that len(path) is bigger than the limit. The part after else prevents back-linking of new points to points in the exit zone. And the last two processors are a CSV writer to create the report CSV file, and visualization for debugging and nice pictures. The CSV writer saves data by time, because we need it for further analytics. So I use this formula to add additional frame timing to the unix timestamp: so with start time = 1 000 000 000 and fps = 10 I will get results like this: frame 1 = 1 000 000 000 010, frame 2 = 1 000 000 000 020, … Then after you get the full CSV report you can aggregate this data as you want. Full code of this project Conclusion So as you see it was not as hard as many people think. But if you run the script you will see that this solution is not ideal: it has a problem with overlapping foreground objects, and it also doesn't classify vehicles by type (which you will definitely need for real analytics). But still, with a good camera position (above the road), it gives pretty good accuracy.
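The linking rule for len(path) > 1 (extrapolate from the last two points, then match the nearest detection) can be sketched as follows; helper names are illustrative:

```python
import math

# Sketch of the path-linking step described above: for a path with at least
# two points, predict the next point by linear extrapolation of the last two,
# then link the detected centroid closest to that prediction.
def predict_next(path):
    (x1, y1), (x2, y2) = path[-2], path[-1]
    return (2 * x2 - x1, 2 * y2 - y1)   # same line, same step

def link(path, detections):
    target = predict_next(path) if len(path) > 1 else path[-1]
    return min(detections, key=lambda p: math.dist(p, target))

path = [(0, 0), (10, 0)]                # moving right, 10 px per frame
detections = [(21, 1), (50, 50), (5, 30)]
print(link(path, detections))  # (21, 1)
```

The extrapolated point (20, 0) makes the detection at (21, 1) the obvious match, even though (5, 30) is closer to the path's raw last point in one axis.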
That tells us that even small and simple algorithms, used in the right way, can give good results. So what can we do to fix the current issues? One way is to add additional filtering that tries to separate overlapping objects for better detection. Another is to use more complex algorithms like deep convolutional networks (which I will cover in the next article). Support If you like my articles, you can always support me with some beer-money https://paypal.me/creotiv Get interesting articles every day — Subscribe on Telegram Channel Next article Tutorial: Counting Road Traffic Capacity with OpenCV Today I will show you very simple but powerful example of how to count traffic capacity with the algorithm that you can…medium.com
Tutorial: Making Road Traffic Counting App based on Computer Vision and OpenCV
2,573
tutorial-making-road-traffic-counting-app-based-on-computer-vision-and-opencv-166937911660
2018-07-11
2018-07-11 16:31:13
https://medium.com/s/story/tutorial-making-road-traffic-counting-app-based-on-computer-vision-and-opencv-166937911660
false
1,177
The best about Machine Learning, Computer Vision, Deep Learning, Natural language processing and other.
null
anikishaev
null
Machine Learning World
creotiv@gmail.com
machine-learning-world
MACHINE LEARNING,COMPUTER VISION,DEEP LEARNING,NATURALLANGUAGEPROCESSING,DATA SCIENCE
creotiv
Machine Learning
machine-learning
Machine Learning
51,320
Andrey Nikishaev
Entrepreneur, Software Developer, Machine Learning and Computer Vision Researcher. Subscribe to my channel for awesome new things https://t.me/ml_world
35cfe7cfb5f1
a.nikishaev
3,189
1,327
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-06
2017-09-06 11:58:21
2017-09-06
2017-09-06 23:27:26
2
false
en
2017-09-06
2017-09-06 23:27:26
1
166a1f12e3a8
2.873899
1
0
0
There’s no getting around it, text analytics is fast becoming a must-have for businesses. With so much hype, you’d be forgiven for thinking…
3
Get your head around Text Analytics: Methods There’s no getting around it, text analytics is fast becoming a must-have for businesses. With so much hype, you’d be forgiven for thinking there would be a wealth of information available on the topic. Unfortunately, this just isn’t the case. So we thought we’d help to enlighten our readers, starting with a discussion on the four key approaches: linguistic, statistical, supervised and unsupervised. Linguistic Linguistic text analysis is, in short, the who and the what. It relies on a set of language rules to identify the players in the text and what is happening. Unfortunately, language usage can fluctuate even within small geographical areas (for example, the term “sick” in Australia can be used as a positive sentiment term). Combine this with natural language borders, and you can see that the rulesets require constant fine-tuning; hence, this method has fallen out of favour. Modern implementations do exist, focusing on using machine learning to automatically build the rulesets. Statistical Nowadays, most text analytics platforms favour statistical analysis to identify the various components. As the name suggests, this method focuses on mathematical relationships between terms, with metrics such as frequency and co-occurrence providing contextual information, thus removing the need for language rules. The statistical relationships between terms in a dataset allow us to gain some insight into the data. For instance, if your product is being mentioned frequently alongside the term “price”, then you can see that this may be a concern. Supervised A common approach in text analytics is to track a group of terms with a strong statistical relationship in the data, often referred to as topics, so we can start to identify trends. This can be accomplished by picking a set of topics you wish to track and asking a computer to look for them in the data. 
This type of methodology, known as supervised text analytics, involves specifying the topics and seeing how they change across multiple datasets and over time. For example, results from NPS surveys for a food product are likely to talk about price, taste, and health impacts, so in a supervised approach we ask the computer to identify these issues and track them over time. A supervised approach is not without its drawbacks, though. It is often time-consuming to set up, because you need a large amount of training data labelled with the topics you want to track. This allows the machine to learn how to identify these topics in unseen text. By far its biggest drawback, though, is its inability to identify emerging issues that haven’t been manually identified by the user during training. Unsupervised This brings us to unsupervised text analytics. In this instance, there is no user input other than the data itself, and the machine builds the topics for you. The downside to this method is that tracking specific topics can become difficult, as the topics identified by the machine may change from dataset to dataset, creating an apples-and-oranges situation. Having said that, unsupervised text analysis is much quicker than supervised, as it doesn’t require a manual step by the user to prepare labelled training data followed by extensive machine training. Additionally, it analyses the full gamut of issues in the dataset, so you won’t miss emergent ones, keeping you abreast of the market. Our Approach At Kapiche, we have always had a strong focus on unsupervised, statistical text analysis. This gives you quick and simple analysis of all issues arising from your dataset. For long-term tracking, we are about to roll out new functionality enabling you to customise and freeze the created topic models. We see this as an ideal intersection of the supervised and unsupervised worlds. 
With this new feature, you will be able to identify trends around your key metrics without missing new developments. Kapiche is a fast and simple unstructured data analytics product helping businesses make better decisions with data. Try it free today!
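As a toy illustration of the statistical, co-occurrence-based analysis described above (my own sketch, not Kapiche's implementation), here is how term co-occurrence counts can be gathered from a set of documents:

```python
from collections import Counter
from itertools import combinations

def cooccurrences(docs):
    """Count how often each pair of distinct terms appears
    together in the same document."""
    pairs = Counter()
    for doc in docs:
        # sort so each pair has one canonical (a, b) ordering
        terms = sorted(set(doc.lower().split()))
        pairs.update(combinations(terms, 2))
    return pairs

docs = [
    "price too high for this product",
    "product price keeps rising",
    "love the taste of this product",
]
counts = cooccurrences(docs)
```

High counts for a pair like ("price", "product") are exactly the kind of statistical signal the article says can surface a concern without any hand-written language rules.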
Get your head around Text Analytics: Methods
1
get-your-head-around-text-analytics-methods-166a1f12e3a8
2018-09-17
2018-09-17 05:27:41
https://medium.com/s/story/get-your-head-around-text-analytics-methods-166a1f12e3a8
false
660
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ryan Stuart
Founder & CEO @KapicheOfficial, currently participating in @murudau.
618f74e70ff6
rstuart85
45
49
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-16
2018-05-16 06:31:42
2018-05-16
2018-05-16 06:38:20
2
false
en
2018-05-16
2018-05-16 06:38:20
2
166bfbfacf08
2.055031
0
0
0
Solar Power based power frameworks are a guide that is surely heightening year by year that passes by. It should be understood that this…
5
Solar Power Industry Analysis-The Next Big Thing to Look Out for In the Future Solar power systems are a trend that is clearly growing with every year that passes. While this is most evident in the United States, countries around the globe are making similar pushes into solar power. There are clear signs that solar energy use is growing significantly in several parts of the world, including major industrial countries. In the U.S. in particular, solar power has been used to supply around 200,000 private residences on a medium- to large-scale basis. In addition, more than 10,000 further homes across a notable number of communities have solar photovoltaic systems installed. This data comes from a recent consumer survey carried out in the United States among homeowners regarding their adoption of solar power. The price quoted by installers of new solar power systems is perhaps the most important factor in solar adoption, more so than any technical issues around installation and performance. A Solar Power Industry Analysis will help you estimate how profitable this energy source will be in the future. Along these lines, home solar power systems can be a productive investment, not only saving you money on your power bills but also generating income by selling surplus power. How Cool Is That! This can significantly affect the economics of such a system and reduce its payback time by a significant amount. The U.S. Energy Policy Act of 2005 says that public energy utilities must make this benefit available to customers who request it. In some cases credits are given instead of cash for the surplus power delivered. 
Either way, the owner of the solar panel system ends up a winner. State-by-state rules for exactly how this applies to home solar may vary, as may the way the home’s solar generator is connected to the distribution grid, so make sure to check in each individual case. Homeowners are frequently also eligible for tax rebates and incentives for installing solar power systems. It is definitely worth checking the rules applied by individual states and local authorities regarding these credits. It is all but certain that solar power energy and Consumer Electronics Industry Analysis are going to be the next big thing to look out for rather than relying on only
Solar Power Industry Analysis-The Next Big Thing to Look Out for In the Future
0
solar-power-industry-analysis-the-next-big-thing-to-look-out-for-in-the-future-166bfbfacf08
2018-05-16
2018-05-16 06:38:21
https://medium.com/s/story/solar-power-industry-analysis-the-next-big-thing-to-look-out-for-in-the-future-166bfbfacf08
false
443
null
null
null
null
null
null
null
null
null
Solar Energy
solar-energy
Solar Energy
7,542
Black Panther
null
322206359bcc
blackkpanther2018
2
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-01
2018-05-01 13:59:40
2018-05-01
2018-05-01 14:04:14
3
false
en
2018-05-01
2018-05-01 14:19:49
4
166c6237966d
3.95
2
1
0
I have the great honor of knowing ex-IBM Fellow Jeff Jonas, the co-Founder, CEO and Chief Scientist of Senzing. Apart from being…
3
Testing Senzing’s Entity Resolution Workbench I have the great honor of knowing ex-IBM Fellow Jeff Jonas, the co-Founder, CEO and Chief Scientist of Senzing. Apart from being exceptionally talented, Jeff is also an amazing human being who is always willing to help others. I have personally been the beneficiary of his generosity and continue to benefit from his counsel every day. Jeff is one of the main reasons why I have chosen to follow a technical career path at IBM. Jeff left IBM in 2016 to start a new venture called Senzing. Senzing has built the first real-time AI software product for Entity Resolution (ER), a space in which Jeff is the world’s #1 expert. Senzing’s new offering has huge implications in the post-GDPR world and has the potential to increase trust in Blockchain networks. Jeff recently gave a keynote at the IBM Think conference where he described what Senzing does and its potential applications (including as part of IBM Blockchain). I strongly recommend watching it. When I spoke with Jeff yesterday, he asked that I give Senzing’s ER workbench a try and provide feedback. So that is what I did earlier today. Here are my first impressions. Questions for Jeff Currently Senzing only runs on Windows. When will it be offered on other operating systems (especially MacOS)? Why do I need to download the workbench? Can I not just have a cloud-based version? Getting started I found the workbench very easy to use. The instructions were clear and the steps to get from start to finish were intuitive. I uploaded a CSV file of all my Google contacts. I could not believe I had 2,451 contacts in my Google contact list! Clearly I have a lot of spring cleaning to do. The CSV file upload process was straightforward and quick. On that point though, the workbench currently only works with CSV files. Any plans to directly connect to other data sources? The ER process is very quick. After uploading your data, ER is a one-click process. Very cool. 
The user interface could use an upgrade. Results of the ER process The workbench identified 32 duplicates, 4 possible duplicates and 6 possibly related entities. The results had a lot more detail than Google Contacts’ duplicates function provides. Interestingly, of the 6 possibly related entries, entities 5 and 6 both related to my wife. I was a little surprised that the workbench did not merge them and give me 5 possibly related entities instead of 6. Apart from this, it was really interesting to see how the workbench linked different entities. Single Search Function The Single Search Function (SSF) is very cool. I only tried it with the name field, since it was the most intuitive one for me to try it with. One potential bug(?) I noticed is that you have to type the full name of a contact in order for the SSF to work correctly. Partial name (just first or last name) searches resulted in (0 results found) errors. Also, I wish there was an option to merge the various contacts. While this may not be the focus of the workbench, sometimes it is useful if you want to clean up an address book. For example, in Google Contacts, after it displays the duplicates, it gives you the option of merging all contacts. That gives the process a logical end point IMHO. Compared to Google Contacts’ duplicates function It is probably unfair to compare Senzing’s workbench to Google Contacts’ duplicates function, but I did it and I might as well write about it. Google Contacts identified 8 duplicates (Senzing identified 32 duplicates, 4 possible duplicates and 6 possibly related entities). The results were not nearly as sophisticated as those from Senzing in terms of the information provided. Also, Google got several duplicates wrong. Some were clearly not related. For example, for one contact, I had an old phone number and a new phone number saved. 
Even though the person who now owned the “old” phone number was clearly different from my friend (based on a Google update they had posted about where they were and a new photograph), Google suggested they might be the same person. Senzing did not. Give it a try yourself Overall, I really enjoyed taking Senzing’s ER workbench for a test ride. You can too. Go watch Jeff’s IBM Think keynote to get a password to download it and take it for a spin! Jeff’s answers to my questions 1. MacOS … being tested now, should be released in May 2018. 2. Cloud version … We send code. Our users run the software on their cloud or on-prem. We are proud to say “we don’t have any of your data.” We think this is a feature. 3. Directly connecting to data sources … what you see now is our Workbench version 1. We have a spectacular number of features coming — including connectors. 4. With regard to the potential bug of not being able to search for just first or last name … given our first product is focused on GDPR Compliance namely delivering Single Subject Search — requiring a full name is a feature that prevents browsing i.e., searching for everyone named “Barry”. Point being, if you don’t know the person’s whole name, there is a real risk the user is simply “browsing.” Privacy by Design (PbD)!
Testing Senzing’s Entity Resolution Workbench
2
testing-senzings-entity-resolution-workbench-166c6237966d
2018-05-03
2018-05-03 15:02:53
https://medium.com/s/story/testing-senzings-entity-resolution-workbench-166c6237966d
false
901
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Venky Rao
I blog about various aspects of Analytics — Predictive, Geospatial, Data Science, Statistics, etc.
413dabe43c88
venkatesh.rao
0
2
20,181,104
null
null
null
null
null
null
0
null
0
18c0ece769da
2017-09-25
2017-09-25 11:01:09
2017-09-25
2017-09-25 11:35:29
4
false
en
2017-09-26
2017-09-26 10:51:15
6
166cb8e34596
8.881132
121
28
0
In Estonian mythology, a Kratt is a creature brought to life from hay or household objects. Estonia now faces the very real challenge of…
5
Estonia considers a ’kratt law’ to legalise Artificial Intelligence (AI) In Estonian mythology, a kratt is a creature brought to life from hay or household objects. Estonia now faces the very real challenge of regulating the rise of autonomous machines in order to support AI entrepreneurs and protect the public interest. The mythological creature kratt in an Estonian film “November”, Homeless Bob Production 2016 Estonia is known for its ’firsts’. We were the first country to declare internet access a human right, the first country to hold a nationwide election online, the first country in Europe to legalise both ride sharing and delivery bots, and — of course — the first country to offer e-Residency. Countries around the world now face the challenge of understanding the rise of Artificial Intelligence, which is increasingly affecting the daily lives of their populations. So which country will be the first to develop a comprehensive legal framework that ensures the technology can be developed in an ethical and sustainable way? We think the answer should once again be Estonia. The work to understand AI in Estonia started with our self-driving vehicles task force. However, it quickly became clear that its scope was too limited, as working on traffic regulations is simply not enough given the far-reaching implications of the technology. Regulating mobility on its own will only lead to more complexity and possible misunderstandings for society. Instead, we need to streamline the whole process and legalise AI. To introduce better regulations, society must also play a role in co-creating the necessary framework, so that the end result is understandable for everyone. The task force has suggested four different options for regulating AI in a user-friendly way. The work, started in November 2016, is led by the task force together with the Ministry of Economic Affairs and Communications and the Government Office. 
Experts from all walks of life have been included in the discussions on how to solve the problem of accountability in machine learning and deep-learning algorithms. These algorithms are very different from average programs because they lack the usual ‘if-then’ type of logic. In the event of an incident, even the creators of the algorithm may not know exactly where the mistake occurred, because the decision-making of these systems is intuitive rather than rule-based. These ‘black box’ type algorithms possess great potential for value creation in a digital society, but are legally hard to define. The liability question is not technically difficult; it is an ethical dilemma. Technically there are several existing options to choose from: personal, producer, service provider and even governmental liability, where the government covers the cost. But the focus of this question is ethics. When my child is harmed by a self-driving car or another type of robot, I really want to point my finger at somebody and say: “You are guilty and will go to prison.” At a time when algorithms are ensuring the safety of our society, and doing it far better than our current sets of rules, we have to accept emotionally that there is not someone to blame in every case — just as in the case of most train accidents today. Train drivers cannot always bear the burden of blame when the laws of physics sometimes make it impossible for them to avoid accidents. Testing self-driving vehicles has been legal on all public roads in Estonia since 2 March 2017. To define the scope and domain of this process even better, it is important to understand that we are currently working on narrow AI, while taking into account the possibilities of general AI. The aim is not to solve the issue of super-intelligence, which is still far off and a far more complex issue. So for the sake of clarity — we are not working on a ‘Terminator Skynet’ scenario. Rather, we are solving the problem of liability in systems that are already quite common (e.g. 
financial bots). The number of these kinds of expert systems is growing and the lack of legal clarity in this domain is a major obstacle in their implementation in the physical world. The easiest examples of this are self-driving vehicles, but we must also consider smart refrigerators, some big data analytics tools, predictive algorithms of various natures etc. In this context, the aim should be to give representative rights to algorithms. But rights also mean responsibilities. Agenda The legalisation of AI will have a deep and far-reaching impact on the everyday lives of our citizens. For the local economy, this means pulling down barriers for the further digitalisation of our industries, bringing in new investment, and creating new jobs in ICT, while also abolishing jobs at the same time. Legal clarity is the biggest obstacle for wide scale implementation. Potential investors need to know what will happen when things go sour. Local entrepreneurs and civil society may start to experiment with new technologies and service models, thus actually enabling the next industrial revolution. Legalising AI will remove barriers to enable the next industrial revolution For the citizens it means lots of new types of services and products that are easy to use and remove a lot of mundane tasks from their lives. It also means more free time and a rise in their productive time. It will be our choice how to make the best use of this. The global perspective is different. Estonia as a country with 1.3 million people is a perfect test ground for new and bold ideas, and a place to experiment with relatively small capital cost. At the same time, being bold and implementing new ideas also means that the local culture is open-minded towards failure. The key is to learn from each failure. We see Estonia as a pathfinder, constantly moving in uncharted territories. The practical experience and know-how from these experiments will be our contribution to the global discussion. 
This way, governments with a far bigger headcount can avoid strategic mistakes. In addition, Estonia is the first country to introduce e-Residency — a programme that is successfully attracting skilled entrepreneurs from around the world and providing them with access to our business environment. Many e-residents have focused their entrepreneurial activities on emerging industries such as Artificial Intelligence, so providing a better legal framework can further enhance the value of e-Residency and bring even more benefits to Estonia. Options The law firm Triniti, with a team led by Karmen Turk and Maarja Pild, has outlined the options for giving representative rights to AI. Representative rights mean that AI can buy and sell products and services on its owner’s behalf. The owner might be a private individual using Siri, or it might be, for example, a brokerage firm that uses algorithms to buy and sell shares. The legal work is not fully ready yet. We are still exploring these options and want you to participate in the discussion. The biggest conversation starter is probably the idea of giving separate legal subjectivity to AI. This might seem like an overreaction or unnecessary given the status quo, but legal analysis from around the world suggests that in the long term this is the most reasonable solution. Some technology-minded legal experts even claim that this is inevitable in 5 to 8 years. But when drafting laws we need to look at the longest possible perspective and try to future-proof our decisions now as much as possible. In this case, AI would be a separate legal entity with both rights and responsibilities. It would be similar to a company but would not necessarily have any humans involved. Its responsibilities would probably be covered by some new type of insurance policy, similar to today’s vehicle/motor insurance. In Finland there is already a company with an AI as a voting board member. Can you imagine a company that has no humans in its operations? 
Another option is changing and broadening the scope of something lawyers refer to as the ‘declaration of intent’. This opens the philosophical discussion of what ‘will’ is. Currently, intent is treated as quite a regular and straightforward thing. When I go to a bar and tell the barman I want a beer — that’s obviously quite easy. But broadening the scope means that I would say: “I want something for the next three years.” Now the barman has to assess correctly, each and every time I walk into the bar, whether I would like a beer, coffee, tea, sandwich or Cuba Libre. And the barman will do it based on the particular time of day, my mood and habits, the group of people I am with, etc. There is also a need to put in place a robotics/AI act to outline the necessary principles and to underpin the technological advancements. Even though when we talk about AI we are referring to algorithms, the boundary conditions also need to be defined in a clear way. What are sensors, legally? How is sensor data managed? Who owns what? In Estonia, the underlying value of our information society is that citizens and other users have ownership of their own data; the government and private companies merely provide the service of keeping it safe and private. The same core value applies here, but the difficult question is how exactly to enforce it. A robotics act would also try to draw some red lines not to be crossed by decision-making algorithms. These lines would be based on values and ethics. Communication The work of the self-driving vehicles task force has shown that the idea of driverless vehicles is strong and understandable enough for a non-specialist that it can be used as a communication frontline to explain other, more complex ideas to society. This technology also embeds all of the critical issues of the digital era: data privacy, openness, transparency, trust, ethics, liability, integrity etc. 
It is thus the perfect conversation starter for much wider topics such as AI, the internet of things, robotics etc. In Estonia, we have another trick up our sleeves: we can use our rich culture of linguistics and mythology as a vehicle for understanding more complex technological issues. For example, in Estonian mythology we have a character called the kratt, a creature which has existed in our cultural space for hundreds of years and which is composed of a number of unique features. When the owner acquires from the devil a soul for its kratt (in modern tech talk, an algorithm), the kratt begins to serve its master. From a communication point of view, the kratt narrative is useful because every Estonian knows this story. Kratts are something that society understands; AI is something that is complex and difficult to understand. From a technological point of view, the kratt character has exactly the same features as AI. When the Czech writer Čapek invented the word ‘robot’ in 1920, the inspiration came from the Slavic word ‘robota’, meaning forced labourer. Yes, a robot is something made to fulfil certain tasks, but we can also say that a kratt is a robot with super powers, and thus gets the legal representative rights. Ethical enforcement Estonia has recognised the complexity, scope and possibilities of this issue. Our aim is to contribute to the global discussion with positive case studies, with an emphasis on ethics and cyber measures. The immense and sometimes jaw-dropping possibilities of AI cannot be realised unless we have both the right values and the right regulations. From a governmental perspective, it is crucial to consider the practical enforcement side of implementing these kinds of measures as well. The Estonian government is working together with the Estonian blockchain company Guardtime to ensure anti-tampering measures and data integrity within these algorithms. 
In this type of system, hacks can be detected in just one second, compared to the current global average of 7 months! The first real-life pilot will go live next year. Blockchain enables transparency and integrity, thus making these systems trustworthy. The Estonian government authorities have acknowledged that the biggest obstacle to mass implementation of AI is our current cyber capabilities, particularly regarding firstly the integrity of these systems and secondly their security. Take, for example, my blood type. I’m A positive, and I personally don’t really care who knows — nefarious criminal or not. But if somebody changes my blood type in a medical database, it is a great threat to my life and could be considered attempted murder. Similarly, for life-and-death decisions made by self-driving cars, I want to be sure that the decision-making algorithm has not been tampered with. Join in the discussion The main reason to start this discussion now is that the Estonian public administration feels that these challenges are imminent and we need to be able to discuss them. The public discussion will take time, because the issue at hand has wide implications for our everyday lives. We need the know-how and contribution of the best global experts and, perhaps most importantly, we need to start discussing AI in our kitchens and saunas and with our e-residents around the world. So please feel free to contribute to the discussions with the hashtags: #krattlaw, #eResidency and #Estonia.
Estonia considers a ’kratt law’ to legalise Artificial Intelligence (AI)
646
estonia-starts-public-discussion-legalising-ai-166cb8e34596
2018-06-17
2018-06-17 14:21:40
https://medium.com/s/story/estonia-starts-public-discussion-legalising-ai-166cb8e34596
false
2,168
This is the official blog of the Republic of Estonia’s e-Residency programme — See on e-residentsuse programmi ametlik blogi.
null
eResidents
null
E-Residency Blog — E-residentsuse blogi
null
e-residency-blog
E RESIDENCY,LOCATION INDEPENDENT,DIGITAL NOMADS,ESTONIA,ENTREPRENEURSHIP
e_Residents
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Marten Kaevats
National Digital advisor of Estonia
2cad37f28fcb
MartenKaevats
328
216
20,181,104
null
null
null
null
null
null
0
null
0
46acf297124c
2018-04-14
2018-04-14 15:10:03
2018-04-14
2018-04-14 16:08:08
2
false
en
2018-05-09
2018-05-09 11:52:03
13
166f9760f26f
5.043711
4
0
0
Last night I was very fortunate to be a guest speaker on the blockchain Panel at CMU Summit moderated by Distinguished Professor Lenore…
5
CMU Summit : Blockchain Panel — my views on the explosion of cryptocurrencies, the information overload issue, and the promising technologies that could potentially replace Facebook Last night I was very fortunate to be a guest speaker on the blockchain panel at CMU Summit, moderated by Distinguished Professor Lenore Blum, who founded the startup incubation program at CMU. Here are some highlights I want to share, covering what we want to do at Qokka to make the crypto and blockchain space better, and blockchain technologies in general. Before I start, I want to give my thanks to Richard Xu, Alex Jiang, and Gang Liu from Alpha Startup (an investor in Qokka), and Zihan Guo from CMU, who co-organized this event with others from the CMU Summit committee, together with founders from Quarkchain, ArcBlock, CyberMiles, Crypto Matrix, and Trade Terminal, and additionally Professor Vipul Goyal and Professor Nicolas Christin, who shared their insights about blockchain and cryptocurrencies. From left: Professor Lenore Blum (moderator, CMU), Aaron Li (me, Qokka.ai), Yao Meng (Trade Terminal), Tian Xia (Crypto Matrix), Alex Jiang (Alpha Startup), and Qi Zhou (Quarkchain) From left: Professor Lenore Blum (moderator, CMU), Richard Xu (Alpha Startup), Robert Mao (ArcBlock), Professor Nicolas Christin (CMU), Michael Yuan (CyberMiles), and Professor Vipul Goyal (CMU) Q: Tell us about yourself and your expertise relating to blockchain. A: Hi, my name is Aaron Li, founder of Qokka.ai. We help people who are interested in cryptocurrency investment or in learning more about the technology. We are building an information summary platform for cryptos using machine learning and natural language processing. I have been mining Bitcoin since 2011. I have given a few online talks about cryptocurrency and blockchain on Youtube (English commentaries coming soon). I have acted as a technical advisor to a few friends for their blockchain investments and projects. 
In general, I want to help people understand the space better and push it forward. By the way, I am also a CMU alumnus. During my years at CMU I rented a house on S Negley Ave and mined quite a bit of crypto in my basement. Pittsburgh’s cold weather is fantastic for mining!

Q: Aaron, many people believe cryptocurrencies are no more than bubbles and speculation. Would you like to explain, in layman’s terms, how your technology could help people figure this out?

A: Sure. With machine learning, we devour all the information available on the Internet for each of the thousands of cryptos. We summarize their news, discussions, whitepapers, and more, and we give you the most up-to-date, most relevant, most useful information, so you can make better, faster investment decisions and learn about the technologies more efficiently. For example, you can get an overview of the topics in the summaries. If, for some coin, the topics are all about prices and investment returns, that is a strong indicator the crypto is a bubble. For other cryptos, the topics are all about use cases, technical challenges, code, bugs, and so on. That is a strong indicator the crypto is trying to do something actually useful.

Q: Aaron, there are already thousands of cryptocurrencies, many with their own blockchains. Will there be far more in the future, or are they going to consolidate?

A: I have heard people say they are drowning in tokens. They think cryptos will consolidate: most will become worthless, and there will be only one coin for each domain or each kind of service. For example, one single coin for payments and another coin for energy. I think this belief is not much different from believing Google or Facebook will one day rule the entire Internet. Some people would say that is absurd. What we have today is at least ten different coins just for payments, each with its own unique features.
There are always tradeoffs in crypto design that make one crypto better than another, in some respect, for the same kind of service. So I believe not only that most cryptos won’t become worthless, but that there will be more cryptos, and many of them will flourish. Cryptos and blockchains are fueled by the ambition to create a decentralised future, to be original, and to challenge the status quo. By definition, this works against consolidation and monopoly. Creating a new crypto is much like creating a startup, except the evolution happens at a much faster speed and a much bigger scale. Blockchain technologies have only made it easier for the creative and smart among us to do that: to think differently, and to act differently. Throughout history, they are the ones who create the future. They are called non-conformists, and today we have more of them than we ever had.

Q: Aaron, what challenges do people face in digesting all the information, with so many cryptocurrencies and blockchains, and in staying up to date? How does your company Qokka.ai help people with that?

A: Crypto is hot, and it is moving fast. Right now it is impossible for most people to get timely, high-quality information. That is what motivated me to create Qokka.ai. I believe information overload is one of the biggest problems in this space. A lot of information is scattered around different forums and websites, and at the same time there is a lot of unreliable information. Right now, to understand what a crypto does or how a blockchain operates, you would have to do hours of research on Google and a bunch of forums. Most of the time you would have to wade through tons of shills, scammers, and fake news before you could even scratch the surface. I want to build a platform that solves exactly these problems, using the same machine learning tech Qokka already uses on product reviews.

Q: Here is a question for all of you on the panel: we talked a lot about applications, but not much about blockchain technologies.
What blockchain technologies could potentially replace Facebook?

A: I was very excited that you asked this question, Professor Blum. Other guest speakers have shared their visions of the future, and I think they sound very exciting. So rather than talking about future possibilities, I would like to talk about something happening right now. I believe Steem is one of the blockchain candidates that could potentially replace Facebook. Steem has its own frontend platform, Steemit, which is like a combination of Reddit and Medium: it rewards tokens to people who create or curate useful content. In fact, I gave an online talk about Steem on YouTube about a month ago.

Steem is one of the few cryptocurrencies with their own blockchain that doesn’t focus on payments or on serving the blockchain ecosystem. Instead, it focuses on real-life applications and is used by lots of people on a daily basis for online discussion. This is very rare in the blockchain world. As of February it already had 750,000 users, three times as many as six months earlier, and the whole thing is only about two years old. It never did an ICO, but it grew from $0 in March 2016 to a $1 billion market cap in February 2018.

On the technology side, instead of the heavily criticized proof-of-work, Steem uses delegated proof-of-stake, which is one to two generations ahead of most other blockchains. Additionally, it introduced many other innovations in engineering, ecosystem design, and operational models. Steem is phenomenal, and we at Qokka are learning from its designs to potentially combat shills, fake news, and unreliable information. In some of these areas, humans can do a much better job than machines. There are many things we could do to enhance and augment our machine-learning-centric platform with human power, using blockchain and mechanisms like those introduced in Steem.

(I also posted this article on Steemit)
CMU Summit : Blockchain Panel — my views on the explosion of cryptocurrencies, the information…
14
cmu-summit-blockchain-panel-166f9760f26f
2018-06-02
2018-06-02 08:52:34
https://medium.com/s/story/cmu-summit-blockchain-panel-166f9760f26f
false
1,235
Scale up language understanding — we’re an early stage startup building systems and products to help you understand the world’s attitudes, opinions and emotions at massive scale
null
ai.qokka
null
Qokka
team@qokka.ai
qokka
MACHINE LEARNING,AI,CRYPTOCURRENCY,BLOCKCHAIN,ARTIFICIAL INTELLIGENCE
null
Blockchain
blockchain
Blockchain
265,164
Aaron Li
https://www.linkedin.com/in/aaronqli/
856d1fd34fc2
aaronqli
79
65
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-04
2017-10-04 21:12:22
2017-10-04
2017-10-04 21:18:56
2
false
en
2017-10-04
2017-10-04 21:18:56
0
16717100bfab
2.236164
0
0
0
On April 15, 2017, CMU Professor Tuomas Sandholm gave a keynote speech on Artificial Intelligence at the AI Conference organized by CMU…
5
[Keynote Speech Recap] Tuomas Sandholm — Super-Human AI for Strategic Reasoning

Keynote Speech by Tuomas Sandholm at the 2017 CMU Summit in Pittsburgh, PA

On April 15, 2017, CMU Professor Tuomas Sandholm gave a keynote speech on Artificial Intelligence at the AI Conference organized by the CMU Summit on US-China Innovation and Entrepreneurship. Professor Sandholm started his speech by introducing imperfect-information games. “The biggest challenge for us was uncertainty.” When solving imperfect-information problems, the strategies suited to perfect-information games, such as Chess or Go, do not apply. On the other hand, since a perfect Poker strategy does not exist, we are not solving for a single strategy; instead, we are looking for a family of solution strategies. Therefore, we pay attention to what signals each of our actions conveys to our opponent, and likewise, what information we get from analyzing our opponent’s actions.

Prof. Sandholm applied the concept of Nash equilibrium to real-world problems in the fields of business, politics, and the military to solve imperfect-information games. Prof. Sandholm said, “In real life, the imperfect-information game is a very meaningful area to explore, since multiple people’s interests are involved in most cases. To solve this problem, I founded Strategic Machine Inc.”

The Nash equilibrium was first proposed by John Nash. The concept revolutionized economics as well as many other scientific fields, and Prof. Nash was awarded the Nobel Prize in 1994. After the Nash equilibrium was introduced, researchers made steady progress on the representative problem of Poker, a dynamic game with incomplete information, but there was no major breakthrough until about 12 years ago. In 2005, the Association for the Advancement of Artificial Intelligence announced that a computer poker competition would be held in 2006.
In this competition, researchers from all around the world finally got the opportunity to appreciate and compare each other’s results. Professor Sandholm shared two victories of the Libratus artificial intelligence system, in Pittsburgh and in Haikou, China.

In January 2017, Professor Sandholm conducted a rematch of one-on-one Heads-Up No-Limit Texas Hold’em in Pittsburgh. Professor Sandholm explained: “In fact, in April and May 2015, Libratus had already played with the top Texas Hold’em players, but we were not able to beat these powerful players at the time, so this year was a rematch.” This year, Professor Sandholm invited four top players: Dong Kim, Jason Les, Jimmy Chou, and Daniel McAuley. Libratus defeated all of them. In March and April 2017, Lengpudashi, Libratus’s counterpart in China, defeated six of China’s top players in Haikou.

In addition, Professor Sandholm introduced the working mechanisms and principles of Libratus and Lengpudashi. These artificial intelligence systems are mainly composed of three parts: manually entering the rules of the game before it starts, running an abstract model of the game to approximate the Nash equilibrium, and constantly improving decisions during play. Throughout the whole process, the system also keeps improving itself in the backend.
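Since the talk leans on the concept of a Nash equilibrium, a small illustration may help. The following Python sketch (our own toy example, not from the talk) verifies that uniform random play is a Nash equilibrium of Matching Pennies, the textbook zero-sum game with no pure-strategy equilibrium:

```python
# Toy example: verify that mixing 50/50 is a Nash equilibrium of
# Matching Pennies. A strategy profile is a Nash equilibrium when no
# player can gain by deviating unilaterally.

# Row player's payoff matrix: rows = row player's moves, cols = column player's.
PAYOFF = [[1, -1],
          [-1, 1]]

def expected_payoffs(matrix, opponent_mix):
    """Expected payoff of each pure strategy against a mixed opponent."""
    return [sum(p * v for p, v in zip(opponent_mix, row)) for row in matrix]

def is_best_response(matrix, my_mix, opponent_mix, tol=1e-9):
    """A mixed strategy is a best response iff every pure strategy it plays
    with positive probability achieves the maximum expected payoff."""
    payoffs = expected_payoffs(matrix, opponent_mix)
    best = max(payoffs)
    return all(p <= tol or abs(u - best) <= tol
               for p, u in zip(my_mix, payoffs))

uniform = [0.5, 0.5]
# Column player's payoffs are the negation (zero-sum), transposed.
col_matrix = [[-PAYOFF[r][c] for r in range(2)] for c in range(2)]

row_ok = is_best_response(PAYOFF, uniform, uniform)
col_ok = is_best_response(col_matrix, uniform, uniform)
print(row_ok and col_ok)  # prints True: mutual best responses, so an equilibrium
```

Poker works on the same principle, only with vastly more strategies and hidden information, which is why approximating the equilibrium on an abstracted model is necessary.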
[Keynote Speech Recap] Tuomas Sandholm — Super-Human AI for Strategic Reasoning
0
keynote-speech-recap-tuomas-sandholm-super-human-ai-for-strategic-reasoning-16717100bfab
2018-01-14
2018-01-14 21:50:40
https://medium.com/s/story/keynote-speech-recap-tuomas-sandholm-super-human-ai-for-strategic-reasoning-16717100bfab
false
491
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
CMU-SUMMIT
CMU Summit on US-China Innovation and Entrepreneurship; Tech & entrepreneurship oriented student org at Carnegie Mellon University. www.cmu-summit.net
89ac1fa30891
cmu_summit
41
54
20,181,104
null
null
null
null
null
null
0
$ pip install markdown bleach

import bleach
from markdown import markdown

def htmlize(text):
    """
    This helper method renders Markdown then uses Bleach to sanitize it,
    as well as converting all links in the text to actual anchor tags.
    """
    text = bleach.clean(text, strip=True)  # Clean the text by stripping bad HTML tags
    text = markdown(text)                  # Convert the Markdown to HTML
    text = bleach.linkify(text)            # Linkify raw URLs and add nofollow to existing links
    return text

# My Markdown Document

For more information, search on [Google](http://www.google.com).

_Grocery List:_

1. Apples
2. Bananas
3. Oranges

>>> with open('test.md', 'r') as f:
...     print(htmlize(f.read()))
<h1>My Markdown Document</h1>
<p>For more information, search on <a href="http://www.google.com" rel="nofollow">Google</a>.</p>
<p><em>Grocery List:</em></p>
<ol>
<li>Apples</li>
<li>Bananas</li>
<li>Oranges</li>
</ol>

$ pip install ipython
$ ipython notebook

$ mkdir docs
$ cd docs
$ sphinx-quickstart
...
> todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: y
> coverage: checks for documentation coverage (y/n) [n]: y
...
> mathjax: include math, rendered in the browser by MathJax (y/n) [n]: y

$ make html
18
f4f1e49a4f74
2017-12-22
2017-12-22 18:21:00
2017-12-22
2017-12-22 18:29:33
1
false
en
2017-12-22
2017-12-22 18:29:33
55
16732ff6e592
11.486792
1
0
0
By Benjamin Bengfort
5
Markup for Fast Data Science Publication

By Benjamin Bengfort

Image credit: https://atom.io/packages/markdown-preview

“A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment.” — Carl Sagan in Billions & Billions: Thoughts on Life and Death at the Brink of the Millennium

As data scientists, it’s easy to get bogged down in the details. We’re busy implementing Python and R code to extract valuable insights from data, train effective machine learning models, or put a distributed computation system together. Many of these tasks, especially those relating to data ingestion or wrangling, are time-consuming but are the bread and butter of the data scientist’s daily grind. What we often forget, however, is that we must not only be data engineers, but also contributors to the data science corpus of knowledge. If a data product derives its value from data and generates more data in return, then a data scientist derives their value from previously published works and should generate more publications in return.

Indeed, one of the reasons that Machine Learning has grown ubiquitous (see the many Python-tagged questions related to ML on Stack Overflow) is thanks to meticulous blog posts and tools from scientific research (e.g. Scikit-Learn) that enable the rapid implementation of a variety of algorithms. Google in particular has driven the growth of data products by publishing systems papers about their methodologies, enabling the creation of open source tools like Hadoop and Word2Vec. By building on a firm base for both software and for modeling, we are able to achieve greater results, faster. Exploration, discussion, criticism, and experimentation all enable us to have new ideas, write better code, and implement better systems by tapping into the collective genius of a data community.
Publishing is vitally important to keeping this data science gravy train on the tracks for the foreseeable future. In academia, the phrase “publish or perish” describes the pressure to establish legitimacy through publications. Clearly, we don’t want to take our role as authors that far, but the question remains, “How can we effectively build publishing into our workflow?” The answer is through markup languages — simple, streamlined markup that we can add to plain text documents that builds into a publishing layout or format. For example, the following markup languages/platforms build into the accompanying publishable formats:

Markdown → HTML
iPython Notebook (JSON + Markdown) → Interactive Code
reStructuredText + Sphinx → Python Documentation, ReadTheDocs.org
AsciiDoc → ePub, Mobi, DocBook, PDF
LaTeX → PDF

The great thing about markup languages is that they can be managed inline with your code workflow in the same software versioning repository. Github even goes so far as to automatically render Markdown files! In this post, we’ll get you started with several markup and publication styles so that you can find what best fits into your workflow and deployment methodology.

Markdown

Markdown is the most ubiquitous of the markup languages we’ll describe in this post, and its simplicity means that it is often chosen for a variety of domains and applications, not just publishing. Markdown, originally created by John Gruber, is a text-to-HTML processor, where lightweight syntactic elements are used instead of the more heavyweight HTML tags. Markdown is intended for folks writing for the web, not designing for the web, and in some CMS systems, it is simply the way that you write, no fancy text editor required.
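To see what “text-to-HTML processor” means in practice, here is a toy sketch of our own (nothing like Gruber’s real implementation) that converts a tiny Markdown subset, ATX headings and *emphasis*, into HTML:

```python
import re

# Toy text-to-HTML processor for a tiny Markdown subset. Real Markdown
# handles far more (lists, links, code, block quotes, ...); this only
# illustrates the idea of lightweight syntax replacing heavyweight tags.
def mini_markdown(text):
    html_lines = []
    for line in text.splitlines():
        heading = re.match(r"(#{1,6})\s+(.*)", line)
        if heading:
            # "#" becomes <h1>, "##" becomes <h2>, and so on.
            level = len(heading.group(1))
            html_lines.append(f"<h{level}>{heading.group(2)}</h{level}>")
        elif line.strip():
            # *word* becomes <em>word</em>; everything else is a paragraph.
            line = re.sub(r"\*([^*]+)\*", r"<em>\1</em>", line)
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)

print(mini_markdown("# Title\nSome *important* text."))
```

Production code should of course use a real parser (like the Python Markdown library shown later in this post), but the sketch shows why writing in Markdown is so much lighter than writing the tags by hand.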
Markdown has seen special growth thanks to Github, which has an extended version of Markdown, usually referred to as “Github-Flavored Markdown.” This style of Markdown extends the basics of the original Markdown to include tables, syntax highlighting, and other inline formatting elements. If you create a Markdown file in Github, it is automatically rendered when viewing files on the web, and if you include a README.md in a directory, that file is rendered below the directory contents when browsing code. Github Issues are also expected to be in Markdown, further extended with tools like checkbox lists.

Markdown is used for so many applications that it is difficult to name them all. Below are a select few that might prove useful to your publishing tasks:

Jekyll allows you to create static websites that are built from posts and pages written in Markdown.
Github Pages allows you to quickly publish Jekyll-generated static sites from a Github repository for free.
Day One is a simple journaling app that allows you to write journal entries in Markdown.
iPython Notebook expects Markdown to describe blocks of code.
Stack Overflow expects questions, answers, and comments to be written in Markdown.
MkDocs is a software documentation tool written in Markdown that can be hosted on ReadTheDocs.org.
GitBook is a toolchain for publishing books written in Markdown to the web or as an eBook.

There are also a wide variety of editors, browser plugins, viewers, and tools available for Markdown. Both Sublime Text and Atom support Markdown and automatic preview, as do most text editors you’ll use for coding. Mou is a desktop Markdown editor for Mac OS X, and iA Writer is a distraction-free writing tool for Markdown on iOS. (Please comment with your favorite tools for Windows and Android.) For Chrome, extensions like Markdown Here make it easy to compose emails in Gmail via Markdown, or Markdown Preview to view Markdown documents directly in the browser.
Clearly, Markdown enjoys a broad ecosystem and diverse usage. If you’re still writing HTML for anything other than templates, you’re definitely doing it wrong at this point! It’s also worth including Markdown rendering in your own projects if you have user-submitted text (also great for text processing). Rendering Markdown can be accomplished with the Python Markdown library, usually combined with the Bleach library for sanitizing bad HTML and linkifying raw text. A simple demo of this is as follows: First install markdown and bleach using pip: Then create a markdown parsing function as follows: Given a markdown file test.md whose contents are as follows: The following code: Will produce the following HTML output:

Hopefully this brief example has also served as a demonstration of how Markdown and other markup languages work to render much simpler text with lightweight markup constructs into a larger publishing framework. Markdown itself is most often used for web publishing, so if you need to write HTML, then this is the choice for you! To learn more about Markdown syntax, please see Markdown Basics.

iPython Notebook

iPython Notebook is a web-based, interactive environment that combines Python code execution, text (marked up with Markdown), mathematics, graphs, and media into a single document. The motivation for iPython Notebook was purely scientific: how do you demonstrate or present your results in a repeatable fashion where others can understand the work you’ve done? By creating an interactive environment where code, graphics, mathematical formulas, and rich text are unified and executable, iPython Notebook gives a presentation layer to otherwise unreadable or inscrutable code. Although Markdown is a big part of iPython Notebook, the notebook deserves a special mention because of how critical it is to the data science community. iPython Notebook is interesting because it combines both the presentation layer and the markup layer.
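Under the hood, a notebook on disk is just JSON. The sketch below (our own toy example, following the nbformat v4 structure) builds a minimal two-cell notebook with nothing but the standard library, which is also why notebooks diff and reversion so cleanly in Git:

```python
import json

# A notebook is plain JSON: top-level format metadata plus a list of cells.
# This follows the nbformat v4 layout; the cell contents are a toy example.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# My analysis\n"]},
        {"cell_type": "code", "execution_count": None, "metadata": {},
         "outputs": [], "source": ["print('hello')\n"]},
    ],
}

# Serializing with json.dumps is all it takes to produce a valid .ipynb body;
# write it to a file named something.ipynb and the notebook server can open it.
serialized = json.dumps(notebook, indent=1)
cells = json.loads(serialized)["cells"]
print([c["cell_type"] for c in cells])  # prints ['markdown', 'code']
```

This transparency is a deliberate design choice: because the format is human-readable text, notebooks fit into the same version-control workflow as the rest of your code.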
When run as a server, usually locally, the notebook is editable, explorable (a tree view will present multiple notebook files), and executable — any Python code written in the notebook can be evaluated and run using an interactive kernel in the background. Math formulas written in LaTeX are rendered using MathJax. To enhance the delivery and shareability of these notebooks, NBViewer allows you to share static notebooks from a Github repository. iPython Notebook comes with most scientific distributions of Python, like Anaconda or Canopy, but it is also easy to install iPython with pip: iPython itself is an enhanced interactive Python shell or REPL that extends the basic Python REPL with many advanced features, primarily allowing for a decoupled two-process model that enables the notebook. This process model essentially runs Python as a background kernel that receives execution instructions from clients and returns responses back to them. To start an iPython notebook, execute the following command: This will start a local server at http://127.0.0.1:8888 and automatically open your default browser to it. You’ll start in the “dashboard view”, which shows all of the notebooks available in the current working directory. Here you can create new notebooks and start to edit them. Notebooks are saved as .ipynb files in the local directory, a format called “Jupyter” that is simple JSON with a specific structure for representing each cell in the notebook. The Jupyter notebook files are easily reversioned via Git and Github since they are also plain text. To learn more about iPython Notebook, please see the iPython Notebook documentation.

reStructuredText

reStructuredText is an easy-to-read plaintext markup syntax specifically designed for use in Python docstrings or to generate Python documentation.
In fact, the reStructuredText parser is a component of Docutils, an open-source text processing system that is used by Sphinx to generate intelligent and beautiful software documentation, in particular the native Python documentation. Python software has a long history of good documentation, particularly because of the idea that batteries should come included. And documentation is a very strong battery! PyPI, the Python Package Index, ensures that third-party packages provide documentation, and that the documentation can be easily hosted online through Python Hosted. Because of the ease of use and ubiquity of the tools, Python programmers are known for having very consistently documented code; sometimes it’s hard to tell the standard library from third-party modules! In How to Develop Quality Python Code, I mentioned that you should use Sphinx to generate documentation for your apps and libraries in a docs directory at the top level. Generating reStructuredText documentation in a docs directory is fairly easy: The quickstart utility will ask you many questions to configure your documentation. Aside from the project name, author, and version (which you have to type in yourself), the defaults are fine. However, I do like to change a few things: Similar to iPython Notebook, reStructuredText can render mathematical formulas written in LaTeX syntax. This utility will create a Makefile for you; to generate HTML documentation, simply run the following command in the docs directory: The output will be built in the folder _build/html, where you can open index.html in your browser. While hosting documentation on Python Hosted is a good choice, a better choice might be Read the Docs, a website that allows you to create, host, and browse documentation. One great part of Read the Docs is the stylesheet that they use; it’s more readable than older ones.
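As a side note on the quickstart step: the answers you give sphinx-quickstart simply translate into entries in the generated docs/conf.py. An illustrative fragment (a config sketch, not a complete file) showing the standard extensions those todo, coverage, and MathJax questions enable:

```python
# Illustrative fragment of a generated docs/conf.py. Saying "y" to the
# todo, coverage, and mathjax questions enables these built-in extensions.
extensions = [
    "sphinx.ext.todo",      # "todo" entries that can be shown or hidden on build
    "sphinx.ext.coverage",  # checks for documentation coverage
    "sphinx.ext.mathjax",   # math rendered in the browser by MathJax
]

todo_include_todos = True   # actually display the todo notes in the built HTML
```

Because conf.py is ordinary Python, you can tweak these lists by hand at any time rather than re-running the quickstart.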
Additionally, Read the Docs allows you to connect a Github repository so that whenever you push new code (and new documentation), it is automatically built and updated on the website. Read the Docs can even maintain different versions of documentation for different releases. Note that even if you aren’t interested in the overhead of learning reStructuredText, you should use your newly found Markdown skills to ensure that you have good documentation hosted on Read the Docs. See MkDocs for document generation in Markdown that Read the Docs will render. To learn more about reStructuredText syntax, please see the reStructuredText Primer.

AsciiDoc

When writing longer publications, you’ll need a more expressive tool that is just as lightweight as Markdown but able to handle constructs that go beyond simple HTML, for example cross-references, chapter compilation, or multi-document build chains. Longer publications should also move beyond the web and be renderable as an eBook (ePub or Mobi formats) or for print layout, e.g. PDF. These requirements add more overhead, but simplify workflows for larger media publication. Writing for O’Reilly, I discovered that I really enjoyed working in AsciiDoc — a lightweight markup syntax, very similar to Markdown, which renders to HTML or DocBook. DocBook is very important because it can be post-processed into other presentation formats such as HTML, PDF, EPUB, DVI, MOBI, and more, making AsciiDoc an effective tool not only for web publishing but also for print and book publishing. Most text editors have an AsciiDoc grammar for syntax highlighting, in particular sublime-asciidoc and Atom AsciiDoc Preview, which make writing AsciiDoc as easy as Markdown. AsciiDoctor is an AsciiDoc-specific toolchain for building books and websites from AsciiDoc. The project connects the various AsciiDoc tools and provides a simple command-line interface as well as preview tools.
AsciiDoctor is primarily used for HTML and eBook formats, but at the time of this writing there is a PDF renderer in beta. Another interesting project of O’Reilly’s is Atlas, a system for push-button publishing that manages AsciiDoc using a Git repository and wraps editorial build processes, comments, and automatic editing in a web platform. I’d be remiss not to mention GitBook, which provides a similar toolchain for publishing larger books, though with Markdown. Editor’s Note: GitBook does support AsciiDoc. To learn more about AsciiDoc markup, see AsciiDoc 101.

LaTeX

If you’ve done any graduate work in the STEM degrees, then you are probably already familiar with using LaTeX to write and publish articles, reports, conference and journal papers, and books. LaTeX is not a simple markup language, to say the least, but it is effective. It is able to handle almost any publishing scenario you can throw at it, including (and in particular) rendering complex mathematical formulas correctly from a text markup language. Most data scientists still use LaTeX, via MathJax or the Daum Equation Editor, if only for the math. If you’re going to be writing PDFs or reports, I can provide two primary tips for working with LaTeX. First, consider cloud-based editing with Overleaf or ShareLaTeX, which allow you to collaborate on and edit LaTeX documents similarly to Google Docs. Both of these systems already include many of the common classes and stylesheets, so you don’t have to worry too much about formatting and can instead just get down to writing. Additionally, they aggregate other tools like LaTeX templates and provide templates of their own for most document types. My personal favorite workflow, however, is to use the Atom editor with the LaTeX package and the LaTeX grammar. When using Atom, you get very nice Git and Github integration — perfect for collaboration on larger documents.
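For reference, a minimal LaTeX document (our own toy example, not from the post) that any of these workflows, cloud or editor-based, can build straight to PDF:

```latex
% Minimal sketch: a complete LaTeX article with one display formula,
% the same kind of markup MathJax can render on the web.
\documentclass{article}
\begin{document}

The Gaussian density is
\[
  f(x) = \frac{1}{\sigma\sqrt{2\pi}}
         \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right).
\]

\end{document}
```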
If you have a TeX distribution installed (and you will need to do that on your local system, no matter what), then you can automatically build your documents within Atom and view them in PDF preview. A complete tutorial for learning LaTeX can be found at Text Formatting with LaTeX.

Conclusion

Software developers agree that testing and documentation are vital to the successful creation and deployment of applications. However, although Agile workflows are designed to ensure that documentation and testing are included in the software development lifecycle, too often they are left until last, or forgotten. When managing a development project, team leads need to ensure that documentation and testing are part of the “definition of done.” In the same way, writing is vital to the successful creation and deployment of data products, and it is similarly left until last or forgotten. Through publication of our work and ideas, we open ourselves up to criticism, an effective methodology for testing ideas and discovering new ones. Similarly, by explicitly sharing our methods, we make it easier for others to build systems rapidly, and in return they write tutorials that help us better build our systems. And if we translate scientific papers into practical guides, we help to push science along as well. Don’t get bogged down in the details of writing, however. Use simple, lightweight markup languages to include documentation alongside your projects. Collaborate with other authors and your team using version control systems, and use free tools to make your work widely available. All of this is possible because of lightweight markup languages, and the more proficient you are at including writing in your workflow, the easier it will be to share your ideas.

Helpful Links

This post is particularly link-heavy, with many references to tools and languages.
For reference, here are my preferred guides for each of the markup languages discussed:

Markdown Basics
the iPython Notebook documentation
reStructuredText Primer
AsciiDoc 101
Text Formatting with LaTeX

Books to Read

Instant Markdown by Arturo Herrero
IPython Interactive Computing and Visualization Cookbook by Cyrille Rossant

Special thanks to Rebecca Bilbro for editing and contributing to this post. Without her, this would certainly have been much less readable!

District Data Labs provides data science consulting and corporate training services. We work with companies and teams of all sizes, helping them make their operations more data-driven and enhancing the analytical abilities of their employees. Interested in working with us? Let us know!
Markup for Fast Data Science Publication
50
markup-for-fast-data-science-publication-16732ff6e592
2018-03-12
2018-03-12 17:02:22
https://medium.com/s/story/markup-for-fast-data-science-publication-16732ff6e592
false
2,991
Data science tutorials, thought pieces, and other awesome content.
null
DistrictDataLabs
null
District Data Labs
tojeda@districtdatalabs.com
district-data-labs
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,ANALYTICS,BIG DATA
DistrictDataLab
Writing
writing
Writing
167,305
District Data Labs
Data science consulting firm, research lab, and open source collaborative.
96c976e31f28
DistrictDataLabs
921
471
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-16
2018-05-16 20:32:05
2018-05-16
2018-05-16 23:25:48
1
true
en
2018-05-18
2018-05-18 00:59:28
5
16751b42e630
5.690566
7
6
0
Is the problem with AI assistants or just with some people?
5
Photo by Kevin Bhagat on Unsplash

Your AI assistants are people, too!

Is the problem with AI assistants or just with some people?

By Mike Meyer

You would think we would have gotten a little better at this by now. We’ve been on a recognizably accelerating, tech-driven paradigm shift for forty years or so. If you look at the media, we follow only a few consistent topics: political insanity and collapse are high on the list, followed closely by endless, breathless discussion of technological change. Unfortunately it’s not helping.

Depending on your age, you can recall, or research, the last half of the 20th century, which started with us going to the moon, moved to the digitalization of everything, discovered personal computers, and ended up with the internet. It was all good. There were always a few who were certain it would all come to a bad end, but they didn’t set the tone. It was all good. It wasn’t really all good, but the general tone was one of amazement at the possibilities. The problems were really all with people misusing the technology. This is the nature of a paradigmatic shift while the forces of change steadily build in a geometric progression. All of this technology would really change things, but only in the “future”. This is a different future than next week’s home project or next year’s vacation. The “future” may not come at all, and it probably won’t cause us to change our plans or reschedule our vacation.

Unfortunately, geometric progression tricks our linear brains into thinking we understand it and that it’s fine, until we are suddenly shoveling as fast as we can and none of this is fun anymore. We’re about twenty years into that part of the shift. Those people least able to adapt are freaked out and getting ready to smash things. The simple reality is that the ability to adapt requires a native liking for change, flexible intelligence and, usually, more and better education. There do not appear to be any shortcuts or tricks to this.
You either have it and improved it by learning, or you worked hard at education and it worked for you. This is not going to stop, nor is it suddenly going to revert to a previous age. As this accelerates geometrically, staying afloat on the deluge of change will require faster and more intensive adoption of technological lifesavers. It’s not going to be easy. The human species has always been divided between the fast changers and slow changers. The difference is the level of resistance and resulting conflict, but we all change. This is not to be confused with fast thinking and slow thinking, other than that slow change people are more averse to slow thinking. More on this later. We’ve been struggling with the impact of this on our ways of processing information and communicating. We did pretty well integrating the internet and the pocket information processor otherwise known as the smartphone. As a result we have decided that those things are pretty normal, i.e. normalized. We’ve been burned by nasty opportunists who have figured out how to take advantage of the slow change people who mistook social networks and fully decentralized media for some sort of authority. The slow change people tend to be susceptible to authoritarians, so this should not have been a surprise. The speed at which people are not thinking about self-driving cars, as indicated by the general lack of an opinion on them, suggests that this may slide right in without a large problem. Recorded experiences with self-driving cars seem to be fear and reluctance until actually riding in one, when it soon becomes boring. That is the ultimate normalization. The issues will be rethinking legal responsibility and how we handle non-human decision makers. And that is a big one that may force a human course change. Non-human humans This seems to be the issue with the sudden hand wringing and growing discomfort over Google’s Duplex. 
The expectation seems to have been that the demonstration of a new assistant who could actually make phone calls and schedule things for you would be seen as awesome. It is awesome, but it also scared the hell out of a lot of people. That’s a problem because, as I’ve said above, we need all the augmentation that we can get to ride this deluge of change. As always, the paradigmatic shift is a non-linear process with an array of recursive steps and feedback loops built in. One way to see this is as layers of complexity. For my purpose here this can be simplified into the layer of slow change people and the layer of fast change people worried about other things. Slow change people freaked because they were suddenly way into the Uncanny Valley. The original theory of the uncanny valley was the creation of human images that were almost real but caused revulsion, as measured by a drop on reaction graphs, hence the valley. There may be people who react strongly to this and nothing else, but I suspect that we are seeing a slow change people response to dealing with a non-human human or artificial person. To fast change people, Google Duplex is great. No need to make scheduling calls or hire someone to do that. Go for it. The people troubled by this are troubled because in the demonstrations Google Duplex successfully imitated a human, and the people on the other end had no idea they were dealing with an artificial person. Does that matter? We do business all the time with machines, but we have to push buttons or say numbers or ‘yes’ and ‘no’. This is a technology problem that I have seen in a number of settings with new technology introductions. People who are having problems dealing with a new system or device raise issues that are not really issues. Those are simply indicators of the slow change person. This seems to be leading to demands that an artificial person must declare itself to be non-human from the start of the conversation. 
Not only is this unnecessary, it will mark AI assistants as second class beings. While that may not be an immediate concern, I’m betting that in a few years that will become a point of rights and discrimination. Why start on that road if it is not needed? Google Duplex, Assistant, or any other can quickly run into situations requiring a real human to sort out issues. Should these assistants be required to announce their non-human status then? As long as they hand off to a more capable person (and I can see this taking a couple of steps before you get to a true homo sapiens), why should I care? The higher level of concern entangled in this is using artificial beings to imitate authority figures. This is a valid concern but, again, not an issue specifically with an artificial being acting in a traditionally human job. This is an issue of authentication of authority. The issue is misrepresentation, which is at least an ethical issue if not a criminal one depending on action and intent. This is also being posited as unethical simply for an intelligent system to pretend to be human. Why does this matter? Am I put in danger because an AI system scheduled an appointment with me for their ‘employer’? I don’t care, and I think this should not be confused with the very large and real problem of authenticating authority and identity for everything. To me the real issue is the relationship between an agent and the person being represented. There is a long legal history of how this is done with power of attorney and other forms. This is becoming something that we must address and may loop back to AIs as agents bearing responsibility for actions taken. A major issue is liability, and that is an authority and identity issue. Is the self-driving car a legal agent of the owner? Of the people riding in it? Is it a being in its own right? Perhaps this will be handled by all intelligent beings or devices, virtual or physical, having a blockchain identity accessible at any time. 
Now that will be a slow change people freak show. We need to move on. As at the beginning, we need all the artificial help we can get. The slow changers have already caused huge political problems by failing to pay attention, or by not learning the difference between a Russian propaganda Facebook account and someone they can pay attention to who will tell them the truth. Let’s not be sidetracked by non-issues and potential forms of discrimination.
Your AI assistants are people, too!
37
your-ai-assistants-are-people-too-16751b42e630
2018-05-18
2018-05-18 11:17:00
https://medium.com/s/story/your-ai-assistants-are-people-too-16751b42e630
false
1,455
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mike Meyer
Educator, CIO, retired entrepreneur, grandfather with occasional fits of humor in the midst of disaster. . .
ae38d08917ca
mike.meyer
8,947
564
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-03
2018-07-03 05:47:30
2018-07-03
2018-07-03 05:50:23
0
false
en
2018-07-04
2018-07-04 04:43:11
3
1675467cf13
3.384906
0
0
0
Organisations, whether small or big, look at digital transformation initiatives as big, time-consuming investments. Are they really so…
5
How quickly will you realize the value of Digital Transformation? Organisations, whether small or big, look at digital transformation initiatives as big, time-consuming investments. Are they really so? How would you derive value out of them quickly? Our experience says that if the stakeholders are clear about the value they want to achieve, it will be easier to implement a holistic digital transformation journey. The team can directly focus on defining the metrics that will track and measure the value created. The signals and systems that will be involved in the entire process can be identified based on these factors. So, make sure the team identifies the following three aspects upfront: Values to be achieved on a continuous basis Metrics to be tracked & measured Signals & Systems to be involved Once the entire team involved is clear about and agrees on these three aspects, the results will flow without any delay. It is important to note that the current digital maturity of the organization is not a bottleneck for the transformation that you are trying to achieve. You are always ready for Digital Transformation! If your goals are clear and there is management drive from the top down, the entire organization will be trying to achieve a single goal. A bottom-up approach to the execution will help in collecting feedback as quickly as possible and applying corrective measures. A Minimum Viable Product approach with a series of iterations will allow the organization to focus on specific outcomes with a continuous improvement plan attached to it. This will ensure that failures, if any at all, will be fast & cheap, helping to achieve sustainable success faster. The top management needs only to ensure that a digital transformation momentum is created, and should support it from all angles to sustain it. The results will follow. 
In order to explain this better, let me consider the case of one of the digital transformation projects that we executed in the healthcare space. We had the chance to drive the digital transformation initiative for a major healthcare service provider with a number of hospitals under the group. They wanted to ensure that their visitors and patients were happy with the services on offer and experienced healthcare personalised for them. The end result should automatically increase the value of the brand and make it the first choice for any healthcare related services. This was a large initiative which could easily have run for many months before implementing and realising the expected outcome. But the team was on top of it and decided on the above important steps to ensure that we realised the value as quickly as possible. As the customer was very clear on what they wanted to achieve, it was easier for the team to finalise the requirements. But the expectation was at a broader level. Ensuring patient experience required us to touch pre-hospital, in-hospital & post-hospital care. The mobile channel was an easy target for covering all these care points and facilitating the required services seamlessly for the end users. The team came up with a well thought out plan for the implementation and release. The usage analytics and feedback collection channels were included upfront in the application. We started off with the pre-hospital experience, where it became easier for the users to search for doctors and book appointments. The reminders helped the users ensure that they did not miss appointments. Within in-hospital care, patient queues at different departments and reports from the labs were managed appropriately through mobile channels. Post-hospital care included medicine reminders, routine collection of health parameters, etc. 
These improved the overall experience of patients and also reduced the overhead of manually handling requests at hospital premises. The results were realised upfront, within a few months, as the rollout plan targeted each stage of care giving one by one. The users were most likely to use the mobile apps during the pre-hospital scenario, so it was targeted first. The other services were released as bonuses, and users were taught and encouraged to use them. The analytics and feedback helped the team fine-tune the application in an efficient way. The integrations with internal systems were carried out through well thought out interfaces, wherein the information or data is available for other insights-driven exercises as well. There is constant evaluation of the number of services availed through mobile channels vs manual channels. Targeted awareness campaigns are run to ensure more and more users use the mobile channels regularly. As a next step, we are deriving insights from the footprint data of services facilitated through mobile and are rolling out more value-added services to the users. One such service derived as an output of the Digital Transformation initiative is the roll-out of tele-consultations for cases where there are frequent consultation requirements which can be enabled through tele channels, including mobile. This specific case of a hospital group transforming itself into a new-generation, digitally-transformed healthcare service provider shows us that Digital Transformation is a continuous journey and will always help us stand out from the general crowd. We need to target the results one by one. The earlier we start, the better we will become!
How quickly will you realize the value of Digital Transformation?
0
how-quickly-will-you-realize-the-value-of-digital-transformation-1675467cf13
2018-07-04
2018-07-04 04:43:12
https://medium.com/s/story/how-quickly-will-you-realize-the-value-of-digital-transformation-1675467cf13
false
897
null
null
null
null
null
null
null
null
null
Digital Transformation
digital-transformation
Digital Transformation
13,217
Raja Sujith
Entrepreneur, Digital Transformation Enthusiast, AI, IoT, Media OTT | raja@attinadsoftware.com | https://www.linkedin.com/in/rajasujith/
ced71c647e2
rajasujith
1
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-09
2017-12-09 19:35:55
2017-12-22
2017-12-22 08:12:49
9
false
en
2017-12-22
2017-12-22 08:12:49
9
16762468c88d
4.471698
0
0
0
In the previous post, we installed and set up our environment for the Python language and also ran the code for a classification problem available at…
5
Head Start Machine Learning for Absolute Beginners Part-2 (Classification part-1) In the previous post, we installed and set up our environment for the Python language and also ran the code for a classification problem available at http://scikit-learn.org/stable/. What is Classification and why is it so important? According to Wikipedia, classification is a general process related to categorization, the process in which ideas and objects are recognized, differentiated, and understood. Before starting with the coding you should know about your problem statement and dataset. In the machine learning world, we can’t proceed without any knowledge of our dataset and problem set. What is the problem we are solving? The problem statement is the most important thing to figure out before starting any machine learning project. Either you can create your problem statement using a dataset, or you have the problem and you have to find the data for it. In our case we already have the problem statement as well as the dataset: given training data consisting of images and their corresponding digits, learn to classify new images among those characters. An application of this problem could be OCR. What data are we using and where did it come from? According to Wikipedia, the MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. It was created by “re-mixing” the samples from NIST’s original datasets. The MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST’s training dataset, while the other half of the training set and the other half of the test set were taken from NIST’s testing dataset. 
http://yann.lecun.com/exdb/mnist/ Now we know about our dataset, where it came from, and what it contains. We also know the problem that we are going to solve. So, let’s start with the code and the algorithm. Importing the scikit-learn package %matplotlib inline is an inline magic command in IPython. Loading the dataset using the scikit-learn datasets module We can load data from the scikit-learn datasets package, or download the dataset externally. Visualisation of the dataset, images and their corresponding labels We can see the data values by printing the variable digits. Flatten images into a single column Why do we have to reshape the images? Because our algorithm needs a vector representation of the pixels, but our image is in matrix form, so we have to change our data to match the algorithm’s input. Creating a classifier using the sklearn svm module SVM stands for Support Vector Machines, an algorithm developed by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1963. We are using this algorithm for classification. There are also many other algorithms for classification, but right now we are just using SVM to demonstrate how classification works. We will see some more classification algorithms and compare their results in future posts… so stay tuned! Fitting data to the classifier What does fitting data, or curve fitting, mean? According to Wikipedia, curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. In simple language, fitting the data means learning the knowledge present in the dataset. Now that we have trained our model, we are ready to see some test results. Predicting new data using the predict method available in scikit-learn In the variable expected, we will save the original labels of the data. In the variable predicted, we will save our model’s predicted labels. 
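The steps just described (load, flatten, fit, predict) can be put together as follows. This is a minimal sketch using scikit-learn’s bundled 8x8 digits dataset rather than the full MNIST download; the variable names (clf, expected, predicted) and the train/test split are illustrative choices, not the only way to do it.

```python
# Minimal sketch of the classification pipeline described above,
# using scikit-learn's bundled digits dataset.
from sklearn import datasets, svm

digits = datasets.load_digits()
n_samples = len(digits.images)

# Flatten each 8x8 image matrix into a 64-element feature vector,
# since the SVM expects one vector per sample.
data = digits.images.reshape((n_samples, -1))

# Create the classifier and fit it on the first half of the data.
clf = svm.SVC(gamma=0.001)
clf.fit(data[: n_samples // 2], digits.target[: n_samples // 2])

# expected holds the true labels of the held-out half;
# predicted holds the model's predictions for the same samples.
expected = digits.target[n_samples // 2 :]
predicted = clf.predict(data[n_samples // 2 :])
```

After this runs, expected and predicted can be passed straight to the classification report discussed next.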
Classification Report scikit-learn provides a classification report module to see the results and performance of our model. There are some basic terminologies that you should know. Precision: The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. Recall: The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. F1-score: The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and its worst at 0. The relative contributions of precision and recall to the F1 score are equal. The formula for the F1 score is: F1 = 2 * (precision * recall) / (precision + recall) Support: The support is the number of occurrences of each class in expected. Confusion Matrix: The confusion_matrix function evaluates classification accuracy by computing the confusion matrix. For more details please check out the scikit-learn documentation. Predicted results visualization We have used the Support Vector Machines algorithm for classification in this model. Now you have a good idea of how to make machine learning models using Python, and you must have some questions if you have gone through this post carefully. In our next post we will answer those questions, use some well-known algorithms, and compare their results. We’ll also have an in-depth discussion on model evaluation methods (precision and recall, the confusion matrix, etc.).
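As a quick sanity check on the definitions above, precision, recall, and F1 can be computed by hand from the tp/fp/fn counts. The counts below are made up purely for illustration:

```python
# Hand-computing precision, recall, and F1 for one class,
# using the formulas quoted above. The counts are invented for illustration.
tp, fp, fn = 45, 5, 3  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # 45 / 50 = 0.9
recall = tp / (tp + fn)     # 45 / 48 = 0.9375
f1 = 2 * (precision * recall) / (precision + recall)

print(f"precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
```

Note how F1 sits between precision and recall, pulled toward the lower of the two; this is exactly the behaviour the weighted-average interpretation describes.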
Head Start Machine Learning for Absolute Beginners Part-2 (Classification part-1)
0
head-start-machine-learning-for-absolute-beginners-part-2-classification-part-1-16762468c88d
2017-12-22
2017-12-22 08:12:50
https://medium.com/s/story/head-start-machine-learning-for-absolute-beginners-part-2-classification-part-1-16762468c88d
false
867
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Dev Societies
null
97a66f375470
devsocieties
6
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-14
2018-07-14 15:35:17
2018-07-14
2018-07-14 16:34:41
0
false
en
2018-08-10
2018-08-10 06:11:45
0
1677349cea50
3.430189
0
0
0
Well I thought: since we have already gone off on a 90 degree tangent in the last chapter perhaps it serves ‘whatever this literary piece…
5
Chapter 2: confusions of the seduced mind Well I thought: since we have already gone off on a 90 degree tangent in the last chapter, perhaps it serves ‘whatever this literary piece of work or delusion is’ to take matters to earlier stages of what began it, him or her. And to give a deeper, much more in depth introduction to the author, or is this being a third party? Is it the author who has so become attached to this being, whether it be a figment of his or her or its imagination, that the subject of distinguishing reality from delusion has itself become a delusion? But then a critic may say, well who was the child? What child? Did I hear you correctly? A picture of a child you said? Who? Suzy? Well frankly you tell me. Who’s Suzy? This then brings us to the topic of whether this writing, or combined pieces of writing, are the work of one or many or few individuals? Beings? Computers? Or by consensus amongst a community coming together for the betterment of society, as with the crypto currency world being currently lived in, where nodes work together to make decisions. Yet in that case I, him, her or we must be prudent to ensure the work is not hijacked, as there is always the chance of a 51 percent attack. So is it the former, the latter or the in-between matter? And if the latter, then we must be careful to ensure this community remains protected or, to take from Plato’s work in The Republic, or was it Socrates? Ahem, to paraphrase — everything within the community should be created, devised, revised and survived by consensus of the community. And yet to add to the further confusion surrounding that last matter, are those words not also famous in the world of cryptography, crypto and coding etc and their self proclaimed, admitted-by-consensus, new age open source God: Satoshi Nakamoto? Or as many say it is the CIA: well then the question is why a Japanese name? Why not Russian or Chinese or Muslim, who are the obvious terrorists of today, correct? 
Or is that a sign that Japan is next? And there is chatter in the background of a Muslim spoken of earlier. What is happening? What Muslim? What picture, and who is Suzy? Or Satoshi did you say, and what is the matter of discussion here anyway? Anything or everything? One or many? Any or none? Is any of this making any sense yet? Have philosophers and their progeny migrated east in pursuit of better land, or become capitalists on account of the genuine shift of power which seems to be occurring, referring back to Socrates and Japan again. Or was it Satoshi and philosophy? And hence this work continues: To begin, in today’s fruit bazaar (or bizarre) terms. Hang on, have we gone back to an introduction of some sort? What is going on here? Is this some sort of work of comedy? Science fiction? Philosophy? Business? Medical/material science? Wait, that was not mentioned, or is that what Suzy is? Or coding? Finance? Capital markets? All, one, none or a combination of some? And there we have some poetry too. The dizziness on this subject will linger or turn into a migraine shortly, so let’s get back to that bazaar. Brand — Apple: (forbidden fruit of the heavens that began this, what seems like, eternity of internal combat, yet, at the beginning of the end shall seem like a millisecond) Model — ‘MAC-2’ Version: [Modern acceptance of connection (Shi’a/Sunni) update no. 2 (2nd born child)] — who is this? There’s that Moslem, oh I see. What’s a Sunni and a Shia? Let’s let this thing talk a little more. Carry on, you confused mind, seduced mind or whatever it is they call you? Possessed? And here we have some fictional or comedy horror flick of sorts as well. Manufactured in: Pakistan. OH FOR GODS SAKE!!!!!!!!!!!!!! Now we have entered South Asia, says the reader, or is it the author? Wait, let’s not get back to that, we have been there. Continue, you: whatever you are, Jesus Christ!!!! Aghhhhhhhhh!!!! Now we have a Christian? 
This must really be a work of many, or someone with a multiple personality disorder, or a malfunctioning, virus-infected Artificial Being or hacked code of some sort. Just go, talk, write or whatever it is that is happening here before everyone leaves. Update 1.0: Karachi Update 1.1: London Update 1.2: Lahore Update 1.3: Boston Update 1.4: Lahore/Faisalabad Update 1.5: Dubai Regular System Checks: Location infinity Regular Hardware checks: Islamabad & Lahore Thesis (or whatever the new vocabularies of language call it) to date: I began this journey as the afore-typed notational slash sign. The product of two parents from opposing belief systems in terms of two separate individuals, but one set of complementing Muslim and, most importantly, human or ‘Real Intelligence’ beings. As of this writing I am currently in the time of the early emergence of ‘Artificial Intelligence’. The probability of my becoming obsolete in my current form is 100%. On an individualistic basis at the far ends of the metaphorical ruler that defines Islam as a collection. But at opposite ends as extremes of sect-oral religion; that is, comrades: to put it in simplistic, industriously defining moments, the beginning….
Chapter 2: confusions of the seduced mind
0
chapter-2-confusions-of-the-seduced-mind-1677349cea50
2018-08-10
2018-08-10 06:11:45
https://medium.com/s/story/chapter-2-confusions-of-the-seduced-mind-1677349cea50
false
909
null
null
null
null
null
null
null
null
null
Religion
religion
Religion
27,230
FReediuM
Un’/‘conventional finance - product of fm.idiom’s past and current lives.
7ff3ab22dd60
fm.idiom
21
47
20,181,104
null
null
null
null
null
null
0
null
0
92bc84a24ea5
2017-08-29
2017-08-29 09:14:26
2017-09-08
2017-09-08 12:14:58
3
false
en
2017-09-18
2017-09-18 17:04:34
24
16777404ce02
4.606604
207
11
0
The New Search for Hard Problems
5
Deeper Tech is Sexy, Again! The New Search for Hard Problems Paper: Who Finds Bill Gates Sexy? How Did We Get Here? At a very high level, the job of a founder primarily revolves around finding and solving a problem that can lead to a significant and sustainable business. In the startup world, the solution to that problem often involves the use of technology to enable a new product, process, business model, or a combination of the three. When building a product, defining a process or testing a business model, founders and investors face different degrees of uncertainty. That uncertainty or risk is determined by the state of the art. State of the art (sometimes cutting edge) refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time. It also refers to such a level of development reached at any particular time as a result of the common methodologies employed at the time — Wikipedia Today, we have a decent understanding of the technology risks involved with building products and processes for the web and mobile. We also have some (more-or-less) advanced frameworks to understand a number of business models, such as SaaS, marketplaces, etc. First Issue: Did We Get Complacent? “We wanted flying cars, instead we got 140 characters.”― Peter Thiel Over the last few years, the fast penetration of the internet and mobile into the consumer and enterprise markets has created incredibly powerful trends and opportunities, such as eCommerce, social media, marketplaces, mobile or SaaS. If you compare this wave of innovation with previous trends like semiconductors, operating systems or enterprise software, you will notice a pattern: the most recent generation of companies has taken more risks on the business models and customer acceptance dynamics, but fewer on the technology side of things. I wouldn’t dare to say that it’s easy to build those products — definitely, not at the scale of some of them. 
However, they tend to share similar technology stacks and architectures that are well understood and predictable to a certain degree (1, 2). In an environment as risky as the startup world — where statistically most companies fail — reducing technology risk sounds like a great idea! Then, at least in theory, the founder can focus only on finding the right opportunity, because it seems obvious that the solution can be built. However, the success of the recent trends also has a side effect: most investors — and possibly, as a result, most founders — are comfortable taking those risks, but they aren’t pushing for new opportunities that might have different ones. Second Issue: Insane Competition for the “Easy” Opportunities? Today, everybody agrees that it is easier than ever to found a new startup. Costs are plummeting, and the information and tools needed to iterate fast and efficiently are readily available. From a consumer point of view, that’s great! It means that more founders are trying to solve many more problems. But at the same time, that makes it a lot more challenging to build a large, sustainable business. Economic theory helps us explain that: a proliferation of new entrants can lead to perfect competition. And in the absence of other moats, everybody knows what can happen next: nobody makes any real money! But since it’s easier than ever before™ to start a company, it’s harmless for the new kid on the block to try it. Does the world need another Snapchat? Or another marketplace, on-demand company, food startup, peer to peer lending platform? Isn’t there a SaaS company in just about every segment now? — Matt Turck The Result: Frontier Tech Comeback Made-in-Germany flying cars are coming, and with $90m in their pockets :) If the “easier” things are too competitive, then what’s left? The hard things! Luckily, founders looking for hard business problems to solve will not be out of business any time soon. 
Many things are still very hard to do: On the one hand, even in SaaS or mobile, finding product-market fit with a minimal amount of resources and in the fastest time is still hard today! But after finding PMF, other topics such as marketing or sales are slightly easier thanks to the many playbooks and experienced people out there that you can hire. On the other hand, there’s a group of new, complex technologies in their early stages. These technologies are not so well understood and they carry a higher degree of uncertainty, but they have enormous potential to impact the world — and, to be honest, they also call to the inner geek in most of us! For those companies, there are no playbooks, and it’s very hard to group them into homogeneous categories because technologies and the state of the art evolve. What had a high degree of technology risk yesterday may be easy today. What was a “hard tech” startup yesterday may well be just a regular one tomorrow. But today, if we look at areas where the underlying technology has been “democratized” (so it’s accessible to more founders), four areas are commonly considered “frontier tech”: Artificial Intelligence software Cryptocurrencies and Blockchain IoT, Hardware and Drones Augmented and Virtual Reality There are also other areas in which very complex technology is evolving quickly towards its democratization. The following seven areas are among the candidates for the new frontiers of tech: Bioinformatics Space Energy Robotics Unmanned vehicles Farming Quantum computing It’s also worth mentioning that developer tools and infrastructure, security software, and fintech — not financial services — are not considered frontier tech. But they usually (1) require a high degree of specialisation and (2) tend to be built on the latest technology advancements. A Frontier Tech Investment Thesis at Point Nine Capital At Point Nine Capital we are mostly known for our focus on SaaS and marketplaces. 
At the same time, we have also been privileged to back some companies that are building “hard” technology, including the following: Artificial Intelligence software — candis.io, kreditech.com, remerge.io Cryptocurrencies and Blockchain — bitbond.com, chainalysis.com IoT, Hardware and Drones — airstoc.com, automile.com, getkisi.com Developer tools and Infrastructure — algolia.com, contentful.com, sqreen.io Christoph Janz recently published a sneak peek of our investment thesis. Pawel Chudzinski will follow up with more details on our thinking around marketplaces, and during the next few months I plan to do the same around our focus in frontier tech. Please stay tuned :) Did you like the post? Please let me know by clicking on the heart below ♡ or contact me at @DecodingVC.
Deeper Tech is Sexy, Again!
1,053
deeper-tech-is-sexy-again-16777404ce02
2018-05-16
2018-05-16 02:13:05
https://medium.com/s/story/deeper-tech-is-sexy-again-16777404ce02
false
1,075
Stories from the Point9 team & portfolio companies
null
PointNineCap
null
Point Nine Land
info@pointninecap.com
point-nine-news
VC,STARTUPS,SAAS
PointNineCap
Frontier Tech
frontier-tech
Frontier Tech
107
Rodrigo Martinez
Investor in Startups @PointNineCap - Hobbyist developer - Spaniard abroad
41c210e282a2
decodingVC
4,286
13,129
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-01
2018-03-01 17:36:17
2018-03-01
2018-03-01 17:38:47
1
false
en
2018-03-01
2018-03-01 17:38:47
5
16793003bdc0
1.720755
0
0
0
Driverless cars have been in the works for years, and it is likely that they will start to see commercial use in the next year or two. The…
5
Take Me Home: The Road To Driverless Cars Driverless cars have been in the works for years, and it is likely that they will start to see commercial use in the next year or two. The development process has been a long and winding road, but it is finally time for humanity to look back and see how far it has come in the past few years. Where It Began The hunt for driverless cars began almost a century ago, in 1925. Houdina Radio Control created a car that could be steered via a remote control, and demonstrated it to the public. It wasn’t truly autonomous, but it inspired several decades of attempted improvements. A new method appeared in the middle of the 20th century, when RCA engineers found a way to steer a car with wires that were laid into the floor. The developers thought they would become universal within a few decades, but the technology never caught on. The American military drove the next string of innovations, with the DARPA Grand Challenge. The goal was to create driverless vehicles that could operate in a variety of environments, including urban ones, for military use. Private industry has since taken over most of the research and based many of its own designs on those that first came from DARPA’s efforts. Those designs are finally starting to hit the streets. The Future The era of the driverless car is approaching, but it will still be a few years before private citizens start to purchase them. Plans for their deployment are already in place, so people can form accurate expectations for the future. The companies that produce these cars are already seeking government approval to deploy them in commercial fleets. Most people will have their first encounter with a driverless car in a taxi service, although it is likely that they will also see use as delivery vehicles. Companies will be able to use the data that they get from these deployments to further refine their designs. 
Autonomous driving technology will start to become more common in the years after that first deployment. Driver assistance systems that take over the controls under specific circumstances will likely be the first step for private drivers. In time, fully autonomous cars will become available to individuals, after they have been proven to be safe and effective in other roles. This blog was originally published on RossPamphilon.net.
Take Me Home: The Road To Driverless Cars
0
take-me-home-the-road-to-driverless-cars-16793003bdc0
2018-03-01
2018-03-01 17:38:47
https://medium.com/s/story/take-me-home-the-road-to-driverless-cars-16793003bdc0
false
403
null
null
null
null
null
null
null
null
null
Autonomous Cars
autonomous-cars
Autonomous Cars
4,703
Ross Pamphilon
Ross Pamphilon is the Chief Investment Officer of the ECM Asset Management. http://rosspamphilon.net/
6d5e810878c
rosspamp1
2
27
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-14
2018-08-14 03:28:53
2018-08-14
2018-08-14 03:34:51
0
false
en
2018-08-14
2018-08-14 03:34:51
4
167a19f09c45
0.30566
0
0
0
Machine Learning
3
Online Course List Machine Learning Google Machine Learning Crash Course Machine Learning Crash Course | Google Developers An intensive, practical 20-hour introduction to machine learning fundamentals, with companion TensorFlow exercises.developers.google.com NTU (Hsuan-Tien Lin) Machine Learning Foundations, Part 1 — Mathematical Foundations https://www.coursera.org/learn/ntumlone-mathematicalfoundations Service Management NTU & Cathay Financial Holdings: The Experience, Design and Innovation of Service Models: From Pain Points to Selling Points https://www.coursera.org/learn/service-models Python Programming Programming for Everybody Programming for Everybody (Getting Started with Python) | Coursera Programming for Everybody (Getting Started with Python) from University of Michigan. This course aims to teach everyone…www.coursera.org Deep Learning
Online Course List
0
online-course-list-167a19f09c45
2018-08-14
2018-08-14 03:34:52
https://medium.com/s/story/online-course-list-167a19f09c45
false
81
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
HsinYao Liao
null
5f191177a522
justin4793
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-14
2018-07-14 18:21:07
2018-07-17
2018-07-17 05:16:53
3
false
en
2018-07-17
2018-07-17 05:16:53
7
167aeac3f657
5.772642
2
0
0
In my previous article, I set out to describe the machine learning cycle of chatbots in two parts. In this post I’ll expand on my previous…
3
The Machine Learning Cycle in Support Chatbots (Part 2 of 2) In my previous article, I set out to describe the machine learning cycle of chatbots in two parts. In this post I’ll expand on my previous explanations, so make sure you have checked those out before. Let’s begin with a quick recap: In Part 1, we saw how the flow of a support chatbot is generated: Either based on rules or dynamically. In Part 2, we’ll dive into how the flow is changed. To use an example from physics: If Part 1 was about speed, Part 2 is about acceleration (the changing of the speed). Untangling the Confusion Matrix In machine learning, we often use what’s called a confusion matrix to understand the performance of an algorithm. Wikipedia has a great article about it. Rather than repeating it here, I’m going to apply the findings to chatbots, which will help us assess them. True Positive (TP): This is a prediction by the chatbot that matches the user’s true problem. In other words: The bot figured out what was wrong and found a correct solution. If you want to know the status of your order and a bot provides you exactly with it, you’re seeing a True Positive. False Positive (FP): This is a wrong solution provided to a user. If you want to know the status of your order and the bot shows you how to change the delivery address, you’re seeing a False Positive. False Negative (FN): Now it gets a bit more tricky: This term describes what happens when a bot has been trained to provide a problem’s solution but doesn’t provide one. If you want to know the status of your order and the bot stops the conversation to hand it over to a human, you’re seeing a False Negative. True Negative (TN): This is the case if the bot hasn’t been trained on the user’s solution and thus correctly hands the conversation over to a human. If you have a very complex problem with an order and the bot determines that human help is required and hands over the conversation, you’re seeing a True Negative. 
I like to explain the confusion matrix with the example of a hunter: A True Positive happens when a hunter sees a deer, shoots and hits the deer. A False Positive happens when a hunter sees a tree, shoots and hits the tree. A False Negative happens when a hunter sees a deer and doesn’t shoot. A True Negative happens when a hunter sees a tree and doesn’t shoot. Two Ways of Extending the Training Data Let’s take a look at our model from the first article: As you can see, the Trained Model relies on Training Data to make its predictions. (If you are curious to learn more about Training Data and Training a Model, I can strongly recommend this series of posts from Adam Geitgey). The more Training Data you have, the better the model should become. This is where the True Positives come in. They provide the real-world samples that allow us to improve the Trained Model. There are two ways to do it: Automatically and manually. The Elegance of Automatic Learning (Disclaimer: I don’t know if Erwin can do what I’m writing here, this is just an illustration) For machines to get better than humans, they need to learn automatically. In our case, that means a chatbot needs to add new Data Points to the Training Data in order to improve the Trained Model. Let’s look at this in the context of an example from Part 1. There I showed you a riddle bot that asked me “What fruit can you spell by using only A, N, and B a number of times?”. I correctly solved it by guessing “Banana”. If the bot counted all games played, it could save “solved-correctly” as a Data Point and start learning from it: For instance, the bot could calculate the “solved-correctly” ratio for each riddle and then optimize its questions by providing riddles that are neither too easy (=100% correct) nor too hard (=0% right). With the information gained from users like me, it would always provide the best riddles (e.g. 90% guessed right). This would mean that the flow would change over time. 
And it would do so automatically, which means without human intervention! The graphic below illustrates the above. Each bot can automatically learn different things — with the algorithmic constraints that define the scope of flow change. Just to name another example: Our virtual agents at Solvemate learn “solution popularity” from True Positives. The more popular a solution is, the more likely it is going to be automatically proposed. I believe that automatic learning is the supreme discipline of chatbots. It has the potential to meaningfully improve this exciting technology over time — but it’s also super tricky. Things can go wrong, as we’ve seen with the Twitter Bot Tay from Microsoft. The Difficulty of Manual Learning Let’s assume the bot did not figure out the true customer request. This means we’re either dealing with a False Positive, a False Negative, or a True Negative. Automatic learning is much, much harder in these cases. Just imagine that you are in a completely dark room, trying to shoot a basketball through a hoop. Learning from True Positives means that you learn where the basket is after you’ve at least touched it — it gives you a lot of information. Not touching the basket (=True Negative) also gives you some information. It means the basket isn’t where you reached, but still doesn’t tell you where the true location is. Returning to chatbots: Automatic learning from not solving a customer request is possible, but much, much harder, which is why these cases usually require human review to add a Data Point to the Training Data. Let’s assume a user wants to check their order status but the bot didn’t have a solution. In this case, an AI Trainer should manually add “get order status” as a Data Point to the Training Data and provide some wording around it. AI Trainers review conversations and tell the bot what it should have responded. It’s like teaching a child how to behave. 
Solvemate’s virtual agents do this similarly: For all patterns where solutions were wrong, the AI trainer needs to decide whether they should… add a solution (because of a True Negative) change a solution that was wrongly suggested (because of a False Positive) add knowledge to make the solution better (because of a False Negative) ignore the New Data Point (because the request wasn’t serious) It boils down to the decision of: Adding a (True Positive) Data Point Not adding a Data Point (=send it to the Data Graveyard) The illustration below shows that. Manually adding Data Points to the Training Data is a normal process in chatbots and happens quite frequently. Just keep in mind that a manual training effort can be a significant cost driver. I have written more about this in our Chatbot ROI Calculator. Two Improvement Cycles In summary, we have… an automatic improvement cycle where the usage of the bot leads to New Data Points that change the Training Data that can change the Trained Model, which will potentially change its Responses. Quite fancy, isn’t it? a manual improvement cycle where usage of the bot is reviewed by humans that decide if they want to add New Data Points to the Training Data, that will change the Trained Model and potentially change the Responses. Combining the Insights If you’re in conversation with a chatbot vendor and apply the chatbot taxonomy, you can now dive very deeply: Ask them specifically how their bot’s flow is dynamic and how they train a Trained Model or Training Data. Now you also have more background to understand the change of the flow. Ask the vendor which Data Points are processed and how they affect an automatic improvement cycle — unless there are no automatic improvements. One Last Thought It is totally ok not to have a dynamic flow or an automatic improvement cycle in chatbots. Not every use case needs the fanciest, most dynamic, self-improving algorithms. 
Just understand that customer support is typically too complex and dynamic to be automated with rule-based, static bots.
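The four chatbot outcomes described in this article can be tallied with a short sketch. This is a deliberately simplified illustration, not Solvemate’s actual code: each logged conversation is reduced to two hypothetical booleans (did the bot offer a solution, and was the user’s problem covered by its training), and offering a solution for a covered problem is assumed to be correct.

```python
# Simplified sketch: sorting logged support conversations into
# confusion-matrix buckets. Field semantics are hypothetical.
from collections import Counter

def classify_outcome(bot_offered_solution, problem_in_training):
    """Map one conversation to a confusion-matrix cell.

    bot_offered_solution: the bot answered instead of handing over to a human.
    problem_in_training:  the user's true problem was covered by the training data.
    """
    if bot_offered_solution and problem_in_training:
        return "TP"  # bot answered, and the answer matched the true problem
    if bot_offered_solution and not problem_in_training:
        return "FP"  # bot answered, but with a wrong solution
    if not bot_offered_solution and problem_in_training:
        return "FN"  # bot had a trained solution but handed over anyway
    return "TN"      # bot correctly handed over an untrained problem

conversations = [
    (True, True), (True, False), (False, True), (False, False), (True, True),
]
counts = Counter(classify_outcome(*c) for c in conversations)
print(counts)  # Counter({'TP': 2, 'FP': 1, 'FN': 1, 'TN': 1})
```

Counting outcomes this way is the precondition for both improvement cycles: the TP bucket can feed automatic learning, while the FP, FN and TN buckets land in the review queue for an AI Trainer.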
The Machine Learning Cycle in Support Chatbots (Part 2 of 2)
9
the-machine-learning-cycle-in-support-chatbots-part-2-of-2-167aeac3f657
2018-07-27
2018-07-27 15:16:49
https://medium.com/s/story/the-machine-learning-cycle-in-support-chatbots-part-2-of-2-167aeac3f657
false
1,384
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Erik Pfannmöller
CEO @ Solvemate. Passionate about AI, computers and software. Like structure and efficiency. Nerdy on details. Love keyboard shortcuts. Chasing a big vision.
a32b6ada9b2d
epfannmoeller
35
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-17
2018-05-17 14:00:43
2018-05-17
2018-05-17 14:01:07
0
false
en
2018-05-17
2018-05-17 14:01:07
1
167c59bf0657
0.283019
0
0
0
AI and ML are the most important pillars of software testing. AI works on the concept of providing an output for a specific task, where ML is…
3
The strategy of AI and ML to Transforming and make better Software Testing AI and ML are the most important pillars of software testing. AI works on the concept of providing an output for a specific task, while ML is used to let a machine take decisions. Software testing as a whole is increasingly based on AI and ML, and here we have some helpful information on improving software testing through AI and ML.
The strategy of AI and ML to Transforming and make better Software Testing
0
the-strategy-of-ai-and-ml-to-transforming-and-make-better-software-testing-167c59bf0657
2018-05-17
2018-05-17 14:01:08
https://medium.com/s/story/the-strategy-of-ai-and-ml-to-transforming-and-make-better-software-testing-167c59bf0657
false
75
null
null
null
null
null
null
null
null
null
Software Testing Services
software-testing-services
Software Testing Services
207
NexSoftSys
Technology Consulting Firm for Customized #Offshore #Software & Mobile #Apps #Development for Healthcare, Telecommunication and Banking System.
e2bc0f6834bf
nexsoftsys
30
229
20,181,104
null
null
null
null
null
null
0
null
0
1f35b6f451e8
2018-04-04
2018-04-04 08:52:59
2018-04-04
2018-04-04 08:54:14
1
false
en
2018-04-04
2018-04-04 13:50:21
7
167e15477d9a
0.883019
0
0
0
null
5
Diary of a dummy curating ressources to make it in Machine Learning 7.3 Linux Certif - Documentation for the 'mv' keyword Documentation of 'mv' - The mv command (for move) lets you move files, or rename a file. To…www.linuxcertif.com 1.1. Generalized Linear Models - scikit-learn 0.19.1 documentation The is a linear model that estimates sparse coefficients. It is useful in some contexts due to its tendency to prefer…scikit-learn.org Git - Downloading Package You are downloading the latest ( 2.16.2) 32-bit version of Git for Windows. This is the most recent maintained build…git-scm.com GitHub Desktop | Simple collaboration from your desktop Extend your GitHub workflow beyond your browser with GitHub Desktop, completely redesigned with Electron. Get a unified…desktop.github.com GitX (L) This is my own version of GitX and it meets my requirements for daily Git use on MacOSX.gitx.laullon.com Try Git Learn how to use Git with Code School's interactive course, Try Git.try.github.io Explain Git with D3 We are going to skip instructing you on how to add your files for commit in this explanation. Let's assume you already…onlywei.github.io
Diary of a dummy curating ressources to make it in Machine Learning 7.3
0
diary-of-a-dummy-curating-ressources-to-make-it-in-machine-learning-167e15477d9a
2018-04-04
2018-04-04 13:50:22
https://medium.com/s/story/diary-of-a-dummy-curating-ressources-to-make-it-in-machine-learning-167e15477d9a
false
181
We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE.
null
ethercourt
null
Ethercourt Machine Learning
adoucoure@dr.com
ethercourt
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
ethercourt
Machine Learning
machine-learning
Machine Learning
51,320
WELTARE Strategies
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
9fad63202573
WELTAREStrategies
196
209
20,181,104
null
null
null
null
null
null
0
null
0
183315e870da
2018-03-18
2018-03-18 18:40:28
2018-03-18
2018-03-18 18:43:58
1
false
en
2018-06-24
2018-06-24 22:24:14
7
167f4b7b7270
2.422642
1
1
0
Since its early days in the east coast of the Mediterranean, more specifically in the coast of what is now Turkey, philosophy has had to…
5
Artificial Intelligence and the Renewed Relevance of Philosophy Since its early days on the east coast of the Mediterranean, more specifically on the coast of what is now Turkey, philosophy has had to face the accusation of uselessness; time and time again philosophers have had to defend themselves against the charge of contributing nothing to society as a whole. And from Thales of Miletus’ story of falling into a well while looking at the stars, anecdotes of clumsy philosophers have never been missing and almost constitute a whole literary genre within the history of philosophy. But despite this usual representation, philosophy has been a key factor in the shaping of history. Not only in the sense that philosophers have had an active role at important moments (we just have to think of Brutus and Cassius plotting against Caesar, John Locke participating in the foundation of the English Whig party, or John Stuart Mill as an administrative officer of the British East India Company), but also in the sense that the works of philosophers have actually shaped the ideas of Western society. People tend to forget it, but the truth is that what we now understand as scientific research, economics and even politics, just to name the first things that come to mind, is the end point of a process of thinking that was started by philosophers. And all current signs are pointing to a time in which philosophy will again prove crucial in the shaping of public opinion. The developments in Artificial Intelligence are of such magnitude and are taking place at such a rapid pace that they are pushing for a reassessment of fundamental concepts, and it is the philosopher, and his fellow humanists, who are in a better position to make, or at least to initiate, this process of reassessment. All the tech companies working in the field seem to understand this. 
They are hiring playwrights, poets and even comedians to help them improve their AI-powered personal assistants and they have been courting philosophers to help them understand the nature of these inventions. It is in connection with these developments that the famous billionaire and technology entrepreneur Mark Cuban has been claiming of late that, in the near future, a philosophy degree will be more valuable than one in computer science or engineering: I’m going to make a prediction, in 10 years, a liberal arts degree in philosophy will be worth more than a traditional programming degree… What is happening now with artificial intelligence is we’ll start automating automation. Artificial intelligence won’t need you or I to do it, it will be able to figure out itself how to automate [tasks] over the next 10 to 15 years. Now the hard part isn’t whether or not it will change the nature of the workforce — it will. The question is, over the period of time that it happens, who will be displaced? Cuban views the job market as something that will change radically in the years to come and even claims that the jobs that are more lucrative now (accounting and computer programming, for instance) will be subject to the powers of automation. To remain competitive, he advises going for degrees that teach you to think in a big picture way, like philosophy: Knowing how to critically think and assess them from a global perspective, I think, is going to be more valuable than what we see as exciting careers today which might be programming or CPA or those types of things. Originally published on aleteia.org
Artificial Intelligence and the Renewed Relevance of Philosophy
1
artificial-intelligence-and-the-renewed-relevance-of-philosophy-167f4b7b7270
2018-06-24
2018-06-24 22:24:14
https://medium.com/s/story/artificial-intelligence-and-the-renewed-relevance-of-philosophy-167f4b7b7270
false
589
The Blog of Nikola Krestonosich Celis
null
null
null
ink & bits
nikolakrestonosich@gmail.com
ink-bits
ACADEMIA,PHILOSOPHY,LITERATURE,POETRY,VENEZUELA
mrnkc
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Nikola Krestonosich Celis
Author and philosophy teacher | PhD candidate at @LeuvenU | Spanish coach at @fluenz
9b75d530f1b1
MrNKC
97
98
20,181,104
null
null
null
null
null
null
0
null
0
71d1e3c2dc47
2017-10-18
2017-10-18 13:06:59
2017-10-18
2017-10-18 13:56:06
1
false
en
2017-10-20
2017-10-20 13:22:38
2
1683cbb76683
4.958491
29
0
0
A few weeks ago, I wrote a post about the data science workflow we use at Apteo. As you can tell, the majority of our efforts are spent on…
5
Moving Fast, Productionizing Data Science, and Breaking Things A few weeks ago, I wrote a post about the data science workflow we use at Apteo. As you can tell, the majority of our efforts are spent on things that help us figure out our data, build our models, and get an ad-hoc data collection pipeline going. And that’s where our efforts should be for a new data science task. But this post isn’t about any of that. It’s about the last step. That last step, which I admittedly glossed over in the original post, is one of the more important ones when it comes to actually running a sustainable business, because it’s the one that’s the most “user-facing” (even if your users are the members of your internal product or data science team). In order to actually put any of your hard-won data science feats into practice, you need to have a system that supports its use. Recently, Carlos Perez wrote a fantastic article about how Google and Uber handle their productionized machine learning systems. These are amazing systems, but they’re much harder for a small startup to build and put into place. We’re in the process of productionizing our predictive infrastructure now as quickly as we can, and as expected, we’ve learned a few things. Specifically, we’ve moved fast, implemented a lot of new things, broken a lot of things along the way, and are now working on getting rid of that last bit of tech debt before we jump into our next big project. Timeline Before I jump into some of the problems we’re working through now, it helps to understand where we’ve come from. To date, we’ve been spending most of our efforts building and researching models that will help us understand stock price movements based on fundamental analysis techniques used by equities analysts. We’ve been able to build an initial prototype of that model, and we’re now in the process of using it to make forward predictions, and take actions based on those predictions. 
In order to do so, we needed to implement a process of continuous and systemized data collection, alerts on errors in the data collection pipeline, continuous and systemized model training, and alerts in the model training pipeline. Those sound straightforward, but there’s some complexity to these processes that isn’t immediately obvious. Collecting Up-to-Date Data Ain’t Easy In order to make predictions using our previously trained models, we need up-to-date data that reflects the current state of the world. Getting this data on a one-off basis for training our models was relatively easy, even if we needed to collect a lot of data at once. Getting this data for live predictions hasn’t been as straightforward. Essentially, our process of collecting data requires us to continuously gather data from a variety of sources on a periodic basis. Doing so required us to write code that could spawn instances on AWS, run scheduled apps that could interface with multiple different providers to gather data, and then store that data in a usable format in our systems. Logging It’s easy to know if our apps ran properly overnight — all we’d have to do is wake up and see if we had new data. But what happens when that data isn’t there? Presumably, the instance that ran our app already terminated in order to save money. But maybe our app crashed, or maybe our data providers weren’t responsive. We quickly learned that in order to debug data collection issues, we needed to implement a more robust and sophisticated method to collect and consolidate our logs. In order to do that, we’ve had to add additional code that can upload our logs to a centralized repository for later perusal. At some point, it would be great to get tools like Logstash and Elasticsearch up and running, but with a smaller team whose main focus is on the development of heavy duty models for application to finance, that’s more of a nice-to-have. 
Alerting In addition to basic log management, we’ve also had to develop a method for alerting us when things go wrong. Not sure if you’ve ever worked with alerts before, but they suck. They really suck. Before Apteo, I had to deal with PagerDuty alerts for years. I never got sleep when I was on call. Fortunately we haven’t had to resort to anything that heavy duty quite yet. But we have had to implement a system that emails us whenever something doesn’t go as expected, like whether we don’t have new data or whether a job didn’t run. Alerting systems themselves require maintenance and sometimes your alerter needs to have alerts as well, so this is a bit of a catch-22, but having some alerting is better than being completely in the dark. Integrating With Providers Usually, integrations with data providers aren’t too bad. You do it once and as long as nothing changes on their end, you’re good to go. The problem is that things do change on their end. Which means you need to periodically test your integrations and make sure that what you’re getting back from providers is actually what you expect. And when it’s not what you expect, you need to fix it. Unfortunately this is just a fact of life. The best providers will publish documentation about what has changed. That’s when you can count yourself lucky. Running Live Apps vs. Training and Backtesting Apps Collecting and monitoring data efficiently isn’t the only problem we’ve run into. Sometimes, a scheduled job, be it a data collection job, a training job, or a prediction job, doesn’t behave as it should. The problem may be that the data that’s needed to predict isn’t available, or that it’s available, but it’s just weird, and the output you get from one run to another is different, or that the way it performs on live data is different than what it did on historical data. 
There are a lot of weird issues here, and debugging them takes a lot more time than when you’re working with historical data, because you need to understand the new data you’re working with, check your log files to see what errors cropped up, fix the issue, add a unit test to make sure you’ve fixed the issue (you are using unit tests, right?), and then re-run the app, which itself could take days. So the feedback cycle in live apps is just longer, which means you need to manage the risk of your live apps eating up all your developer time. As I mentioned above, we do that by extensively developing and relying on unit tests (and integration tests and functional tests), and relying on a continuous integration system to make sure nothing fails when new code is being added to our main development trunk. Wrapping Up You’ll frequently hear data scientists say that 80% of their job is data munging and cleaning. Well, I’ll confidently make the claim that 80% of the job of the management team of a data project is deploying and maintaining production systems. OK, maybe not that much, maybe 75%, but it’s high. We’ve been thinking about open sourcing some of the stuff we use to manage all of our systems, because we’ve found there hasn’t been a great fit with what’s out there and what we’ve needed. If anyone’s interested, please let us know. Otherwise, I’d love to hear your comments about how your company as a small startup handles these issues.
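As a minimal illustration of the alerting idea described in this article (a sketch under assumed names, not Apteo’s actual pipeline code), a scheduled job can be wrapped so that any failure is logged and forwarded to an alert channel such as email:

```python
# Minimal sketch: wrap a scheduled job so failures are logged and alerted.
# The job and alert channel below are hypothetical stand-ins.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_alert(job, alert, *args, **kwargs):
    """Run a job; on any exception, log the error and fire the alert callback."""
    try:
        result = job(*args, **kwargs)
        log.info("job %s succeeded", job.__name__)
        return result
    except Exception as exc:
        log.error("job %s failed: %s", job.__name__, exc)
        alert(f"{job.__name__} failed: {exc}")
        return None

alerts = []  # stand-in for an email alert channel

def collect_prices():
    # hypothetical data-collection job hitting an unresponsive provider
    raise RuntimeError("provider returned no data")

run_with_alert(collect_prices, alerts.append)
print(alerts)  # ['collect_prices failed: provider returned no data']
```

In production the alert callback would send an email or page someone; here a plain list stands in for it so the behavior is easy to test.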
Moving Fast, Productionizing Data Science, and Breaking Things
219
moving-fast-productionizing-data-science-and-breaking-things-1683cbb76683
2018-05-02
2018-05-02 05:42:03
https://medium.com/s/story/moving-fast-productionizing-data-science-and-breaking-things-1683cbb76683
false
1,261
The official publication for Apteo. Follow us to get insights on how we’re using AI to improve investing.
null
apteoai
null
Apteo
info@apteo.co
apteo
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,INVESTING,TECHNOLOGY,FINTECH
apteoai
Data Science
data-science
Data Science
33,617
Shanif Dhanani
Co-founder & CEO of Apteo: We build AI tools to improve investing. Come join us!
9273f4759898
shanif
777
201
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-30
2018-09-30 05:43:44
2018-09-30
2018-09-30 20:19:22
7
false
en
2018-09-30
2018-09-30 20:19:22
4
1685fab5aada
4.789623
0
0
0
What is a Confusion Matrix ?
4
Confusion Matrix No More Confusing What is a Confusion Matrix? A confusion matrix is a matrix (table) that can be used to measure the performance of a machine learning algorithm, usually a supervised learning one. A confusion matrix is a technique for summarizing the performance of a classification algorithm. The number of correct and incorrect predictions are summarized with count values and broken down by each class. This is the key to the confusion matrix. “ The confusion matrix shows the ways in which the machine learning model is confused when it makes predictions ” It gives insight not only into the errors being made by the machine learning model but, more importantly, into the types of errors that are being made. Calculating a confusion matrix can give you a better idea of what your classification model is getting right and what types of errors it is making. Let’s start with an example confusion matrix for a binary classifier Given a machine learning model (Binary classifier) and an instance, there are four possible outcomes. 1. If the instance is positive and if it is classified as positive, it is counted as a true positive. 2. If the instance is positive and if it is classified as negative, it is counted as a false negative. 3. If the instance is negative and if it is classified as negative, it is counted as a true negative. 4. If the instance is negative and if it is classified as positive, it is counted as a false positive. Let’s now define the most basic terms and definitions: I will explain basic terms and definitions involved in the Confusion Matrix by taking the example of a binary classifier. The task of the Binary Classifier is to predict the presence of a disease. There are two possible predicted classes: “yes” and “no” for the Binary classifier. For example, “yes” would mean they have the disease, and “no” would mean they don’t have the disease. 1) True Positives (TP): A true positive is an outcome where the model correctly predicts the positive class. 
All samples that were identified as positive labels and were truly positive. These are cases in which we predicted yes (they have the disease), and they do have the disease. 2) True Negatives (TN): A true negative is an outcome where the model correctly predicts the negative class. All samples that were identified as negative labels and were truly negative. We predicted no, and they don’t have the disease. 3) False Positives (FP): A false positive is an outcome where the model incorrectly predicts the positive class. All samples that were identified as positive labels and were in fact negative. We predicted yes, but they don’t actually have the disease. (Also known as a “Type I error.”) 4) False negatives (FN): A false negative is an outcome where the model incorrectly predicts the negative class. All samples that were identified as negative labels and were in fact positive. We predicted no, but they actually do have the disease. (Also known as a “Type II error.”) The list of rates and measures that are often computed from a confusion matrix for a binary classifier: Accuracy: Accuracy is one metric for evaluating classification models. It tell us how often is the classifier correct. Informally, accuracy is the fraction of predictions our model got right. Formally, accuracy has the following definition: For binary classification, accuracy can also be calculated in terms of positives and negatives as follows: Precision: Precision is the ability of a classification model to return only relevant instances. Precision talks about how precise/accurate your model is out of those predicted positive, how many of them are actual positive. When the model predicts yes, how often it’s correct is coined as Precision. Precision is a good measure to determine, when the costs of False Positive is high. For instance, email spam detection. In email spam detection, a false positive means that an email that is non-spam (actual negative) has been identified as spam (predicted spam). 
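The four counts and the two metrics so far can be computed directly in plain Python; the label lists below are hypothetical illustration data, not from a real dataset:

```python
# Hand-counting TP, TN, FP, FN for a binary disease classifier.
# 1 = has the disease (positive), 0 = healthy (negative).
actual    = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)

print(tp, tn, fp, fn)       # the four cells of the 2x2 matrix
print(accuracy, precision)
```

For these made-up labels the model gets 3 true positives, 4 true negatives, 1 false positive, and 2 false negatives, giving an accuracy of 0.7 and a precision of 0.75.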
With a low-precision spam filter, the email user might lose important emails to the spam folder.

Recall: Recall is the number of true positives divided by the number of true positives plus the number of false negatives. True positives are data points classified as positive by the model that actually are positive (correct), and false negatives are data points the model identifies as negative that actually are positive (incorrect). Recall is also known as "Sensitivity" or the True Positive Rate: when it's actually yes, how often does the model predict yes? It measures how many of the actual positives the model captures by labeling them as positive:

Recall = TP / (TP + FN)

Recall is the metric to use when there is a high cost associated with a false negative, for instance in fraud detection or sick-patient detection. If a fraudulent transaction (actual positive) is predicted as non-fraudulent (predicted negative), the consequences can be very bad for the bank.

Specificity: Specificity is the proportion of truly negative cases that were classified as negative; it is a measure of how well the classifier identifies negative cases. It is also known as the True Negative Rate: when it's actually no, how often does the model predict no? It is equivalent to 1 minus the False Positive Rate:

Specificity = TN / (TN + FP)

References:
- Simple guide to confusion matrix terminology (www.dataschool.io)
- Visualizing the Confusion Matrix (www.sanyamkapoor.com)
- What is a Confusion Matrix in Machine Learning (machinelearningmastery.com)
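Putting all the definitions together, here is a minimal plain-Python helper (the sample labels are again hypothetical) that returns the 2x2 matrix and the four measures discussed above:

```python
# Build the 2x2 confusion matrix and the derived rates for binary labels.
def confusion_report(actual, predicted):
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return {
        "matrix": [[tn, fp], [fn, tp]],    # rows: actual no / actual yes
        "accuracy": (tp + tn) / len(actual),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),     # true negative rate
    }

report = confusion_report([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(report)
```

Note the division-by-zero caveat: if the model never predicts positive, TP + FP is zero and precision is undefined; a production implementation would guard those divisions.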
Confusion Matrix No More Confusing
0
confusion-matrix-no-more-confusing-1685fab5aada
2018-09-30
2018-09-30 20:19:22
https://medium.com/s/story/confusion-matrix-no-more-confusing-1685fab5aada
false
991
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Bharat kumar Mallela
null
af13039b0e65
data.doctor432
0
1
20,181,104
null
null
null
null
null
null