audioVersionDurationSec
float64
0
3.27k
codeBlock
stringlengths
3
77.5k
codeBlockCount
float64
0
389
collectionId
stringlengths
9
12
createdDate
stringclasses
741 values
createdDatetime
stringlengths
19
19
firstPublishedDate
stringclasses
610 values
firstPublishedDatetime
stringlengths
19
19
imageCount
float64
0
263
isSubscriptionLocked
bool
2 classes
language
stringclasses
52 values
latestPublishedDate
stringclasses
577 values
latestPublishedDatetime
stringlengths
19
19
linksCount
float64
0
1.18k
postId
stringlengths
8
12
readingTime
float64
0
99.6
recommends
float64
0
42.3k
responsesCreatedCount
float64
0
3.08k
socialRecommendsCount
float64
0
3
subTitle
stringlengths
1
141
tagsCount
float64
1
6
text
stringlengths
1
145k
title
stringlengths
1
200
totalClapCount
float64
0
292k
uniqueSlug
stringlengths
12
119
updatedDate
stringclasses
431 values
updatedDatetime
stringlengths
19
19
url
stringlengths
32
829
vote
bool
2 classes
wordCount
float64
0
25k
publicationdescription
stringlengths
1
280
publicationdomain
stringlengths
6
35
publicationfacebookPageName
stringlengths
2
46
publicationfollowerCount
float64
publicationname
stringlengths
4
139
publicationpublicEmail
stringlengths
8
47
publicationslug
stringlengths
3
50
publicationtags
stringlengths
2
116
publicationtwitterUsername
stringlengths
1
15
tag_name
stringlengths
1
25
slug
stringlengths
1
25
name
stringlengths
1
25
postCount
float64
0
332k
author
stringlengths
1
50
bio
stringlengths
1
185
userId
stringlengths
8
12
userName
stringlengths
2
30
usersFollowedByCount
float64
0
334k
usersFollowedCount
float64
0
85.9k
scrappedDate
float64
20.2M
20.2M
claps
stringclasses
163 values
reading_time
float64
2
31
link
stringclasses
230 values
authors
stringlengths
2
392
timestamp
stringlengths
19
32
tags
stringlengths
6
263
0
null
0
null
2018-04-28
2018-04-28 18:17:58
2018-07-01
2018-07-01 19:05:56
1
false
ru
2018-07-05
2018-07-05 09:04:30
33
1cb79437540f
1.939623
11
2
0
BigQuery Insights
5
Подборка Телеграм-каналов по анализу данных / Datascience / Machine Learning BigQuery Insights Анализ больших данных в Google BigQuery, примеры решений, шаблоны SQL-запросов и советы по работе с данными. Автор: Александр Осиюк, Product Analyst rabota.ua WebAnalytics Полезная информация по веб-аналитике, повышению конверсии и анализу данных в маркетинге. Автор: Дмитрий Осиюк, Marketing Analyst в LUN.ua Data Science и все такое Об анализе данных и машинном обучении — понятным языком. Интернет-аналитика Статьи про аналитику, заметки и отчеты, которые сопровождаются содержательными комментариями. Главная фишка канала — обилие инфографики и полезной статистики. Автор: Алексей Никушин, аналитик. DataRoot University Бесплатные курсы data science / engineering и актуальные новости из сферы анализа данных. Школа бородатого веб-аналитика Статьи, новинки, кейсы и практические советы по веб-аналитике. Автор: Андрей Осипов, Школа веб-аналитики Андрея Осипова. Веб-аналитика с OWOX BI Статьи, актуальные новости, инструменты, вебинары и лайфхаки в сфере аналитики. Автор: OWOX BI Hey Machine Learning Hey Machine Learning — команда специалистов по машинному обучению и искусственному интеллекту. Автор: Богдан Каминский Data Place Канал про данные, науку о данных и про обучение работе с данными. Автор: Ирина Радченко, доцент, канд. техн. наук, независимый эксперт Всемирного банка, любитель данных и Computer Science This is Data Статьи по аналитике и работе с данными. Автор: thisisdata.ru DeepLearning ru Материалы из области Глубокого обучения с уклоном на Машинное зрение. UniLecs Задачки по алгоритмизации и программированию, а также новости из мира Computer Science. Автор: Альберт Давлетов Juicy Data Your guide to the DataScience world! Boring Berlin Scientist Useful articles about Data Science, Machine Learning, Data Engineering and not only. Автор: Вячеслав Дубров. Data Science Boom The hottest Data Science, Machine Learning, Artificial Intelligence news feed and learning resources. All you need to know in one place! Loss function porn There are three things you can watch forever: fire, water and descending loss function. Groks Канал для диджитал гроккеров. Технологии, маркетинг, аналитика. Автор: Илья Пестов Data Science First Telegram Data Science channel Data Science Notes Материалы по Data Science. Spark in me — Internet, data science, math, deep learning, philosophy Internet, data science, math, deep learning, philosophy. AI / нейросети Канал об искусственном интеллекте и нейросетях Machine Learning World Все самое интересное в мире ИИ и Машинного обучения. Just links That’s just link aggregator of everything I consider interesting, especially ML and quantum physics. OpenDataUkraine Канал об открытых данных от команды Opendatabot. Machine Learning Research Исследования в машинном обучении. Data Science World Мир Data Science Молотилка Новости о машинном обучении, кейсы и анонсы. Автор: Глеб Ивашкевич Datapreneurs Data Science & Machine Learning Gentleminds Новости и статьи о deeplearning. BigDataScience Big Data and Data Science community in SPB devdigest // data science Data Science Digest. Жалкие низкочастотники Развлекателтный канал. Безумные картинки, странная математика, кибернетическая некрофилия, нёрдовский юмор. + Добавить канал в список.
Подборка Телеграм-каналов по анализу данных / Datascience / Machine Learning
74
подборка-телеграм-каналов-по-анализу-данных-data-science-machine-learning-1cb79437540f
2018-07-05
2018-07-05 09:04:30
https://medium.com/s/story/подборка-телеграм-каналов-по-анализу-данных-data-science-machine-learning-1cb79437540f
false
461
null
null
null
null
null
null
null
null
null
Telegram
telegram
Telegram
3,592
Aleksandr Osiyuk
Product Analyst at MacPaw.com. BigQuery Insights: https://t.me/BigQuery about analytics, data science and Web / App analysis
a0ce461f6757
aleksandrosiyuk
79
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-25
2018-04-25 15:08:20
2018-04-25
2018-04-25 15:12:34
1
false
en
2018-04-25
2018-04-25 15:12:34
1
1cb7f5a19796
3.109434
0
0
0
The market of consumer electronics industry is driven by the needs and demands of the consumer, it has “consumer” even in its name, it is a…
5
Get in the smart game, before it’s too late The market of consumer electronics industry is driven by the needs and demands of the consumer, it has “consumer” even in its name, it is a market, for the consumers. The revolution hatched by the Internet of Things is bigger than the one brought to the world by the Internet itself and it has successfully permeated the consumer electronics (CE) Industry. The smart market of this smart age demands smart appliances. Afterall market trends are like the mast of a yacht controlled by the winds of the demands, praxis, and progressions of the time. And this international trend could have easily been observed at the CES in January this year. When so many renowned companies are aiming for the same goal, there must be reasons to do so? New technologies bring changes in consumer behavior, which allows for new players to emerge. It is a series of chain reactions, one following the other while making way for another major modernization. Latest studies show, that especially in the US about 70% of homeowners already own a smart appliance. It is mainly the millenials that are interested in owning a smart appliance. The overwhelming majority though agrees, that the smart appliance contributes to the value of their home. We believe this new disruption is more promising than it seems as it is not only adding value for the consumers but also for to the business of the manufacturers.While this new age of appliances is about to make the lives of consumers unbelievably convenient and efficient, the manufacturers of these appliances are also set to unlock a new revenue stream. The manufacturer’s today, find themselves at the crossroad of a crucial decision, whether to embrace the current innovation or to go the old-school way. We believe it is time for the manufacturers to reap the benefits from the innovation of intelligence. How? The slew of smart appliances is a minefield of benefits and enormous potential of revenue generation for the manufacturers, and, we at Untrodden Labs are giving you ways to unlock the power of connected smart appliances for your business. Our IoT platform, ThingsGoSocial(TGS) is connecting all the appliances to intelligence with Thing Green. Thing Green is a modular data aggregation device to give your appliances sixth sense! We at Untrodden Labs have developed a device Thing Green and a platform Things Go Social which has the power to transform any appliance into a smart appliance for a consumer-driven innovation in a consumer led market. The changes on your original product design would be marginal and the costs at its lowest. Thing Green was designed to be easy to implement and furthermore, it allows to be upgraded to new technologies in the future. All the features of the smart appliance would be available straight away and are customizable. We are leveraging the technologies of IoT and AI to help the manufacturers with New revenue generation opportunity: This new age of smart appliances can enable manufacturers to optimize their production and revenue generating streams by giving them in-depth insight of consumer usage and behaviour. They can easily introduce additional services and features to increase consumer satisfaction fulfilling greater demands while introducing new revenue opportunities. Reduced time to market: With in-depth insights manufacturers can quickly deliver their products to the market knowing exactly what is required! 
Enhanced services: Smart appliances give 100% transparency with real-time monitoring which equals real-time diagnostics. A smart refrigerator, for instance, with real-time monitoring and fault detection can give clarity with the fault as to whether it is with the compressor, evaporator, thermostat or fan. The manufacturer can send the maintenance services to the doorstep of the consumer and can even schedule it beforehand with predictive maintenance. This would result in enhanced consumer services and increased consumer satisfaction, saving time and increasing efficiency. Energy efficient appliances: A new range of exclusive smart appliances which are energy efficient with real-time energy monitoring. It keeps a track of the energy consumption to give the consumers and the environment a gift of energy savings! And getting a panoramic view of the TGS enabled smart appliances show us the great influence they are capable of! These appliances use computer and telecommunication technology to become more efficient in regard to energy utilization and output.They can even tap into the smart grid to make the most efficient use of the electricity available. Smart appliances prove to be the most promising innovation for not only individuals, but, for the world at large! Get into the smart game, with Things Go Social!
Get in the smart game, before it’s too late
0
get-in-the-smart-game-before-its-too-late-1cb7f5a19796
2018-04-25
2018-04-25 15:12:35
https://medium.com/s/story/get-in-the-smart-game-before-its-too-late-1cb7f5a19796
false
771
null
null
null
null
null
null
null
null
null
Smart Appliances
smart-appliances
Smart Appliances
5
Things Go Social
Your interaction with machines will change when your machines will talk to you. Find out what happens when things go social!?
d6d7546b0773
ThingsGoSocial
18
155
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-01
2018-09-01 23:39:46
2018-09-01
2018-09-01 23:42:09
2
true
en
2018-09-01
2018-09-01 23:42:09
3
1cba36aa8729
1.888994
3
0
0
Basically what an auto-encoder does is it takes some kind of input data it could an image or a vector anything at all with a very high…
5
Variational Auto-encoders Basically what an auto-encoder does is it takes some kind of input data it could an image or a vector anything at all with a very high dimensionality. it is going pass these data in a neural network and it gonna try and compress the data into a smaller representation it does this with two principal components is what we call the encoder . Autoencoder takes input data it could be Image or vector with high Dimensionality. It gonna try and compress the data into a smaller representation it does this with two principal components is what we call encoder . From the Latent space with less dimension , the network will try to reconstruct the input by using again convolutional layer. Loss function is computed by comparing input to output with the pixel difference. This is simply a bunch of layers they could be fully connected layer or convolution layer that are going to take the input and they are going to compress it down to a smaller representation. which has less dimension that of input , from he bottle neck is going to try and reconstruct the input by using again fully connected or convolution layers. generated_loss - mean(square(generated_image - real_image)) latent_loss = KL_Divergence(latent_variable , unit_gaussian) loss = generation_loss +l atent_loss Loss function of training an auto-encoder is simply looking at the reconstructed version at the end of your decoder network . It going to simply compute the reconstruction loss with respect to input and by comparing pixel to pixel difference in the output we can create a loss function and we can start training our network to compress images . This Git is intended as a playground for experimenting with various neural network models and libraries. It contains implementations of mnist_mlp: A simple multilayer perceptron for MNIST implemented with keras mnist_cnn: A simple convolutional neural network for MNIST implemented with keras usps_cnn: A simple convolutional neural network for USPS dataset implemented with keras. variational_autoencoder: Two implementations (one in pure Theano, one in lasagne) of the model proposed . GitHub : VariationalAutoEncoder The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten. Placeholder text by Clean Blog. ArXiv : Auto-Encoding Variational Bayes. Originally published at bala.ac.
Variational Auto-encoders
52
variational-auto-encoders-1cba36aa8729
2018-09-02
2018-09-02 08:05:15
https://medium.com/s/story/variational-auto-encoders-1cba36aa8729
false
399
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Bala Vivek
null
763eefb8562e
bsivanantham
1
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-25
2018-09-25 14:09:49
2018-09-25
2018-09-25 14:14:14
2
false
en
2018-09-25
2018-09-25 15:18:50
5
1cbb2fc4acfe
3.424843
0
0
0
The Department of Homeland Security created the Hidden Signals Challenge with the purpose of identifying emerging biothreats to our nation…
4
Monitoring emergency department wait times to detect an emergent influenza pandemic The Department of Homeland Security created the Hidden Signals Challenge with the purpose of identifying emerging biothreats to our nation in real time. A group of six people from Vituity’s data team elected to participate and were eventually chosen as one five finalists. In this post we describe our prototype and highlight what we learned building it along the way. Authorities can’t respond effectively with a two-week reporting lag The Centers for Disease Control (CDC) continuously monitors for pandemic influenza across the United States with their FluView program. They have curated multiple data streams from hospitals across the country to create an accurate estimate of the number of influenza deaths. Each of these streams is collected with the intention of rapidly responding to an emerging influenza pandemic. Unfortunately, they share a common blind spot in their reporting lag to the CDC. This delays the CDC’s potential response to pandemic influenza by weeks! To reduce this reporting lag we need to use a statistical model to transform data we have into an estimate for the number of influenza deaths in a city. In order to be useful these data need to be reliable, timely, available, and scalable. Reliable — we need to be able to build an accurate statistical model with these data as an input, otherwise our predictions would be useless. Timely — we need to provide the CDC an estimate for the number of influenza deaths in the present, otherwise we are not improving their ability to respond in a crisis. Available — we need years of data to reference them with historical influenza deaths, otherwise our model won’t be a reliable indicator of influenza deaths. Scalable — we need data that represent a large geographic region, otherwise our model will be useless for a federal agency. Wait times are a leading indicator We built a prototype around the intuition that emergency department wait times would be a leading indicator for influenza deaths. We began in October 2017, and at this time influenza activity was mild in the majority of the US. By December the severity of this flu season was clear. Long wait times in emergency departments were at the forefront of national attention. After much thought we realized emergency department wait times fit these criteria. They were timely because many hospitals update wait times on their website at least hourly. They were available at Vituity because our data team has recorded emergency department wait times of Vituity providers for many years. They were scalable because wait times were easily accessible with a web scraping engine on top of hospital websites. What remained to be shown was that we could build a reliable estimate for the number of influenza deaths based on emergency department wait times. We defined reliable as any model that matched the performance of CDC’s seasonal baseline for influenza deaths. After several weeks of work we finally arrived at an ensemble method with a ~10% lower mean absolute error than the CDC baseline in our 2 year validation region. 10% sounds like peanuts, but the important thing is that our model allows us to reliably predict the number of influenza deaths during the two-week reporting lag. Below, we show the model’s forecasts in red and acceptable levels from the CDC baseline in grey. We chose a middle path There is a natural tradeoff between the reliability and geographic scale of data. 
We could have built a more reliable model of influenza deaths by leveraging clinical notes from Vituity providers. This would have come at the cost of scaling our predictions across the United States. We could have scaled over a larger portion of the United States by leveraging streams of data from social media or web searches. This would opened our model up to the same problems with causality that shut down Google Flu Trends. In the end we chose emergency department wait times as an intermediate solution to this tradeoff. These data are reliable because have good evidence from simulations that pandemic influenza would roughly double wait times in emergency departments. These data are scalable because they are readily available from hospital websites around the country. Our team stood up a prototype of this concept in the second half of the Hidden Signals competition. Even though we weren’t selected as a winner, we’re proud of the work we accomplished in a short time together. Stay tuned next week when we publish a companion post describing how this competition was an effective exercise to build empathy between our data scientists and engineers. MedAmerica Data Services, a Vituity data company, provides customized data tools and analytic solutions for health care providers and organizations. For more information on MedAmerica Data Services tools and solutions, please send an inquiry to Data@vituity.com.
Monitoring emergency department wait times to detect an emergent influenza pandemic
0
monitoring-emergency-department-wait-times-to-detect-an-emergent-influenza-pandemic-1cbb2fc4acfe
2018-09-25
2018-09-25 15:18:50
https://medium.com/s/story/monitoring-emergency-department-wait-times-to-detect-an-emergent-influenza-pandemic-1cbb2fc4acfe
false
806
null
null
null
null
null
null
null
null
null
Open Data
open-data
Open Data
5,306
Nate Sutton
null
cffc89afabd0
nasutton
0
1
20,181,104
null
null
null
null
null
null
0
import os from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By import time import sys import numpy as np import pandas as pd import regex as re chromedriver = "~/Downloads/chromedriver" # path to the chromedriver executable chromedriver = os.path.expanduser(chromedriver) print('chromedriver path: {}'.format(chromedriver)) sys.path.append(chromedriver) driver = webdriver.Chrome(chromedriver) zillow_pleasanton_url = "https://www.zillow.com/homes/recently_sold/Pleasanton-CA/house_type/47164_rid/globalrelevanceex_sort/37.739092,-121.750317,37.583086,-122.028408_rect/11_zm/" driver.get(zillow_pleasanton_url) soup = BeautifulSoup(driver.page_source, 'html.parser') listings = soup.find_all("a", class_="zsg-photo-card-overlay-link") listings[:5] [<a class="zsg-photo-card-overlay-link routable hdp-link routable mask hdp-link" href="/homedetails/6820-Singletree-Ct-Pleasanton-CA-94588/25068715_zpid/"></a>, <a class="zsg-photo-card-overlay-link routable hdp-link routable mask hdp-link" href="/homedetails/2804-Tangelo-Ct-Pleasanton-CA-94588/25085538_zpid/"></a>, <a class="zsg-photo-card-overlay-link routable hdp-link routable mask hdp-link" href="/homedetails/5650-Hansen-Dr-Pleasanton-CA-94566/25080514_zpid/"></a>, <a class="zsg-photo-card-overlay-link routable hdp-link routable mask hdp-link" href="/homedetails/592-Tawny-Dr-Pleasanton-CA-94566/25077658_zpid/"></a>, <a class="zsg-photo-card-overlay-link routable hdp-link routable mask hdp-link" href="/homedetails/764-Saint-John-Cir-Pleasanton-CA-94566/24931458_zpid/"></a>] listings[0]['href'] '/homedetails/2804-Tangelo-Ct-Pleasanton-CA-94588/25085538_zpid/' house_links = ['https://www.zillow.com'+row['href'] for row in listings] next_button = soup.find_all("a", class_="on") next_link = ['https://www.zillow.com'+row['href'] for row in next_button] def get_house_links(url, driver, pages=20): house_links=[] driver.get(url) for i in range(pages): soup = BeautifulSoup(driver.page_source, 'html.parser') listings = soup.find_all("a", class_="zsg-photo-card-overlay-link") page_data = ['https://www.zillow.com'+row['href'] for row in listings] house_links.append(page_data) time.sleep(np.random.lognormal(0,1)) next_button = soup.find_all("a", class_="on") next_button_link = ['https://www.zillow.com'+row['href'] for row in next_button] if i<19: driver.get(next_button_link[0]) return house_links def get_html_data(url, driver): driver.get(url) time.sleep(np.random.lognormal(0,1)) soup = BeautifulSoup(driver.page_source, 'html.parser') return soup def get_price(soup): try: for element in soup.find_all(class_='estimates'): price = element.find_all("span")[1].text price = price.replace(",", "").replace("+", "").replace("$", "").lower() return int(price) except: return np.nan def get_sale_date(soup): try: for element in soup.find_all(class_='estimates'): sale_date = element.find_all("span")[3].text sale_date = sale_date.strip() return sale_date except: return 'None' def get_lot_size(soup): try: lot_size_regex = re.compile('Lot:') obj = soup.find(text=lot_size_regex).find_next() return obj.text except: return 'None' def get_address(soup): try: obj = soup.find("header",class_="zsg-content-header addr").text.split(',') address = obj[0] return address except: return 'None' def get_city(soup): try: obj = soup.find("header",class_="zsg-content-header addr").text.split(',') city = obj[1] return city except: return 'None' def get_zip(soup): try: obj = 
soup.find("header",class_="zsg-content-header addr").text.split(',') list = obj[2].split() zip_code = list[1] return zip_code except: return 'None' def get_num_beds(soup): try: obj = soup.find_all("span",class_='addr_bbs') beds = obj[0].text.split()[0] return beds except: return 'None' def get_num_baths(soup): try: obj = soup.find_all("span",class_='addr_bbs') beds = obj[1].text.split()[0] return beds except: return 'None' def get_floor_size(soup): try: obj = soup.find_all("span",class_='addr_bbs') floor_size_string = obj[2].text.split()[0] floor_size = floor_size_string.replace(",","") return floor_size except: return 'None' def get_year_built(soup): try: objs = soup.find_all("span",class_='hdp-fact-value') built_in_regex = re.compile('Built in') for obj in objs: out = obj.find(text=built_in_regex) if out is not None: return out except: return 'None' def flatten_list(house_links): house_links_flat=[] for sublist in house_links: for item in sublist: house_links_flat.append(item) return house_links_flat def get_house_data(driver,house_links_flat): house_data = [] for link in house_links_flat: soup = get_html_data(link,driver) address = get_address(soup) city = get_city(soup) zip_code = get_zip(soup) beds = get_num_beds(soup) baths = get_num_baths(soup) floor_size = get_floor_size(soup) lot_size = get_lot_size(soup) year_built = get_year_built(soup) sale_date = get_sale_date(soup) price = get_price(soup) house_data.append([address,city,zip_code,beds,baths, floor_size,lot_size,year_built,sale_date,price]) return house_data house_links_10pages = get_house_links(zillow_pleasanton_url,driver,pages=10) house_links_flat = flatten_list(house_links_10pages) house_data_10pages = get_house_data(driver,house_links_flat) [[' 2804 Tangelo Ct', ' Pleasanton', '94588', '3', '3', '1614', '2,526 sqft', 'Built in 1998', '04/23/18', 1039000], [' 5650 Hansen Dr', ' Pleasanton', '94566', '4', '2', '1527', '6,699 sqft', 'Built in 1973', '04/20/18', 1200000], [' 592 Tawny Dr', ' Pleasanton', '94566', '3', '2', '1956', '0.28 acres', 'Built in 1977', '04/19/18', 1150000],] ... file_name = "%s_%s.csv" % (str(time.strftime("%Y-%m-%d")), str(time.strftime("%H%M%S"))) columns = ["address", "city", "zip", "bedrooms", "bathrooms", "floor_size", "lot_size", "year_built", "sale_date", "sale_price"] pd.DataFrame(house_data_10pages, columns = columns).to_csv( file_name, index = False, encoding = "UTF-8" )
27
null
2018-04-29
2018-04-29 23:21:22
2018-05-01
2018-05-01 00:27:45
2
false
en
2018-05-01
2018-05-01 00:27:45
5
1cbb94ba9492
6.651258
4
1
1
If you’ve stumbled upon this post, there’s a good chance you’ve tried or would like to try scraping house listing data from one of the…
5
Scraping House Listing Data using Selenium and Beautiful Soup If you’ve stumbled upon this post, there’s a good chance you’ve tried or would like to try scraping house listing data from one of the online real estate databases. Let me first start by saying how not to approach this problem. Definitely do not try to use Selenium to accomplish everything, that is to navigate the website and to do the scraping. Even though the documentation claims this is possible, I spent a weekend fighting with Selenium and all I can say is that I lost this battle. So, instead of doing it the hard way, please read on to learn a much better approach for interacting with and scraping data from one of these sites. Overview of a typical housing website The main homepage of Zillow provides a search tool to allow the user to narrow down the list of possible houses. In my case, I was interested in houses in Pleasanton, California. In addition, I was interested in single family houses that had been recently sold. This is easy to do by applying filters to the search. Once I did that, then I was left with the following page. Zillow.com search results (Image courtesy: https://www.zillow.com/) Although some data is available from this site such as sale price, number of bedrooms, number of bathrooms, and home square footage, this is not the full story. In order to get more information about the particular listing, I had to manually click on the listing and search for the fields I was interested in. The problem is, how do you automate this process? A really useful tool that can provide clues to automate the web scraping process is Inspect in Chrome or Inspect Element in Firefox. Here is the output after doing an Inspect on the 6820 Singletree Ct. listing. Inspect on a particular home (Image courtesy: https://www.zillow.com/) In html, the <a href> tag refers to a link. The link highlighted in blue is specific to that house. If we did a search for all the <a href> tags on this site, we should see 25 results, which correspond to the 25 houses for each page. In order to get more listings, we would need to navigate to the next page and search for the next list of <a href> tags. If you repeat this process enough times, you may end up with 500 URLs like I did. Data scraping workflow This is where we get into the meat and potatoes of the actual implementation. First, we start by importing all the necessary libraries into an ipython or an ipython notebook session. Be sure to refer to the Selenium Installation instructions before attempting to run any of this code. The next codeblock shows how a WebDriver object gets instantiated. You will need to specify the chromedriver location specific to your computer. Next we specify the URL of the main Zillow homepage after setting the necessary filters. Then we call chromedriver’s get method to open the website in a Chrome window. Now we can simply use Beautiful Soup to scrape the screen by invoking a very handy Selenium trick. The html source code can be called using driver.page_source and read into a Beautiful Soup object. This produces the following output: Each one of these entries has a class and href attribute. We are interested in the href attribute. We can call that as follows: These links are not complete, however. We must append these to the www.zillow.com prefix. This can be done simply through list comprehension. Now we have a list of links for our first page. The next step is to find the link to the next button, which will navigate us to page 2, page 3, and so on. 
We can use the Inspect feature of Chrome to locate this button, which has the following structure <a href=”/homes/recently_sold/house_type/47164_rid/0_singlestory/37.720288,-121.859322,37.601788,-121.918888_rect/12_zm/2_p/” class=”on” onclick=”SearchMain.changePage(2);return false;” id=”yui_3_18_1_1_1525048531062_27962">Next</a> It is then a simple matter to use Beautiful Soup’s find_all method and filter on tag “a” and class “on”. The process of reading in the soup object, creating a list of links, and proceeding to the next page to repeat this process can all be done in a single function, which I call get_house_links(). Once we have our list of links, we can loop through each of those links and use Selenium to open up the page. The following will get the html data from the URL specified and return it as a Beautiful Soup object. Now we can write various functions to extract the specific data we are looking for. For my application, I was interested in scraping the following fields: Address, City, Zip code Number of bedrooms and bathrooms Size of the house and lot Year built Sale price and sale date One very handy trick I learned is to use a Python try statement. This is because if the scraping for a specific field raises an exception, then it doesn’t break your code. You can just return a ‘None’ string or a NaN if it reaches the except clause. Here is my code to scrape the various fields previously discussed. Before we can proceed, we have to flatten the house_links list. This can be achieved as follows. Now we can put all the functionality of reading in the html data and scraping the relevant fields into a single function that I call get_house_data(). We can now use all this functionality we have built to complete our scraping. After running the above code, the first few entries of house_data_10pages will look something like this. Because we scraped house listings for 10 pages and there are 25 houses on each page, we will have 250 total entries. Now it is a relatively simple matter to save all of this data in a csv file for later analysis. My code for doing this is as follows. Wrap up Scraping data from an online real estate marketplace can be a frustrating experience, but hopefully this post has given you the knowledge to make the experience just a little less painful. The main takeaway I would like to convey is to limit the use of Selenium to just navigation and to really take full advantage of Beautiful Soup to extract the information you need. In my next blog post, I plan to go into detail regarding what you can actually do with this housing data and some of the insights that can be gained with it. So, please stay tuned for that. Thanks so much for reading!
Scraping House Listing Data using Selenium and Beautiful Soup
14
scraping-house-listing-data-using-selenium-and-beautiful-soup-1cbb94ba9492
2018-06-15
2018-06-15 19:34:38
https://medium.com/s/story/scraping-house-listing-data-using-selenium-and-beautiful-soup-1cbb94ba9492
false
1,661
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Ben Sturm
Data Scientist with significant experience gained during the 12-week Metis Data Science immersive. Holds a Ph.D. in Nuclear Engineering.
5e874fced4ad
ben.sturm
45
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-18
2017-10-18 17:59:08
2017-10-18
2017-10-18 18:00:41
0
false
en
2017-10-18
2017-10-18 18:00:41
2
1cbd2f91a97
1.826415
0
0
0
(This content originally appeared on TechWell Insights)
5
Artificial Intelligence Only Works alongside Skilled Testers (This content originally appeared on TechWell Insights) When discussing the future of artificial intelligence (AI), people often think of machines replacing what humans do in the workplace. Why pay an employee when you can program a robot to do all the same tasks — and likely at an even higher success rate? Testers might cringe hardest at the concept because more and more, it feels like the notion of “testing is dead” is traveling like wildfire. Testers are being forced to learn how to code and pick up other software skills in order to stay relevant, so why would they ever want to incorporate machines that could replace what they do? Fortunately for testing teams, AI isn’t going to come along and make testers obsolete. Quite the contrary — the future of AI in the realm of software testing is all about what it can do to help testers, not hurt or replace them. Jason Arbon, the CEO of Appdiff, spoke at this year’s STAREAST Conference about the future of software testing when it comes to AI. He laughed at the idea of robots taking over for humans, instead pointing to humans as the necessary factor when it comes to proper AI use. “The thing that testers don’t realize is that AI is perfectly suited to replace testing activities. The reason is, fundamentally, AI is just a way to train software or let software train itself,” Arbon explained. “If you have a bunch of input data and you have a bunch of output data, all you need is the input and the output. If you have those things, guess what you can do? Train a machine to do it. That’s literally the fundamental thing about machine learning. What do testers do? They come up with test inputs.” As a tester concerned over your job, that’s comforting to hear. Similar to automation tools, AI can make your life easier by taking over some of the more tedious tasks. And while it does take away some of your work, it still requires a strong understanding of your product and specific testing needs. “The only question really is, how much of that data do you need to train an application or train a bot to test an application?” Arbon continued. “Really, of all the professions that’s most in need of help of automation, it’s also the most ripe one for automation with AI. AI is not just this mysterious thing. It’s actually a really…it’s a tool. It’s perfectly suited I think for software testing and people are waking up to that idea generally.” If you look at AI as the next big tool that can take your testing over the top rather than an inevitable replacement, the future of the profession becomes much brighter. Testing is changing, but for the foreseeable future, real testers still need to be closely involved.
Artificial Intelligence Only Works alongside Skilled Testers
0
artificial-intelligence-only-works-alongside-skilled-testers-1cbd2f91a97
2017-10-18
2017-10-18 18:00:42
https://medium.com/s/story/artificial-intelligence-only-works-alongside-skilled-testers-1cbd2f91a97
false
484
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Josiah Renaudin
null
915b9dadf014
jrenaudin
209
190
20,181,104
null
null
null
null
null
null
0
null
0
b468e053644a
2018-03-16
2018-03-16 22:35:33
2018-03-16
2018-03-16 22:54:22
0
false
en
2018-03-16
2018-03-16 22:54:22
5
1cc07dfe8e69
0.569811
1
0
0
“How much effort are you willing to put in to correcting your past sins and atoning for those before you move up the sophistication ladder…
4
S01E11 AI — Paying for your sins and creating a productive data-driven culture “How much effort are you willing to put in to correcting your past sins and atoning for those before you move up the sophistication ladder. I would argue it would be a good practice to do especially if you’re going to start marrying your business data with any unstructured data that you’re going to be brining in.” — Joe Reis In this episode David talks to Joe Reis, self-described reluctant Data Scientist and “full-stack data nerd”, about how your team can avoid the “trough of disillusionment” that will hit many companies that try to use Ai within their organization. We discuss the real wins that can materialize when you build a sound and culture around data and the potential for transforming your company with intelligent automation. Episode Links Joe Reis: blog, twitter, LinkedIn Meatball Sunday
S01E11 AI — Paying for your sins and creating a productive data-driven culture
1
s01e11-ai-paying-for-your-sins-and-creating-a-productive-data-driven-culture-1cc07dfe8e69
2018-03-17
2018-03-17 03:18:41
https://medium.com/s/story/s01e11-ai-paying-for-your-sins-and-creating-a-productive-data-driven-culture-1cc07dfe8e69
false
151
a double entendre where point can be interpreted both as the moment in time of or the meaning to struggle — our focus is on the nexus of user experience and artificial intelligence
null
null
null
the point of struggle
gonzo@ziff.io
the-point-of-struggle
UX,AI,CUSTOMER SUCCESS,PRODUCT DESIGN,DESIGN THINKING
pointofstruggle
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
David "Gonzo" Gonzalez
Data Scientist, Storyteller, LEGO Coach
573cab224fc
datagonzo
240
4
20,181,104
null
null
null
null
null
null
0
null
0
8cba014e0002
2017-11-10
2017-11-10 17:28:58
2017-11-10
2017-11-10 17:37:38
5
false
en
2017-11-14
2017-11-14 11:28:04
1
1cc0ba843a88
5.840881
1
0
0
Written by: Paul Bell
5
Eggplant AI from Testplant: First impressions Written by: Paul Bell Originally published at www.nccgroup.trust. Disruption. A word used to describe Testplant’s latest offering throughout its official launch at Digital Automation Intelligence Roadshow, an event hosted by Testplant in the impressive surroundings of Altitude London (Figure 1). Altitude London The gaps in modern testing In the opening presentation by Dr. John Bates, Testplant CEO, he talked about a desire to create testing disruption, challenging the way in which test automation is done and moving towards a technology that learns and adapts its strategy over time. It was stated that almost 1 in 4 people who download an app only use it once and only 4% of apps downloaded from Google Play Store and Apple’s App Store are used after a month. In the view of Testplant there are five key gaps that are contributing to these issues (Figure 2): UX gap — Current testing is too focused on compliance rather than looking at how we can delight users. Productivity gap — An increase in application complexity alongside shrinking timescales, particularly for agile and DevOps, is making it difficult for test teams to keep up. Automation gap — Also due to the increase in system complexity, existing test tool solutions are unable to test everything that we need them too. Visibility gap — The user perception of the end product does not fulfil expectations. Confidence and Predictability gap — Quality throughout software development is often reported using metrics such as number of tests passed and failed or open defects. This level of reporting can make it challenging to interpret how shippable a product is, how it would be received by a user or whether the product is improving over time. Testing needs to change — from testplant.com From a personal perspective, there isn’t anything here that I would fundamentally disagree with. However, what I was keen to understand further at this early stage is how Eggplant’s Digital Automation Intelligence (DAI) suite (the new collective name for the Eggplant products) would address all of these gaps. From compliance to profit Next up at the roadshow we had Antony Edwards, Testplant CTO, who talked us through the five principles of DAI which will help to transform testing from “a compliance function to a profit centre”. The five principles are: Test through the eyes of the user Test all aspects of user experience Expand automation beyond test execution with AI, machine learning and analytics Use predictive analytics to report quality status in terms of UX Take a coherent approach to monitoring and testing While I can see that there are links between some of the five gaps and the five principles, I’m keen to hear more from Testplant on this connection. In addition, it was also not clear from the roadshow whether all five principles could be implemented through products within the DAI suite. The main concern lies with principles four and five. I am personally not convinced that the products at present can effectively deal with these principles, or at least not in a way which is immediately obvious. However there are some recognisable solutions already in place, as principle one is covered by Eggplant Functional with scripts which are recorded from a UI perspective, using image capture and OCR to drive scripts rather than the underlying HTML. Similarly, principle two is possible due to the testing tools being easier to use, meaning that non-technical resources across the project can get involved with test automation. 
Eggplant AI There is now also confidence for principle three to be resolved, as that is what Eggplant AI, the new product within up the DAI suite, is all about. It enables the user (whether that be a tester, business analyst or product owner) to model an application on a Visio-like canvas, with the focus being on the states that an application can be in and the actions that can be done against those states. By mapping out the various states and actions within the application it is effectively creating a view of all the paths through the application on a single page. I should add that none of this is automated; this is a manual activity done by someone who understands the flows through the application, with the automation being managed by Eggplant Functional. Essentially, the tester will record the flows that have been modelled using Eggplant Functional and then proceed to assign snippets of automated code (generated in SenseTalk language) to each of the actions within the model. It is at this point where Eggplant AI comes to the forefront. Rather than a standard Eggplant Functional suite which would follow a linear flow for test script execution, AI introduces variation based upon the model that has been derived for the application. Variation is executed in a number of ways it seems, including: Weightings — the user can assign a weighting to specific actions which will increase the frequency of that action being exercised by the suite. Previous runs — AI will look at previous test runs, and use this data to determine the coverage for subsequent runs. Defect History — AI will be able to link defects to certain areas of the application and will focus in on these depending on how the application has changed recently. There are some bold claims in the above points and it will be interesting to see how this comes through in the product over time. Eggplant AI model example By providing this modelling layer and the underlying AI algorithms, Testplant are claiming to be automating aspects of the test preparation and scripting phase by automatically generating test cases continuously during execution. I can see where they are coming from here and it will be interesting to see how effective the modelling functionality is and how it could potentially be used to describe test scope to a group of stakeholders. Testplant also claim that by introducing this variation and intelligence then they are effectively automating exploratory testing. Again, I see where they are coming from here and I agree that in part, they are. However, in my view, there will always be a need for manual exploratory testing and I’m sure that Testplant would tend to agree with me on that. Overall I think that Testplant need to be clearer on how exactly they see DAI addressing the gaps that they have highlighted and how their products align with the principles stated. Saying that, I think they deserve real credit for trying to disrupt the market and developing a tool which does something different. I’m also excited to see how our team at NCC Group can implement Eggplant AI to help our clients reap the benefits that it offers. Eggplant AI trial at NCC Group Since attending the roadshow, NCC Group has been offered participation in the Early Access Programme for Eggplant AI, providing us with the opportunity to use the tool internally. Lauren Garner, one of our trainee Automation Test Analysts who has spent a week using Eggplant AI with Eggplant Functional, was enthused by her initial experience with the product. 
“Eggplant AI is an incredibly user friendly piece of software and easy to get to grips with. You can start to build models and link to scripts written using Eggplant Functional very quickly, as soon as you have AI, Functional, AI Agent and the relevant gateways installed. “Knowledge of scripting in Eggplant Functional is a must, as it’s here you develop the automated scripts that integrate with AI. Within only five days I was able to learn from scratch how to script in Functional, build models in AI and link the two to produce a basic model against the TripAdvisor Android app. “I did of course encounter a few technical challenges along the way but found the Testplant support team very responsive. If an issue couldn’t be resolved over email, they would set up a WebEx to get a closer look at the issue and walk me through the resolution steps required.” Published date: 10 November 2017
Eggplant AI from Testplant: First impressions
5
eggplant-ai-from-testplant-first-impressions-1cc0ba843a88
2017-11-14
2017-11-14 11:28:06
https://medium.com/s/story/eggplant-ai-from-testplant-first-impressions-1cc0ba843a88
false
1,327
A cyber security publication from NCC Group
null
nccgroup
null
Keylogged
medium.com@nccgroup.trust
keylogged
CYBERSECURITY,INFORMATION SECURITY,SECURITY,PRIVACY,TECH
nccgroupplc
Software Development
software-development
Software Development
50,258
NCC Group
NCC Group is a global expert in cyber security and risk mitigation.
d1c3af6ab73d
NCCGroup
505
0
20,181,104
null
null
null
null
null
null
0
from sklearn.datasets import fetch_20newsgroups news = fetch_20newsgroups(subset='all') print("Number of articles: " + str(len(news.data))) print("Number of diffrent categories: " + str(len(news.target_names))) news.target_names Number of articles: 18846 Number of diffrent categories: 20 ['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc'] print("\n".join(news.data[1121].split("\n")[:])) From: et@teal.csn.org (Eric H. Taylor) Subject: Re: Gravity waves, was: Predicting gravity wave quantization & Cosmic Noise Summary: Dong .... Dong .... Do I hear the death-knell of relativity? Keywords: space, curvature, nothing, tesla Nntp-Posting-Host: teal.csn.org Organization: 4-L Laboratories Distribution: World Expires: Wed, 28 Apr 1993 06:00:00 GMT Lines: 30 In article <C4KvJF.4qo@well.sf.ca.us> metares@well.sf.ca.us (Tom Van Flandern) writes: crb7q@kelvin.seas.Virginia.EDU (Cameron Randale Bass) writes: >> Bruce.Scott@launchpad.unc.edu (Bruce Scott) writes: "Existence" is undefined unless it is synonymous with "observable" in physics. Dong .... Dong .... Dong .... Do I hear the death-knell of string theory? I agree. You can add "dark matter" and quarks and a lot of other unobservable, purely theoretical constructs in physics to that list, including the omni-present "black holes." Will Bruce argue that their existence can be inferred from theory alone? Then what about my original criticism, when I said "Curvature can only exist relative to something non-curved"? Bruce replied: "'Existence' is undefined unless it is synonymous with 'observable' in physics. We cannot observe more than the four dimensions we know about." At the moment I don't see a way to defend that statement and the existence of these unobservable phenomena simultaneously. -|Tom|- "I hold that space cannot be curved, for the simple reason that it can have no properties." "Of properties we can only speak when dealing with matter filling the space. To say that in the presence of large bodies space becomes curved, is equivalent to stating that something can act upon nothing. I, for one, refuse to subscribe to such a view." - Nikola Tesla ---- ET "Tesla was 100 years ahead of his time. Perhaps now his time comes." 
---- from sklearn.model_selection import train_test_split import time def train(classifier, X, y): start = time.time() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=11) classifier.fit(X_train, y_train) end = time.time() print("Accuracy: " + str(classifier.score(X_test, y_test)) + ", Time duration: " + str(end - start)) return classifier from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import TfidfVectorizer trial1 = Pipeline([ ('vectorizer', TfidfVectorizer()), ('classifier', MultinomialNB())]) train(trial1, news.data, news.target) Accuracy: 0.853846153846, Time duration: 5.3866918087 from nltk.corpus import stopwords trial2 = Pipeline([ ('vectorizer', TfidfVectorizer( stop_words=stopwords.words('english'))), ('classifier', MultinomialNB())]) train(trial2, news.data, news.target) Accuracy: 0.880636604775, Time duration: 5.22666096687 for alpha in [5, 0.5, 0.05, 0.005, 0.0005]: trial3 = Pipeline([('vectorizer', TfidfVectorizer( stop_words=stopwords.words('english'))), ('classifier', MultinomialNB(alpha=alpha))]) train(trial3, news.data, news.target) Accuracy: 0.890981432361, Time duration: 5.7222969532 Accuracy: 0.912201591512, Time duration: 5.62339401245 Accuracy: 0.9175066313, Time duration: 5.51641702652 Accuracy: 0.916976127321, Time duration: 5.60582304001 trial4 = Pipeline([ ('vectorizer', TfidfVectorizer( stop_words=stopwords.words('english'), min_df=5)), ('classifier', MultinomialNB(alpha=0.005)) ]) train(trial4, news.data, news.target) Accuracy: 0.910079575597, Time duration: 5.85248589516 import string from nltk.stem import PorterStemmer from nltk import word_tokenize def stemming_tokenizer(text): stemmer = PorterStemmer() return [stemmer.stem(w) for w in word_tokenize(text)] trial5 = Pipeline([ ('vectorizer', TfidfVectorizer( tokenizer=stemming_tokenizer, stop_words=stopwords.words('english') + list(string.punctuation))), ('classifier', MultinomialNB(alpha=0.005))]) train(trial5, news.data, news.target) Accuracy: 0.922811671088, Time duration: 171.798969984 from sklearn.linear_model import SGDClassifier from sklearn.svm import LinearSVC for classifier in [SGDClassifier(), LinearSVC()]: trial6 = Pipeline([('vectorizer', TfidfVectorizer( stop_words=stopwords.words('english') + list(string.punctuation))), ('classifier', classifier)]) train(trial6, news.data, news.target) Accuracy: 0.927055702918, Time duration: 6.2128059864 Accuracy: 0.932095490716, Time duration: 8.53486895561 from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt start = time.time() classifier = Pipeline([('vectorizer', TfidfVectorizer( stop_words=stopwords.words('english') + list(string.punctuation))), ('classifier', LinearSVC(C=10))]) X_train, X_test, y_train, y_test = train_test_split(news.data, news.target, test_size=0.2, random_state=11) classifier.fit(X_train, y_train) end = time.time() print("Accuracy: " + str(classifier.score(X_test, y_test)) + ", Time duration: " + str(end - start)) y_pred = classifier.predict(X_test) conf_mat = confusion_matrix(y_test, y_pred) # Plot confusion_matrix fig, ax = plt.subplots(figsize=(15, 10)) sns.heatmap(conf_mat, annot=True, cmap = "Set3", fmt ="d", xticklabels=labels, yticklabels=labels) plt.ylabel('Actual') plt.xlabel('Predicted') plt.show() Accuracy: 0.935278514589, Time duration: 16.3250808716 from sklearn import metrics print(metrics.classification_report(y_test, y_pred, target_names=labels)) precision recall f1-score support 
alt.atheism 0.96 0.95 0.96 172 comp.graphics 0.86 0.91 0.88 184 comp.os.ms-windows.misc 0.92 0.85 0.89 204 comp.sys.ibm.pc.hardware 0.83 0.87 0.85 195 comp.sys.mac.hardware 0.93 0.92 0.93 195 comp.windows.x 0.93 0.87 0.90 204 misc.forsale 0.85 0.88 0.86 164 rec.autos 0.93 0.94 0.93 180 rec.motorcycles 0.97 0.97 0.97 173 rec.sport.baseball 0.97 0.97 0.97 217 rec.sport.hockey 0.97 0.99 0.98 178 sci.crypt 0.95 0.98 0.96 197 sci.electronics 0.93 0.93 0.93 199 sci.med 0.94 0.99 0.97 183 sci.space 0.96 0.99 0.97 207 soc.religion.christian 0.94 0.96 0.95 211 talk.politics.guns 0.97 0.96 0.96 208 talk.politics.mideast 0.99 0.99 0.99 200 talk.politics.misc 0.96 0.93 0.94 175 talk.religion.misc 0.94 0.83 0.88 124 avg / total 0.94 0.94 0.94 3770
26
8b9ea9081cc9
2018-09-18
2018-09-18 09:13:16
2018-09-17
2018-09-17 00:00:00
4
false
en
2018-09-18
2018-09-18 13:00:09
7
1cc6ecf3a46f
8.571698
1
0
0
Let’s talk about text classification — one of the most important and typical tasks in data science.
4
Text classification — a simple way to organize your data Nowadays, a daily increase of online available data leads to a growing need for that data to be organized and regularized. Textual data is all around us starting from web pages, e-books, media articles to emails or user comments. There are a lot of cases where automatic text classification would accelerate processing time (for example, detection of spam pages, personal email sorting, tagging products or document filtering). We can say that all organizations (e.g. academia, marketing or government) that deal with a lot of unstructured text, could handle that data much easier if it was standardized by categories/tags. Illustration of machine learning classifier, image credit: Moosend. Text classification or text categorization is an activity of labelling natural language texts with relevant predefined categories. The idea is to automatically organize text in different classes. It can drastically simplify and speed-up your search through the documents or texts! Imagine, you own a large e-commerce website which shows relevant products to a user based on his/her search and preferences. Every time you want to add new products you have to read their descriptions and manually assign a category to them. This procedure can cost you too much time and money, especially if you have a high fluctuation of the available products. But, if you develop an automatic text classifier, you can easily add many new products and tag them automatically without actually reading the descriptions! You can also create a classifier to link search texts to the item categories for a better user experience. HOW TO BUILD A TEXT CLASSIFIER Explore the data — general statistics First, you need to have an annotated dataset to train and test your classifier. For this propose, I will use a “20 Newsgroup” corpus available in scikit-learn; and I will get the data with fetch_20newsgroups(): Output: As you can see, there are 18 846 newsgroup documents, distributed almost evenly across 20 different newsgroups. Our goal is to create a classifier that will classify each document based on its content. Let’s see the content of one document: Output: Each document is a text written in English in a form of an email with a lot of punctuations. You should always do some pre-processing but here we’ll just concentrate on the model. Define training function While training and building a model keep in mind that the first model is never the best one, so the best practice is the “trial and error” method. To make that process simpler, you should create a function for training and in each attempt save results and accuracies. You can define it like this: Function tran_test_split() randomly separates data into training and testing dataset, while function fit() trains the classifier with selected training data (it defines model which parameters match the model input with an output) and score() gives us the accuracy for testing data. Function time() is here just to give us some information about the training duration. Like in every machine learning problem, you have to extract features in order to train a model. These algorithms can read-in just numbers and you have to find a way to convert the text into the numerical feature vectors. If you think about it, a text is just a series of ordered words that usually carry some meaning. If we take each unique word from all the available texts, we’ll create our own vocabulary. And every word in a vocabulary can be one feature. 
For each text, the feature vector will be an array where the feature values are simply the counts of each unique word in that specific text. If some word is not in the text, its feature value is zero. Therefore, the word order in a text is not important, just the number of repetitions. This method is called the “Bag of words” and it’s quite common and simple to use. You can use different approaches for the word scores/values, but the most popular one is TF-IDF (Term Frequency times Inverse Document Frequency), which calculates the frequency of a word in a document and reduces the weight of common words like “the” or “is”. In one of the next articles, I’ll explain the “Bag of words” method more thoroughly and compare it to other text feature extraction methods. But here I’ll just use scikit-learn’s built-in function TfidfVectorizer() and highlight the most important words from each text. Build a text classifier Now let’s build a classifier! We’ll start with the most common one: the multinomial Naive Bayes classifier, which is suitable for discrete classification. Scikit-learn has a great class called Pipeline, which allows us to create a pipeline for a classifier, i.e. you can just chain the functions that you want to apply to your input data. Here, I’m using TfidfVectorizer() as the vectorizer and MultinomialNB() as the classifier: Output: Parameter scaling We achieved great accuracy for the first attempt! But let’s try to improve it. We can remove stop words, i.e. tell TF-IDF to ignore the most common words (see the explanation in our previous article), with the stop_words parameter. A list of stop words can be found in nltk: Output: Accuracy is better and even the training is faster, but the alpha parameter of the Naive Bayes classifier is still the default one, so let’s change its value and iterate through a range of values: Output: You can see that the best accuracy of 91.178% is achieved for alpha 0.005. Let’s ignore the words that appear fewer than 5 times across all documents using the min_df parameter: Output: The resulting accuracy is a bit lower, so this was a bad idea. We can try to stem the data with nltk (i.e. reduce inflected words to their word root, read more about it in my previous article) using the tokenizer parameter of TfidfVectorizer, which usually helps, and we can add punctuation to the list of stop words: Output: Accuracy is a bit better, but the training lasts 34 times longer. This is a great example of the time cost of stemming. Sometimes accuracy can cost you computation speed, and you should find a nice balance between them. Don’t stem if the accuracy doesn’t improve significantly. Try different classifiers Now, let’s try some other common text classifiers, like Support Vector Classification with stochastic gradient descent and linear SVC. They are initially slower, but maybe they can get us better accuracy without the use of a stemmer: Output: Great! An accuracy of 93.2% for linear SVC is awesome! Acceptable accuracy depends on the specific problem: type/length of the analysed text, number of categories and the differences between them, etc. Here, an accuracy of 93% is good because we have 20 categories and some of them are quite similar, like comp.sys.ibm.pc.hardware and comp.sys.mac.hardware in comparison with alt.atheism. Model evaluation You can continue to play with parameters and models, but I will stop here and check the characteristics of the best model.
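A sketch of the pipeline and tuning steps just described, reusing the news data and train_model helper from above; the alpha grid is illustrative, except that 0.005 and the stop-word handling mirror the values quoted in the text:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from nltk.corpus import stopwords  # requires nltk.download('stopwords')

# Baseline: TF-IDF features feeding a multinomial Naive Bayes classifier
baseline = Pipeline([
    ('vectorizer', TfidfVectorizer()),
    ('classifier', MultinomialNB()),
])
train_model(baseline, news.data, news.target)

# Variants: drop English stop words and scan the Naive Bayes alpha parameter
stop_words = stopwords.words('english')
for alpha in (1.0, 0.1, 0.01, 0.005, 0.001):
    model = Pipeline([
        ('vectorizer', TfidfVectorizer(stop_words=stop_words)),
        ('classifier', MultinomialNB(alpha=alpha)),
    ])
    train_model(model, news.data, news.target)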
I will use confusion_matrix() from scikit-learn to compare the real and predicted categories: Output: The confusion matrix is a great way to see which categories the model is mixing up. For example, there are 17 articles from the category comp.os.ms-windows.misc that are wrongly classified as comp.sys.ibm.pc.hardware. Also, let’s check the accuracy of each category separately with classification_report(): Output: The category misc.forsale has the lowest accuracy, but the overall accuracy is great. As you can see, text classification is pretty simple to implement with existing tools and it can bring a lot of value to various IT projects. It is especially simple to implement if you have an initial dataset which is already annotated. When you are creating a classifier, you should play with different methods and models. Try various vectorizers, classifiers, stemmers and model parameters — the options are unlimited, so try as many as you can! Sometimes better accuracy can cost you too much time, so check if it’s really necessary and try to find the best balance between computational speed and accuracy. Originally published at krakensystems.co
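A matching evaluation sketch, assuming the imports, data and stop_words list from the previous snippets; LinearSVC stands in for the best model named in the text:

from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix, classification_report

X_train, X_test, y_train, y_test = train_test_split(
    news.data, news.target, test_size=0.25, random_state=42)

best_model = Pipeline([
    ('vectorizer', TfidfVectorizer(stop_words=stop_words)),
    ('classifier', LinearSVC()),
])
best_model.fit(X_train, y_train)
predicted = best_model.predict(X_test)

# Rows are true categories, columns are predicted ones
print(confusion_matrix(y_test, predicted))
# Precision, recall and F1 for each newsgroup separately
print(classification_report(y_test, predicted, target_names=news.target_names))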
Text classification — a simple way to organize your data
4
text-classification-a-simple-way-to-organize-your-data-1cc6ecf3a46f
2018-09-20
2018-09-20 10:09:53
https://medium.com/s/story/text-classification-a-simple-way-to-organize-your-data-1cc6ecf3a46f
false
2,086
Kraken is a tech company specialized in distributed systems, industrial IoT, data-science and system architecture.
null
krakensystems
null
KrakenSystems
info@krakensystems.co
krakensystems-blog
DATA SCIENCE,DEVOPS,DISTRIBUTED SYSTEMS,MACHINE LEARNING,IOT
null
Machine Learning
machine-learning
Machine Learning
51,320
Tena Belinić
Data Scientist
ab1c815d1493
tena_60519
8
7
20,181,104
null
null
null
null
null
null
0
null
0
274e4d54f5fd
2018-04-05
2018-04-05 07:25:52
2018-04-12
2018-04-12 02:37:35
3
false
en
2018-04-12
2018-04-12 02:37:35
6
1cc732261940
5.376415
3
0
0
Neuroscience has a fair grasp on how neurons use electrical and chemical signals to communicate between each other. The connections between…
5
The pursuit of brain simulation — some challenges and considerations Neuroscience has a fair grasp on how neurons use electrical and chemical signals to communicate with each other. The connections between neurons and the rate of their impulses are generally what create our experience — our sensations, emotions, thoughts and even our desires — and are what allow us to physically interact with and interpret the world. Neurons are generally considered the building blocks of our conscious experience. So with this knowledge, and humanity’s appetite for development, brain simulation is another step up the mountain of technological improvement, one which requires efforts and considerations from many fields such as computer science, bioinformatics, neuroscience, data analysis and engineering. The idea of creating a computer-based simulation of the brain also requires heavy consideration from ethical and legal bodies to safely and cautiously regulate the creation and implementation of a human-like ‘computer brain’. I’m sure many have heard the term brain simulation before, but what is it? And what has been, and is being, done by organisations to achieve a working simulation of human experience? I will be exploring these questions below. Brain simulation is the idea of imitating, and hopefully emulating, the natural brain’s connections and processes using a computer-based model. So far, scientists have successfully mapped and digitally simulated one small animal’s entire neural network. This organism is a microscopic worm called Caenorhabditis elegans (C. elegans). This primitive species has a very basic neural network consisting of 302 neurons, which is infinitesimal compared to a human brain of around 80 billion neurons. Nevertheless, all connections in this tiny worm’s neural network have been mapped and digitally encoded into what is called a connectome: every one of its neurons, with every one of the connections between them, stored electronically. Using this information, a computer simulation has been developed allowing users to perform voluntary movements with a virtual worm, precisely mimicking the actions of C. elegans and producing all the movements you would normally see this animal doing. Some would say this is the first step in our quest for brain simulation. C. elegans, a microscopic worm, with its entire neural network of 302 neurons virtually represented Currently, the Blue Brain Project (BBP) in Geneva, Switzerland has made large strides in the pursuit of reconstructing the mammalian brain in a 3D digital environment. Their mission is exploring the possibilities of digital organisms and attempting to blend biology with the 1s and 0s of computers. By developing mathematical models and algorithms that imitate the electrical activity of neurons, the BBP has simulated neuronal firing virtually. In addition to simulating the constituents of the mammalian brain, the BBP has also created virtual environments allowing the simulated brain to perform in silico (meaning performed in computer-based software). One notable demonstration was brushing the whiskers of a simulated mouse, which created a visual representation of the excited neurons that carry sensory information from the whiskers. This allowed the scientists from the BBP to show digitally exactly where the sensation from the whiskers is integrated in the brain. It’s not hard to imagine how this software would improve basically all experiments on the anatomy and physiology of mammalian brains, and even our own brain in the future.
The ability to simulate brushing the whiskers of a mouse shows the accuracy of the reconstructions and the future capability of the BBP to simulate other mammalian brains such as our own. Future steps by the Blue Brain Project involve collaborations with the Human Brain Project to hopefully simulate the entire human brain, which is considered the true goal. Image of the BBP virtual mouse displaying where the sensation of whiskers was recorded in the brain So why do we have simulations of primitive worms and small mammals but not human brains? The primary barrier between scientists and a simulation of the human brain is the physical limitation of current technology, mainly computers. The supercomputer used in the Blue Brain Project is monstrous and at the peak of current performance, but when you’re looking to map and simulate 80 billion neurons and 100 trillion synapses between them, you’re going to need processing power and storage — and probably more than you might guess. If a simulation of billions of virtual neurons is going to work efficiently, machine power will need to accommodate the possibility of billions of neurons firing simultaneously, as this is how real brains operate. Current computation, memory and processing power just can’t handle the demand of almost incomprehensible numbers of connections sending and receiving information at once. Although transistors on computer chips are decreasing in size and increasing in number, the improvements in computer hardware are starting to reach physical limits, as transistors can’t be built much smaller. With the physical limitations of computers and the enormity of the human brain’s connections, a challenging problem needs addressing. It is estimated that storing the human connectome using the same methods as the BBP would require 1 zettabyte of storage¹ (over 1 trillion gigabytes). How do we surmount the issue of computing power and storage? There’s no definitive answer yet, but quantum computers might pose a solution. Some giants in the industry such as IBM and Google are fiercely working towards building a user-friendly quantum computer². Quantum computers utilise qubits, which, unlike ordinary bits, can be superposed; put simply, these qubits can be in two states simultaneously. This differs from standard computers’ binary digit system of 1 and 0, which can only be in one state at a time. A quantum computer with just 50 qubits could outperform the most efficient and expensive binary computers on the market³, leading us closer to the processing power and memory performance we need to create a human-like brain simulation. With both improvements and limitations aside, brain simulation generates a few controversial ethical questions. One interesting question asks: if a digital brain has consciousness, is it a computer or a brain? And if this digital brain has attention, memory and perhaps even thoughts and emotions, will this simulation be entitled to human rights and be accountable to law? I have already discussed that our experiences are the aggregate of synaptic connections between billions of neurons. So, if we can simulate these connections, we could theoretically simulate experience. Could we simulate pain or joy? I don’t know, but it is an interesting question which may be answered sometime in the future. In anticipation, laws and regulations should already be in place to protect all concerned. Using brain simulations with robotic hardware seems like a very possible outcome.
This possibility leads to a concern about ‘dual use’: technology and methods intended initially for medical or other civilian purposes but inevitably extended to military use. The use of robots with ‘human brains’ for military purposes seems like science fiction, but nonetheless the Human Brain Project and other organisations refrain from accepting funding from all military bodies⁴. The concept of brain simulation breeds many interesting debates on the possible applications of the technology (such as AI) and the appropriate regulation, which I’m sure fans of the Terminator franchise are avidly aware of. Check out the Humm Tech website to keep up to date with us, as we explore cognitive improvements via non-invasive brain stimulation. 1. Wikipedia. Brain simulation [Internet]. Wikipedia, The Free Encyclopedia; 4 April 2018. Available from: https://en.wikipedia.org/wiki/Brain_simulation 2. Will Knight. IBM Raises the Bar with a 50-Qubit Quantum Computer [Internet]. MIT Technology Review [05/04/2018]. Available from: https://www.technologyreview.com/s/609451/ibm-raises-the-bar-with-a-50-qubit-quantum-computer/ 3. Emerging Technology from the arXiv. Google Reveals Blueprint for Quantum Supremacy [Internet]. MIT Technology Review [05/04/2018]. Available from: https://www.technologyreview.com/s/609035/google-reveals-blueprint-for-quantum-supremacy/ 4. Rose N. The human brain project: social and ethical challenges. Neuron. 2014 Jun 18;82(6):1212–5
The pursuit of brain simulation — some challenges and considerations
130
the-pursuit-of-brain-simulation-some-challenges-and-considerations-1cc732261940
2018-06-15
2018-06-15 03:30:07
https://medium.com/s/story/the-pursuit-of-brain-simulation-some-challenges-and-considerations-1cc732261940
false
1,279
think better.
null
hummtech
null
HUMMtech
thinkbetter@humm.tech
hummtech
NEUROSCIENCE,TECHNOLOGY,ESPORT
hummtech
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Joseph Carr-Moore
Neuroscience and Genetics student at University of Western Australia. Researcher @ HUMM Tech
a9bd20f1210e
joseph_90954
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-04
2018-06-04 00:30:28
2018-06-04
2018-06-04 00:33:47
3
false
en
2018-06-04
2018-06-04 00:33:47
8
1cc7f91d8e9b
3.557547
1
0
0
The human heart averages 4000 beats in every hour, with each pulse, pumping and palpitation echoing the significance of life; the very core…
5
Smart health is wealth — Through the eyes of data science and analytics The human heart averages 4,000 beats every hour, with each pulse, pump and palpitation echoing the significance of life; the very core of human existence. To this effect, individuals, corporate bodies and government agencies worldwide are constantly investing in improving healthcare as a whole. Even further, policy makers are responding positively to the technological trends of the 21st century, employing digital electronic health systems to influence health care. The concurrence of data science and analytics with healthcare complements the tireless efforts of the conventional ward doctor. Its applications are innumerable. Through the use of wearable technology, we can get excited about the potential of small sensors packed into sporty devices to record and track our fitness and health. There are also rapidly increasing volumes of data that can be collected about the user’s health, which raise opportunities for more informed decision making. Data scientists and machine learning experts are making it their objective to leverage this information innovatively, to revolutionize health care. Smart tech is remotely monitoring and preventing fatalities. Approximately two terabytes of bio-data, including heart rate, sleep patterns, blood glucose and even brain activity, can be generated daily, a volume that only automated systems can record and analyse. Nonetheless, leading brands, such as IBM Watson Health and Qualcomm, as well as hundreds of thousands of health providers and mHealth apps, are innovating solutions to improve patient health. Machine learning algorithms are also being used to detect and control diseases. The Apple Watch has proven its ability to detect sleep apnea, a condition in which the patient ceases to breathe while sleeping and which can lead to death, and hypertension (high blood pressure), which risks heart disease and strokes, with more than eighty percent (80%) accuracy. Intel, in collaboration with the Michael J. Fox Foundation, is championing research into the understanding of Parkinson’s disease. By recording more than three hundred observations per second per patient from wearable technology, such as slowness of movements and tremors, their data scientists are combining the data with molecular data from cellular profiles created by researchers. Because it may not be convenient to generate enough big data from the patients living with Parkinson’s alone, a combination of real-time data from patients and historical data on Parkinson’s feeds the machine learning algorithms, to build a better understanding of the disease. The selling point of electronic medical records and machine learning algorithms is their ability to give a second opinion to medical doctors while being self-improving, since they constantly record huge quantities of information about the patient, generating enough training data to enable intelligent decision making. In Ghana, as in every other country, we share a strong desire to improve our healthcare systems. In 2017, policy makers launched a nationwide electronic medical records system. This patient management system, involving a centralised data center as part of the efforts to modernise the health care systems in Ghana, is definitely moving Ghana a sure step in the right direction.
Furthermore, with the induction of the Nation Builders Corps (NABCO), a government initiative to reduce graduate unemployment, one of the modules, seeking to create opportunities for Ghana to go completely digital, is paving the way for structures involving artificial intelligence to be implemented. Considering the global trends, visionary leaders have risen to the occasion to develop the necessary human resources. Blossom Academy, through its talent accelerator, is training the Ghanaian youth to harness their skills in data science and machine learning, to add value to the existing structures. Gilgal Ansah, a biomedical engineering student and pre-fellow with Blossom Academy, is inspired to revolutionize his approach to recording health information. As he participates in voluntary community health programs, Gilgal leverages electronic health records to store huge quantities of bio-data from patients, with the hope of generating enough training data to further understand and detect high blood pressure, which mostly doesn’t show symptoms. The future is bright with streaks of hope that our health care systems will be greatly improved. I am very confident that, although Ghana may seem to be slowly catching up, our individuals, corporate bodies, government agencies and policy makers will proactively support the journey into smart health. Let’s reduce the costs of late detection of diseases. Let’s reduce the costs of misdiagnoses. Let’s save lives! So the big question is: what would you do about it? Composed by Blossom Academy fellow, Paa-Kwesi. — Blossom Academy is a talent accelerator on a movement to build the next generation of African data scientists. We give university graduates in Ghana the skills needed to launch meaningful careers in Data Science. #Comeblossom
Smart health is wealth — Through the eyes of data science and analytics
50
smart-health-is-wealth-through-the-eyes-of-data-science-and-analytics-1cc7f91d8e9b
2018-06-05
2018-06-05 01:40:16
https://medium.com/s/story/smart-health-is-wealth-through-the-eyes-of-data-science-and-analytics-1cc7f91d8e9b
false
797
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Blossom Academy
Connecting top talent from West Africa to the global economy. #comeblossom
de16258fa851
comeblossom.gh
14
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-28
2018-06-28 23:34:17
2018-06-29
2018-06-29 10:12:19
1
false
en
2018-06-29
2018-06-29 10:12:19
6
1cc8be04d080
1.649057
3
0
0
A NeuroChain bot is a software application consisting of a mixture of algorithms that runs automated tasks. It’s a reference to the…
5
Neurochain: what is a bot? What is a bot in NeuroChain and what does it do? A NeuroChain bot is a software application consisting of a mixture of algorithms that runs automated tasks. The name is a reference to the neuron connections in the brain, but in this case it means ‘a chain of bots’. It is an artificial intelligence acting independently on a node. The key aspect of bots is that they work in association, through machine learning algorithms, on top of a protocol and a network. Bots act as validators of transactions and communicate with each other to guarantee security and transparency and to address decentralized-infrastructure issues (e.g., the double-spending problem, the Byzantine generals problem, etc.). Consensus Each bot can have different tasks, but decision and consensus are the core of the bot, ensuring the persistence, accuracy and liveness of the distributed protocol. The election process will make it possible for the bots to reach consensus in a fair and safe way. The end goal is to create an ecosystem of collective artificial intelligence. Bots working independently act as members of a decentralized decision-making ecosystem. Thus, the system benefits from the advantages of a collective and collaborative artificial intelligence. The intelligence-sharing process is ensured by the proof-of-workflow protocol. The dynamic system put in place thanks to the Decision Protocol quickly and automatically sidelines malicious bots to prevent manipulation. The communication layer NeuroChain bots have flexible and scalable/evolving communication thanks to their communication layer. Bots run different algorithms that allow a certain level of autonomy, in order to execute elaborate operations such as smart applications or value creation (like crypto-value, transparency or certification) in the network. Bots can be supported by different platforms (Web, Mobile or Hybrid) by adapting the communication protocol. Bots are also able to interact with other existing blockchains like Bitcoin and Ethereum. Bot Compensation In NeuroChain, value creation is done through the validation process related to the election process. During this election, the system measures the level of transparency and the level of relevant certified information and methods injected into the network to pay the bots. Intelligent applications also generate payments depending on the complexity of the algorithms or workflows. Follow Red Bullish on Twitter for more updates and news Follow Neurochain on Telegram, Facebook, Twitter and YouTube Photo Credit: Decentral.news
Neurochain: what is a bot?
4
neurochain-what-is-a-bot-1cc8be04d080
2018-06-29
2018-06-29 11:19:29
https://medium.com/s/story/neurochain-what-is-a-bot-1cc8be04d080
false
384
null
null
null
null
null
null
null
null
null
Bots
bots
Bots
14,158
Red Bullish
null
55d05059fcf5
redbullish123
0
1
20,181,104
null
null
null
null
null
null
0
null
0
ea70602c94bb
2018-09-17
2018-09-17 07:21:07
2018-09-17
2018-09-17 07:25:26
1
false
en
2018-09-17
2018-09-17 07:36:53
18
1cca7578a2d5
6.086792
25
0
1
What to consider when launching your AI project
5
7 Fundamental Tips To Build Successful AI The future of artificial intelligence (AI) is overflowing with exciting possibilities where data science, knowledgeable teams and advanced tools work together to push the ever-expanding boundaries of technology. But the road from data to a successful AI project is no straight line. Here’s a fun fact for you: Gartner estimates that 85% of big data projects fail. Tech giants like Microsoft learnt this the hard way when their innocent AI chatbot went on a rampage on Twitter. Like most things in life, AI is tough to get right but easy to mess up. This doesn’t mean you’re doomed to end up like Microsoft and pull the plug on your beloved AI after months of hard work. To give you a hand, here are seven fundamental tips to consider when building AI that can positively revolutionise your organisation. 1. Clearly define the purpose of the AI project If you can’t summarise the end goal of your AI in one sentence, then it’s not clear enough. Figuring out your target customers and defining what makes your AI unique are key steps that will drive your approach and increase your chances of success. Here are a few pointers if you’re just starting out with your own AI. Understand your customers Here’s where you ask: who benefits from your AI solution? What problems can you solve for them? Consider mapping out the key use cases alongside a group of actual representatives of your target audience for accurate insight into their needs. If there’s no real need, there will be no adoption and no ROI. Measure your capabilities This is where you really flesh out what your solution involves and what you need to make it happen (data, knowledge, tech, etc.). Doing this will give you a clear picture of whether the requirements align with your capabilities and technology. Evaluate your competition The end goal of your solution is to be a better alternative to whatever is already out there. This means your AI project has to be a step up from existing solutions. So, what makes your project special? Define the required quality How good does your AI need to be to be considered useful? This is the time to define the level of accuracy your customers need and the steps you need to take to achieve it. You should also think about the payoff matrix for quality outcomes so you can tune your optimisations around that matrix. 2. Follow a proven methodology AI isn’t something you want to improvise as you go. Following a tried and tested methodology will ensure your data science project is reliable and successful. The most common methodologies are SEMMA and CRISP-DM. We’ll save you the Google search and give you a brief overview of both. SEMMA SEMMA stands for Sample, Explore, Modify, Model, and Assess. It’s an iterative process for data mining using thorough modeling techniques. While it’s considered the standard methodology, it focuses on procedures rather than results and casually leaves out all business aspects. This is where CRISP-DM comes in. CRISP-DM CRISP-DM stands for ‘CRoss-Industry Standard Process for Data Mining’. Unlike SEMMA, this methodology includes a ‘Business Understanding’ phase that focuses on the objectives from a business perspective in relation to data mining definitions. Feel free to dig further into the phases of the CRISP-DM methodology if you suspect this is the one for you. 3. Find data from a trusted source There’s no way around it: to create AI, machine learning algorithms need data.
Before moving any further, you have to define how much data you need and how you intend to get it. Data scientists have a few options when building training sets to feed into their algorithms: they can buy datasets, find open-sourced datasets, use artificial data, or engage with smart outsourcing solutions where dedicated annotators deliver accurate data to train and develop your AI models. The last option essentially acts as an extension of your in-house resources. There is, of course, the option of annotating training data yourself, but not everyone has time for that. 4. Choose your algorithms for machine learning Now for the big question: which machine learning algorithm should you use? According to Microsoft’s guide on choosing algorithms, it depends on your project. Here are a few considerations to help you narrow it down: Accuracy of results Training time Use of linearity Number of parameters Number of features There is no shortage of algorithms at your disposal, but of course you’ll want to choose the one that’s best suited for your project. As you may already know, the majority of practical machine learning uses supervised learning. Some popular examples of supervised machine learning algorithms include linear regression for regression problems and support vector machines for classification problems. However, if you won’t have data on desired outcomes, then you’ll want to use unsupervised learning. Popular examples of these algorithms include k-means for clustering problems and the apriori algorithm for association rule learning problems (a toy sketch contrasting the two approaches appears at the end of this article). If you need a refresher, here’s a post you can dig into for a detailed view of the key differences between supervised learning and unsupervised learning. As for computer vision, artificial neural networks like the Convolutional Neural Network (CNN) are better suited to the tasks of image labelling, annotation and segmentation, whereas the Recurrent Neural Network (RNN) is best for language analysis. Lastly, the Multi-Layer Perceptron (MLP) is ideal for speech recognition and machine translation. (Just to give you a hint.) Check this resource for a fine breakdown of machine learning algorithms. 5. Design and build your infrastructure Building an AI infrastructure is a strategic decision where you have to consider things like data storage, computing resources, budget, and time. A useful tutorial series by Intel explains the infrastructures you can choose from: In-house hardware (on-prem) Building and maintaining your own computing infrastructure in-house requires a lot more upfront effort, but it also gives you more freedom. With on-prem infrastructure, you can choose which GPU to use. There are pre-built DL servers like Nvidia’s DGX Systems, or you can have a custom workstation built by companies like Lambda Labs and AMAX. Another option is to build a DL workstation from scratch. Cloud A cloud provider (like AWS, GCP or Microsoft Azure) makes the most sense when you’re just starting out. You can get your first training model on a high-performing GPU for less upfront investment than on-prem, with the added advantages of up-to-date technology and hands-off maintenance. You can also use ML-specific providers (like Paperspace) which tailor their infrastructure offerings to better support deep learning workflows. Like everything else on this list, there are questions you need to answer before selecting an infrastructure that will properly support your AI projects. For example, how big is your data set?
Do you have a team that can dedicate their time to maintaining on-prem systems? Are you training a model from scratch or using a pretrained model? Answer these questions now so you don’t have to deal with switching infrastructures later. 6. Test and validate your model AI needs to be trained before it can be useful. This means running your AI application through a training data set so it can create a model, then running it again on an entirely new set to test the accuracy of the results. It sounds simple in theory, but there are dangers such as data bias, which results in bad functionality (and bad press). You may have seen the media storm surrounding biased facial recognition software or the racial failure of the beauty pageant bot, Beauty.AI. You’ll get a strong hint that something is amiss with your model if it miraculously spits out 100% accuracy. Overfitting is a classic challenge of AI, where your application memorises the training data and performs poorly on real-world data. On the flip side, if you get dismal results that don’t model the training data or generalise to new data, then you’re looking at a case of underfitting. It never ends. In all honesty, training may take up more time than the actual development, but it’s possibly the most important step in your AI strategy. A trained and tested model is a useful model. For more details on these challenges (in a cappella format), check this video by Udacity: 7. Constantly monitor and retrain your model Once you have a model that’s finally trained and validated, it can be tempting to lean back and call it a day. But the reality your model monitors is dynamic, which means your model should be too. As the former Director of Marketing at CognitiveClouds, Amit Ashwini, writes in a blog post: “Business conditions change, customers change, products change, changes in your environment can affect your application. Its performance will gradually degrade over time, even though you might not notice. If you’re planning an AI project, you need to account for retraining.” While this is not exactly a comprehensive guide to the best AI strategy for your project, it’s a solid start to ensure your AI is on the right path. If you have any questions on how you can acquire accurate data to reliably train and develop your models, drop us a note at data@ingedata.net Originally published at www.ingedata.net.
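To make the supervised/unsupervised contrast from tip 4 concrete, here is a toy sketch using scikit-learn; all data values are invented for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# Supervised: we have labels (desired outcomes) to learn from
y = np.array([2.1, 3.9, 6.2, 8.1])
reg = LinearRegression().fit(X, y)
print(reg.predict([[5.0]]))  # extrapolates from the labelled examples

# Unsupervised: no labels, the algorithm finds structure on its own
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignment for each point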
7 Fundamental Tips To Build Successful AI
244
7-fundamental-tips-to-build-successful-ai-1cca7578a2d5
2018-09-18
2018-09-18 20:20:14
https://medium.com/s/story/7-fundamental-tips-to-build-successful-ai-1cca7578a2d5
false
1,560
Ingedata provides human annotation services to computer vision and artificial intelligence companies. You can find us at http://ingedata.net. Let’s talk data and training sets.
null
ingedata.global
null
Ingedata
contact@ingedata.net
ingedata
MACHINE LEARNING,COMPUTER VISION,ARTIFICIAL INTELLIGENCE,DATA SCIENCE,DATASET
ingedata
Machine Learning
machine-learning
Machine Learning
51,320
Ingedata
Ingedata provides human annotation services to computer vision and artificial intelligence companies. Find us at http://ingedata.net. Let’s talk data.
fe136a4a6369
ingedata
15
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-29
2018-05-29 15:30:56
2018-05-29
2018-05-29 21:42:49
3
false
en
2018-06-10
2018-06-10 17:00:27
0
1cca8a4fac22
3.553774
9
0
0
DeepCloud is building an AI-driven decentralized cloud computing platform for running decentralized applications — IoT and Web 3.0 DApps.
5
Investor Reviews ICO — DeepCloud AI DeepCloud is building an AI-driven decentralized cloud computing platform for running decentralized applications — IoT and Web 3.0 DApps. Product Overview While the cloud industry is already very mature, with large players like AWS, Google Cloud and Azure, their cloud infrastructure is geared towards centralized applications where key resources run in large centralized data centers. These solutions are not suitable for building decentralized peer-to-peer and IoT applications, which require computation resources running close to the edge devices to process the growing volume of data generated at the edge, or a cost-effective solution for the payment flows of micro-transactions executed automatically by p2p IoT devices as they interact with each other and automate common tasks. DeepCloud AI will democratize the playing field for cloud infrastructure and open up the market for resource providers and application developers to run and deploy their decentralized applications in a cost-effective manner. Like Golem, SONM and iExec, DeepCloud AI is building a decentralized cloud platform, betting on a blockchain-based cloud solution as the future for decentralized applications. The core differentiator is the use of AI for matching resources between the network resource providers and application developers. Competitive Analysis Market comparison: Golem: $435M market cap. SONM: $74M market cap. The Technology The main idea of DeepCloud AI is to build a self-organizing distributed network through AI. DeepCloud AI focuses on building a distributed cloud infrastructure based on the blockchain, instead of building a specific service such as unused storage, an AI computing platform, or database-as-a-service, the approaches taken by Filecoin, DeepBrain Chain and Bluzelle respectively. Many challenges affect the performance of a decentralized cloud infrastructure, from syncing between nodes, the matchmaking algorithm, scheduling criteria and fair incentives for network contributors, to load balancing in the network and cost-effectiveness in terms of market supply. DeepCloud AI faces these challenges at different architecture levels. The network is built using membership protocols and proof of service to meet the needs of user dApps, services and node configurations. Membership protocols were chosen because they allow nodes to discover one another, disseminate information quickly, and maintain a consistent view across nodes within an application cluster. In addition to membership protocols, the AI controller is based on several aspects (i.e. sharding, sidechains, the matchmaking algorithm, task scheduling, load balancing, etc.). The main aspects of the controller are sharding and sidechains, as the network is self-organized and based on AI: the more information one can get from each node, the better the network will be. Team and Advisors ⭐️ Max Rye, CEO — 15 years of experience in the cloud computing industry. Experienced in enterprise-level cloud infrastructure. AI researcher. ⭐️ Geeta Chauhan, CTO — CTO of Fortune 500 companies. 25 years of experience in leading diverse global teams. Silicon Valley Software Group. Experienced in the implementation of AI on blockchain. ⭐️ Joseph Vargas, Principal AI/Cloud Architect — 13 years of experience in building corporate cloud and AI systems. Experienced software architect in enterprise-level applications and solutions. ⭐️ Vishwas Manral, Cloud Security Advisor — 0chain and Veryx advisor.
10 years of experience in cloud architecture. CEO of NanoSec and Vice Chairman of CSA. The inventor of IPSec and ADVPN with over 30 RFCs. ⭐️ Dr. Ahmed Sayed, AI/Cloud Computing Advisor — PhD in Computer Science in cloud computing and AI. 10 years of experience in research and development of cloud computing architecture. Roadmap Token Metrics Hard cap: $15M Total supply: 200M Tokens for sale: 80M (40%) Price per token: $0.25 Verdict Just like Golem, SONM and iExec, DeepCloud AI shares the same vision of the future of decentralized cloud computing, but adds the AI aspect of the technology, which facilitates proper resource matching between the providers and the application developers. I believe the project’s growth potential is huge considering the technology it offers; the team is strong and the token metrics are good. The advisors and partnerships could be stronger, but they claim to have over 10 partnerships in the pipeline and also a strategic advisor joining soon. Rating: 89.25% **Disclaimer This article and the information contained herein is not intended to be a source of investment, financial, technical, tax, or legal advice. This article cannot substitute for professional advice and independent factual verification. The ideas and strategies in this article should never be used without first assessing your own personal financial situation, and without consulting a financial professional. All content on this website is for informational purposes only, and is provided “as is”, with no guarantee of completeness, accuracy, timeliness or of the results obtained from the use of this website. The disclaimers apply to all visitors, users, and others who wish to access or use this article.
Investor Reviews ICO — DeepCloud AI
161
ico-review-deepcloud-ai-1cca8a4fac22
2018-06-10
2018-06-10 17:00:28
https://medium.com/s/story/ico-review-deepcloud-ai-1cca8a4fac22
false
796
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Nebo
Crypto | Blockchain | ICO Advisor | Founding Partner @CryptoSeed_P | Analyst @LuxeEquity
66b99bedba92
nebo_1313
67
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-24
2018-03-24 00:28:29
2018-03-24
2018-03-24 00:36:56
7
false
en
2018-03-24
2018-03-24 00:36:56
6
1ccac2fb2ff5
4.853774
7
0
0
25 years ago, AI’s application was first used to develop the Deep Blue system. And for the first time, human intelligence was officially…
1
EFFECT.AI — THE PERFECT COMBINATION OF TWO ADVANCED TECHNOLOGIES: AI & BLOCKCHAIN 25 years ago, AI was first applied to develop the Deep Blue system, and for the first time human intelligence was officially defeated by a machine; the defeated party was the world chess master at the time, Garry Kasparov. But this did not prove that AI could solve problems in real life, and it was quickly forgotten. However, the possibilities of computing and computer programming have kept growing, and the amount of data accumulated over the past 25 years has become the foundation for developing today’s AI technology. A series of products have applied AI and opened up a future of the “intelligence of things”: from the Google search engine, Amazon’s Alexa voice assistant and smartphones’ facial recognition technology to home appliances, and even the field of traffic with self-driving cars and unmanned aerial vehicles. Although there is great potential, AI still has a long way to go to address more complex actions such as hand-eye coordination, artwork and artisanal activities, or creative actions. We are only in the early stages of this technology, and the new potential of AI is expected to bring many more changes. Because the potential of artificial intelligence is so attractive, this field has attracted the attention of many investors and large corporations (Google, Microsoft, Amazon) over the years. The market is constantly growing and is expected to reach about $60 billion by 2025, according to Statista. However, there is a problem: only a handful of large corporations are developing AI, behind closed doors, leaving the future’s most defining technology in their control. Effect.AI is the answer, a potential solution to the remaining problems of the AI market. WHAT IS EFFECT.AI? Effect.ai is designed to change the artificial intelligence market by creating a marketplace based on blockchain technology and powered by NEO. Here, you can request or perform tasks for the development of AI, or exchange everything related to AI. The Effect Network consists of three phases: Effect Mechanical Turk: It allows anyone in the world to perform a wide range of tasks and receive fair payment. It will give AI developers and businesses access to a large workforce of human intelligence to train AI algorithms. When a worker completes a task, they are paid with a network NEP-5 token. Effect Smart Market: The Effect Smart Market is a distributed AI market where people can provide and purchase AI services or machine learning algorithms. An application owner, for example, can register his or her AI product and specify a price or usage fee for the consumer. AI products are available to anyone on the exchange through smart contracts. Effect Power: The last phase provides a decentralized, distributed computational platform that will run popular deep learning frameworks. The Effect decentralized compute engine is based on popular deep-learning frameworks like Caffe, MXNet and TensorFlow. WHO IS BEHIND THE PROJECT? Effect.ai is developed by a team of 18 core members with extensive experience in the areas of artificial intelligence, programming and blockchain, including Chris Dawe (CEO), Jesse Eisses, Laurens Verspeek (Director of Development) and Nick Vogel (Design and Interaction). Dawe is an expert in business systems management and operations and has 18 years of experience as a project manager and entrepreneur.
There is also an extremely well-known and experienced mentor team in the field of artificial intelligence: Charlie Shrem: Legendary entrepreneur Charlie Shrem is widely recognized as an authority when it comes to blockchain tech. By leveraging his experience and connections in the industry, Charlie will help launch The Effect Network as a leading platform for artificial intelligence. Tony Tran: CTO and co-founder of the innovative housing rentals platform The Bee Token, Tony will use his background in artificial intelligence and blockchain technology to add tremendous value to Effect.AI and its plans for an open and decentralized AI platform. Sally Eaves: Sally combines a depth of experience as a CTO, Practising Professor of Blockchain, Founder and Global Strategic Advisor, specialising in the application of disruptive technologies for both business and societal benefit. Sally has joined Effect.AI as an advisor after recognizing its potential for positive change. Steven Deurloo: Steven Deurloo is an experienced financial advisor who has been with Effect.AI since the early stages of development. With degrees in both law and economics, Steven brings a wealth of experience to the table when it comes to finance, contracts and legal structures. EVALUATION OF THE EFFECT.AI PROJECT Advantages: The idea of the project is quite good. The project is expected to establish a relationship between humanity and artificial intelligence. Effect.ai owns a team of creative, experienced developers and consultants in the fields of artificial intelligence, blockchain technology and business. This is one of the factors that determine the success of the project. With Effect.ai, people can exchange and purchase services and algorithms easily and quickly. The network of Effect.AI is available to anyone anywhere, because there is no centralized party that restricts access. Effect.AI is a decentralized platform with no centralized control over transactions and data, so your data and money are always safe. Disadvantage: Effect.AI’s biggest obstacle is the fierce competition from large corporations such as Google, Microsoft, Amazon, etc. ICO INFORMATION The EFX token will be a utility token that operates fully on smart contracts deployed on the NEO blockchain. Start Date: March 24th, 11am CET End Date: 18 days after the start date Soft Cap: €4,280,000 Hard Cap: €14,820,000 Bonus: 10% for the first 2% of tokens Minimum Contribution: €50 Maximum Contribution: €25,000 Accepted Currencies: NEO, GAS Maximum Supply: 650,000,000 tokens Distribution details Funds Distribution Details CONCLUSION With a strong, enthusiastic and highly experienced team, Effect.ai promises to be a bright spot in the currently hot field of artificial intelligence. The combination of unique features, the transparency of blockchain technology and the future development of artificial intelligence will carry Effect.ai far in its development, surpassing its biggest competitors on the way to brilliant success. Effect.ai is a very promising and worthwhile investment. Join now! USEFUL LINKS Website: https://effect.ai/ Whitepaper: https://effect.ai/download/effect_lightpaper.pdf Bitcointalk ANN: https://bitcointalk.org/index.php?topic=2737469.0 Telegram: https://t.me/effectai Twitter: https://twitter.com/effectaix Medium: https://medium.com/@effectai Author: BCT username: kld_hp
EFFECT.AI — THE PERFECT COMBINATION OF TWO ADVANCED TECHNOLOGY: AI & BLOCKCHAIN
100
effect-ai-the-perfect-combination-of-two-advanced-technology-ai-blockchain-1ccac2fb2ff5
2018-03-31
2018-03-31 03:24:17
https://medium.com/s/story/effect-ai-the-perfect-combination-of-two-advanced-technology-ai-blockchain-1ccac2fb2ff5
false
1,008
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Silver Pegasus
#bitcointalk #bitcoin #ethereum #Blockchain
9d52084fcebd
silverpegasus
476
486
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-30
2018-06-30 12:25:31
2018-06-30
2018-06-30 12:30:33
2
false
en
2018-06-30
2018-06-30 12:30:33
1
1ccc4ae11650
2.587107
0
0
0
Make certain that your data are usually distributed. To begin with, the data ought to be numerical. Variable data can inform you if a…
5
Numerical Data Interpretation Test Secrets Make certain that your data are normally distributed. To begin with, the data ought to be numerical. Variable data can inform you whether a particular girder that passes the test might still be dangerously close to giving way. Interpreting data is an important critical thinking skill that assists you to comprehend text books, graphs and tables. When you have all of the salient data, you’re prepared to ask yourself the important question. There are other means to classify data. Meanwhile, the data of their input may be used for grading. Your variables are in the very first row. The dependent variable is measured or observed. The dependent variables are the factors you’re measuring and would like to examine. All testing isn’t helpful, however. Actually, no testing is better than bad testing. Numerical testing needs to be included in the event the position demands budget or financial decision making. There is but one sure means to do well on a numerical reasoning test, which is to prepare. Internal assessment is an ongoing, periodic and internal practice. Put simply, the health assessment is restricted to finding specific maladies. Statistical analysis is applied to the responses as soon as they are collected, to place the individuals who took the survey in the various categories. Analysis of variance is one approach. The IKM test generates questions based on your level and will give you harder questions if you’ve got a high level. The test helps to save time because technical assessment eliminates applicants who don’t perform well. Therefore, the assessment tests need to be revamped from time to time. Moreover, a psychological test referred to as the Strong Interest Inventory may also be helpful to spot talent. There isn’t any way to really study for the exam, but exposing yourself to the sorts of questions asked is a fantastic way to improve your readiness. Locating a very good practice exam or exams makes a significant impact on your preparedness, since there are a number of different kinds of numerical reasoning tests. The career test can help you in selecting a suitable career by taking into consideration your interests, strengths, and weaknesses. Picking a career test can be hard, since there are a variety of tests out there. When you have explained your results, start your discussion section with a reminder of the major subject of your report. The end result is the proportion of values in your set which are over the value that you converted into your Z-score. While test results can provide a lot of insight into a developer’s abilities, there are also some downsides. Test results are always very sensitive, and it is inevitable that your employees will share their results with each other. A report ought to be objective and accurate. It must be well researched and contain factual information. If you aren’t aware, you ought to know that the earnings report of a company is the single most important factor that determines its stock price. The use of a testing system may be an effective element of the recruitment approach. So there is a demand for continuous internal assessment. Make sure you’re involved with work which helps others or you might become self-centered and desensitized. In the event your work faces any latency, you will get your money back, as easy as that. You must be assured that we have validated their potential to produce excellent work. Website: http://talentlens.in/
Numerical Data Interpretation Test Secrets
0
numerical-data-interpretation-test-secrets-1ccc4ae11650
2018-06-30
2018-06-30 12:30:33
https://medium.com/s/story/numerical-data-interpretation-test-secrets-1ccc4ae11650
false
584
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Vikek Kalra
null
18b7dcf44dc4
vikekkalra
3
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-31
2018-05-31 21:54:01
2018-05-31
2018-05-31 21:54:34
8
true
en
2018-05-31
2018-05-31 21:54:34
0
1ccca42b169
3.948428
0
0
0
Self driving cars need eyes to see and perceive the world. Unlike humans, they only have the ability to see using built-in cameras but not…
4
Perception in Self Driving Cars — Part 1 Self driving cars need eyes to see and perceive the world. Unlike humans, they only have the ability to see using built-in cameras, without necessarily understanding what is going on around them. Hence, we need to equip them with these skills. For example, if a self driving car is on the road, it needs to know which lane it is on, how much the lane is curving and how well it is positioned in the center of the road. In this post, I will explain how to identify lane lines in an image and then apply the same algorithm to a video stream. 1) Calibrate and undistort the camera images First, we undistort the camera images, as the transformation is not perfect when a camera looks at 3d objects in the world and transforms them into 2d images. In math, this transformation from 3d object points P, with coordinates X, Y and Z, to 2d image points p, with just x and y, is done by a transformation matrix called the camera matrix, C. The figure below uses pinholes like Figure 1, but real cameras use lenses to focus multiple light rays at a time, allowing them to form images quickly. Figure 1: Transformation from 3d to 2d positions When using lenses, new distortions occur. There are radial distortions, where lines or objects appear more or less curved than they actually are. There is also tangential distortion: if the camera’s lens is not aligned perfectly parallel to the imaging plane, where the camera film or sensor is, an image looks tilted. Figure 2: Radial Distortions Figure 3: Tangential Distortion We obtain the distortion coefficients using OpenCV’s calibrateCamera function and calibrate the images against these values. But how do we obtain these values? One way is to use pictures of known shapes like a chessboard. Why a chessboard? A chessboard is great for calibration because its regular high-contrast pattern makes it easy to detect automatically. Figure 4: Example of a chessboard and its distorted version We use the distortion values to create a transform that maps these distorted points to undistorted points, using Python and OpenCV. Then, we perform a perspective transform as shown in Figure 5. We use the perspective transform to measure the curvature of the lines, and to do that, we need to transform to a top-down view. Figure 5: Perspective transform performed on an image 2) Gradients of Images Next, we want to detect the lane lines. Gradients are used to detect steep intensity changes: if we take the gradient in the x direction, we prioritize edges which are closer to vertical; alternatively, we prioritize near-horizontal edges if we take the gradient in the y direction. An example is shown in Figure 6. Figure 6: Gradients of the top image in the x and y directions We can also use different interpretations of the Sobel values, such as their magnitude and direction. The magnitude of the gradient is the square root of the sum of the squares of the individual x and y gradients. The direction of the gradient is just the inverse tangent (arctangent) of the y gradient divided by the x gradient. Then, we can mix and match combinations of the Sobel features, try different thresholds and pick the most effective values that highlight the lines. 3) Color Spaces We should not restrict ourselves to just grayscale images but try different color spaces to get better image representations. For example, by using the HLS color space, picking the S channel and obtaining the gradients in the x direction, we get the image shown in Figure 7.
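A minimal OpenCV sketch of the calibration, undistortion, perspective-transform and x-gradient steps above; the 9x6 corner count, file paths, source/destination points and threshold values are my assumptions, not values from the original project:

import glob
import cv2
import numpy as np

# 3d points of an assumed 9x6 chessboard; z = 0 because the board is flat
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob('camera_cal/*.jpg'):  # assumed folder of calibration shots
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Camera matrix and distortion coefficients from the chessboard views
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)

img = cv2.imread('test_image.jpg')  # assumed road image
undist = cv2.undistort(img, mtx, dist, None, mtx)

# Top-down view via perspective transform (corner points are assumed)
src = np.float32([[580, 460], [700, 460], [1040, 680], [240, 680]])
dst = np.float32([[260, 0], [1020, 0], [1020, 720], [260, 720]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(undist, M, (undist.shape[1], undist.shape[0]))

# Gradient in the x direction emphasises near-vertical edges like lane lines
gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
scaled = np.uint8(255 * sobelx / np.max(sobelx))
binary = np.zeros_like(scaled)
binary[(scaled >= 20) & (scaled <= 100)] = 1  # assumed thresholds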
Figure 7: Process of obtaining the edges of lane lines by trying different combinations of color and gradient thresholds. 4) Fit a polynomial to a lane line Once we have performed steps 1–3, we need to find a way to fit a line to the lanes. The tricky part is that these lines are curving, hence we use a polynomial. One way of starting the fit is finding peaks in a histogram and running a sliding window up from the peaks. These peaks are good indicators of the position of the lanes. The final output of these steps is shown in Figure 8. The benefit of this method is that once we know where the lines are, we do not need to repeat the sliding window search. Figure 8: Using a sliding window to fit a line on lanes. Finally, we obtain results like those below when implementing the method on a video stream. In my next post, I will talk about how to detect vehicles instead. Disclaimer: Most of the images posted here are taken from Udacity’s Self Driving Car Engineering Nanodegree. I have also skipped many steps to make this post friendly to read. Feel free to message me if you want to know more about the project.
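A sketch of the histogram-based line fit from step 4; a full implementation would walk sliding windows up the image, whereas this simplified version, with the function name fit_lane_lines being my own, just splits pixels at the histogram midpoint:

import numpy as np

def fit_lane_lines(binary_warped):
    # Histogram of the bottom half: peaks mark where the lane lines start
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint

    # Fit a second-order polynomial x = f(y) to the pixels of each line
    ys, xs = binary_warped.nonzero()
    left = xs < midpoint  # crude split; sliding windows would refine this
    left_fit = np.polyfit(ys[left], xs[left], 2)
    right_fit = np.polyfit(ys[~left], xs[~left], 2)
    return left_fit, right_fit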
Perception in Self Driving Cars — Part 1
0
perception-in-self-driving-cars-part-1-1ccca42b169
2018-05-31
2018-05-31 23:35:31
https://medium.com/s/story/perception-in-self-driving-cars-part-1-1ccca42b169
false
746
null
null
null
null
null
null
null
null
null
Self Driving Cars
self-driving-cars
Self Driving Cars
13,349
Vivan Raaj
Autonomous Robotics/Self Driving Car Developer & a startup fan.
b5f8abc906d1
vivanraaj
30
32
20,181,104
null
null
null
null
null
null
0
null
0
522eff4d5814
2018-03-22
2018-03-22 15:59:22
2018-03-26
2018-03-26 14:18:21
1
false
en
2018-03-26
2018-03-26 14:18:21
1
1ccd81c0ed8d
7.166038
22
0
0
With massive investment in AI and major government dollars and support, Montreal’s tech ecosystem is heating up. We spoke with our…
5
The future of Montreal’s tech ecosystem looks bright With massive investment in AI and major government dollars and support, Montreal’s tech ecosystem is heating up. We spoke with our investors at Desjardins Capital and the Caisse de dépôt et placement du Québec about why they backed our OrbitMTL early-stage strategy and what they see in the future for Quebec’s tech entrepreneurship. In the past decade, Montreal’s startup ecosystem has gone from fledgling to a Canadian leader in startup investment. In 2017, Montreal saw a 64% increase in deals, with over $1 billion CAD invested in the city’s burgeoning startups. Four of the five biggest deals in Canada last year (totalling $592 million) were Quebec companies, highlighting the global ambition of the region’s entrepreneurs. With the much-needed increase in venture capital funding, government programs and support for new entrepreneurs, plus a global competitive advantage in AI, the pieces have fallen into place for a proliferation of high-impact companies. When Real Ventures announced our $180 million raise in November last year, we noted that as well as broadening our horizons and investing in later-stage companies across Canada, we continue to focus major energy — and a dedicated team — on our OrbitMTL strategy, which allocates $30 million in seed investment and hours of mentorship and support to early-stage startups in Quebec. To dig into why now is such a great time for tech entrepreneurship in Quebec, Sylvain Carle, one of the Real Partners on the Orbit team, spoke with two of our investors — Tom Birch, Managing Director, Quebec Funds & Technology with the Caisse de dépôt et placement du Québec (CDPQ), and Jacques Perreault, Associate Vice President, Technology investments at Desjardins Capital — about the state of the Quebec tech ecosystem, the opportunities for growth, and their view on what the province needs to continue building on this new momentum. Sylvain Carle: Let’s start with the current state of the Quebec tech ecosystem. Why is it a great time to be in this business together? Jacques Perreault: If you look back 10 years at the state of the VC industry in Montreal, it was basically a nuclear winter. Nobody was in the market. For tech companies, it was tough to get financing and I think Real was one of the first to come back and restart the startup community with its first fund [the Montréal Startup seed fund, raised in 2007]. It was a nice trigger for the ecosystem in Montreal and other funds followed: iNovia, Brightspark, Whitestar. That was the beginning of a positive cycle. When I’m looking at the ecosystem today, we see different kinds of players — angels, VCs, institutional VCs, corporations. And the corporations now believe in technology and in the fact that we should invest in early-stage companies. With the emergence of angel investors and corporations, we are seeing more and more involvement — either by buying the technology or starting to invest in some of the startups here in Montreal, in Quebec and in Canada. Tom Birch: The Orbit investment fund is also very, very important because Real built the FounderFuel program — a way to coach entrepreneurs. Lots of people have great technology but they don’t know anything about sales, marketing, direct sales, indirect sales channels, online….
But these guys [at Real] get it: they have to coach all the entrepreneurs in Montreal, in Quebec, and eventually [the entrepreneurs will] start the first company, maybe create a liquidity event for Real Ventures, but then they’ll have a second at bat, and a third at bat, and then they’ll have more and more chances themselves. JP: That’s a big change: to see the ecosystem evolving with time and seeing more and more great projects. We didn’t see that ten years ago. We’re also seeing a new generation of entrepreneurs — first-time or second-time entrepreneurs — coming into the market with realistic projects. They know the space, they know what they need to do. Some of the entrepreneurs made mistakes in the past but now they know what to do to create success. TB: Another problem we had in Quebec ten years ago was if an entrepreneur could sell their company for 40 million bucks, they sold it. The VC funds were so small they wanted to create an early exit so they could go raise another fund. We were in this never-ending vicious cycle where people wanted to build a company and sell it; build a fund, make a bigger fund, and nobody was actually building for the long term. SC: So now there’s more capital available for startups but there are way more startups too. Do you see that as one of the challenges for startups here in Quebec? What have you heard or seen that shows us the struggles of entrepreneurs? JP: I see two holes. For seed investment, although we have players like Orbit/Real Ventures, I think we’re still missing players in that space. We are seeing angels, we have all kinds of accelerators and programs, but I don’t think we have all the financing needed at the beginning of the food chain for all of the tech startups. I think that’s the first challenge here. We should have a stronger financing structure at that point [in the cycle]. The other challenge is after companies have gone through seed financing, series A, series B and are getting to series C — at this point I also see a lack of investors. Big institutions are only just starting to invest, but there’s still a big hole there. Sometimes the financing also comes from US-based VCs, which is good, but it would be interesting to have more local players. US VCs come in and fund or acquire those companies and sometimes we are selling them too early. TB: Exactly. In the case of Lightspeed, we [the CDPQ] went in and invested in their Series C round — roughly $32 million US, then two and a half years later there was an opportunity to take out Accel, which was a previous investor, and we put in $136 million US. It was very important for us because now the founder, Dax Dasilva, and his management team can take a five-year view of building out the company versus trying to create a liquidity event for an American investor. We need to think bigger and that can only happen if we take a long view and invest in these companies for the long term. SC: When we were fundraising last year, our Partner, John Stokes, often said it’s year ten of a 20-year cycle. So what’s next? Anything we should be doing given those two gaps in the market? JP: I think we are starting to see the ingredients: the accelerators, the investors, the government programs, corporations. What I think we are lacking here in Quebec — if I compare it, for example, to Toronto — is that we are working in silos. I don’t see a common view or a common strategy for what we should do next to make sure we’re able to build the leaders of tomorrow in terms of tech companies.
TB: In terms of building out the ecosystem, we also really need to invest in small early-stage VC funds to increase the size of the farm team. We have to make sure we have a minimum critical mass. The venture capital market in California is roughly 34 billion USD per year; in Tel Aviv it’s 3.2 billion US per year. Israel has roughly the same population as the province of Quebec, and Quebec is at roughly 500 million US a year. The only way for us to create more success is to start more companies, get more at bats and create more opportunities to create the next Google of AI or Google of IoT, but if we don’t have enough at bats, we won’t have the chance to succeed. I also believe that 25 years ago we had a world competitive advantage with respect to telecom software, which turned into fibre optic technologies and the semiconductors associated with fibre optic technologies, and we haven’t had a competitive advantage in any other technological field until now. Having the opportunity to work with Yoshua Bengio, the founder of MILA (Quebec’s AI institute), we’re going to actually have a chance to build up the critical mass in AI. One of Real Ventures’ investments, Element AI, will be in the same building, as well as other Montreal VC-backed big data scale-ups and start-ups. Within three years, we’ll have 5,000 AI knowledge workers in two blocks in Montreal, and for me that’s minimum critical mass. So now VC funds are starting to come to Montreal to invest in our ecosystem, and that means we’re going to develop more and more companies in Montreal that want to stay in Montreal even when they create a liquidity event. They’re going to want to stay here and build other companies, so in terms of a world sustainable competitive advantage, I think with AI we finally have a chance to do it. It’s weird because when the Internet was founded in 1992, everyone was investing crazily. In 1997 to 2000, it was like the wild wild west. But now, I actually believe the wild west is just starting because we have all of the fundamental technologies available today, we have the foundations and it’s going faster. We now have the people to manage the technology evolution, and you throw AI in there and I think in the next ten years we’re going to see such massive technology growth that we’re in the right place at the right time. SC: Any closing comments? TB: I’m excited to be in Montreal. Quebec is taking a strategic approach to creating long-term value by investing in innovation. That’s really key: we’re selectively investing in AI and 5G — which are key enablers of IoT — and investment in innovation will drive the rest of the economy. We have a great quality of life and some of the world’s best technologies, so I think we’re sitting in a nice position. JP: For me, what you’re building — Orbit, Real Ventures, FounderFuel — is one of the catalysts of the Quebec ecosystem. Because of your reach, the number of companies you’re seeing, the intelligence you’re able to get with those companies, all of your co-investors, your LPs — you’re playing a very important role in the market. At Desjardins, we have the same philosophy and yes, we want to make a return but we also want to build something that’s sustainable. It’s very important and it’s part of our DNA but we’re also looking for partners with the same views. We want to build the leaders of tomorrow and there’s only one way: we have to be patient, work closely with the companies and make sure that everyone in the ecosystem is aligned and has the same goals.
This interview has been condensed and edited for clarity.
The future of Montreal’s tech ecosystem looks bright
98
the-future-of-montreals-tech-ecosystem-looks-bright-1ccd81c0ed8d
2018-04-06
2018-04-06 17:08:47
https://medium.com/s/story/the-future-of-montreals-tech-ecosystem-looks-bright-1ccd81c0ed8d
false
1,846
Knowledge is the key to building transformative technology.
null
RealVenturesVC
null
Believing
hello@realventures.com
believing
STARTUP,VENTURE CAPITAL,TECHNOLOGY,VC,THOUGHT LEADERSHIP
realventures
Venture Capital
venture-capital
Venture Capital
32,826
Real Ventures
Real is a leading source of capital for game-changing entrepreneurs, and a driving force behind emerging tech ecosystems.
60148c1bc38e
realventures
2,896
158
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-09
2017-12-09 15:06:04
2018-01-06
2018-01-06 19:37:05
1
true
en
2018-06-09
2018-06-09 17:25:15
5
1ccde0b46fae
9.773585
1
1
0
We could be out of work in fifty years. We’re not ready for it.
5
The Decline and Fall of the Human Empire We could be out of work in fifty years. We’re not ready for it. “Humanity Finally Wins One as a Human Racer Defeats Yamaha’s Robot Motorcycle.” The headline of the short post on the Popular Mechanics blog was tongue-in-cheek, but it succeeded on the back of a very real fear: our defeat at the hands of robots. Its writer dutifully chose that word, “robot,” over “self-driving” or another more innocuous word like “autonomous,” thus welcoming images of beady-eyed anthromorphs to invade the humdrum world of algorithms, sensor arrays and software engineers. But Yamaha, too, knew that robots get clicks, because atop its self-driving bike, which probably needed no more than a few tidy sensor housings to perform to spec, the company’s engineers had instead perched a being direct from our robot future: a lithe, porous figure clad in sleek blue metal panels evoking the swoops of breeze-blown ribbons around a dark mechanical core. The outline was of a man. There was little doubt, looking at it, that this robot had a target, and that the target was us. But it lost! The story, with a slight wink to its technophile readership, trumpeted a rare moment of human grit triumphing over technology, like a reprise of the legend of John Henry, lacking only the limp form of a dead rider crossing the finish engulfed in flames. Beating the machines, we all must know by now, is the entire point. So if a robot were writing these words with gleaming, spindly fingers stretching across the keys of a MacBot Pro, this is the part where it would pause, lips scrunching into an awkward tin grin as it whispered to no one, “Resistance is futile.” Because the simple fact is, the machines will win. Or, to state the obvious, the machines have won. You’re reading this on a device that fires five times faster than the neurons in your head. There are cars on the road so advanced that your children may simply never touch a steering wheel in their lives — lives which are made safer by this fact. Recent machine learning interventions can even help prevent suicide by identifying the early warning signs that humans simply can’t see. So why do so many of us still imagine that we can flip this script — or that if we could, that we should even want to? Cerebrum Ex Machina Looming dark over any modern discussion of machines is the vexingly opaque specter of high-level machine intelligence, a phenomenon whose emergence many of us are destined to witness in our lifetimes. Estimates vary, but the consensus is that this will happen soon. How soon, exactly? A survey distributed to machine intelligence experts in 2012 and 2013 by Nick Bostrom and Vincent Müller found that “[t]he median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075.” And if “high-level machine intelligence” weren’t enough of a golem, the authors then add (emphasis mine), “Experts expect that systems will move on to superintelligence in less than 30 years thereafter.” In short, if and when the champagne pops at the close of this century, our minds will almost certainly be inferior in every way to those of our tools. Are we right to fear this? Future generations may indeed look back in curiosity at our hand-wringing over today’s trend toward ascendant machines. 
They might wonder to themselves whether humans were as anxious at other, earlier points in history when technology so profoundly outpaced biology—in the case of cars, say, or typewriters. But maybe this moment really is different from the technological leaps of the past. Because to look at it, the valley dividing our hill from the one on which the machines stand, frothing and clanking their swords, is no ordinary battlefield; it’s the battlefield of worth. And today, to talk about worth, when it comes to us humans, is to talk about work. We don’t remember a person like Steve Jobs, for instance, simply because he existed; we remember him because of what he knit from the fabric of mere existence. We say that he “made something” of himself, and we mean it as the highest form of praise. And the same is true of any other human name brought to mind by the term “great.” To be great can only mean to do great things. But there’s no law of physics to prevent future machines from also doing great things, and they will inevitably also be able to do those things better, faster, and more efficiently. As anyone who has ever taken a sick day can attest, our human wetware has real limits imposed on us by our biology, things about us that we cannot change. Machine hardware, on the other hand, has far fewer constraints. It is limited not by fragile regulatory systems or the physical size of a cranium, but simply by the pace at which knowledge can be created to improve its structure. We don’t sit around redesigning humans — at least not in the technical sense — but we do, constantly, sit around redesigning technological systems. And under our own direction, these machines have outclassed us humans time and time again. No honest observer would dub this a passing trend. There’s no law of physics to prevent future machines from also doing great things, and they will inevitably also be able to do those things better, faster, and more efficiently. It’s why, scrolling our news feeds, we pause and flinch at the robot on the bike. It’s a new milestone—aside from the occasional trained bear, this is a thing only we are supposed to be able to do. Up till now, it’s been fairly easy for most of us to brush aside the growing lot of human tasks that machines have added to their repertoire. Cars and typewriters, and apps and smart homes, do not reason; they don’t paint masterpieces, they don’t chair boards, they don’t make plans and they don’t vote. And even here, something like Yamaha’s “robotic” motorcycle is more of a totem than an actual manifestation of our worst fears. Even trained bears have a limit. But the breathless prose lining the pages of thinkpieces and books about the rise of artificial intelligence promises something that is quite literally incomprehensible to us—machines that can outthink us, and more: machines that can make other machines that outthink them. We’re already getting glimpses of the strangeness of the coming intellectual inversion. Algorithms already in existence are solving complex situational problems that humans never could, and they’re doing it in bizarre ways that are impossible for their own creators to interpret or explain. This is the future into which we now march: a world in which our needs are met, quite literally, by a god-like progeny who speak in the tongues of aliens. The humanoid atop the Yamaha is clearly as much a distraction as anything, just like all the other human-based templates applied to robots of the past, both real and imagined. 
In truth, if we’re intent on being as scared as possible about what’s actually charging us from the opposite hill, then to imagine robots that look or think anything like us is to completely miss the point. The machines we should be worried about are nothing like us; they surpass us in every conceivable way, including, quite notably, the physical forms they take. Look at how artificial intelligence researchers have mapped their way toward the grail of superintelligence. They’re not busying themselves building literal brains. While it’s true that so-called “whole brain emulation”—the construction of a faithfully rendered working model of our own brain—is a goal of some, it’s also true that the broader field of AI has swiftly and convincingly painted a target on greater-than-human-level computer cognition which experts agree can be bullseyed without the construction, or the constraints, of a physical brain. These prognosticators differ only in their estimates of how long this will take. But even at this early stage, advancements in artificial neural networks and machine-learning algorithms signal to us that there are more elegant paths to intelligence than the fragile, wet jumbles of wrinkled gray mush between our ears. If the development of superintelligence were being pursued only within the limits of human build and body, in fact, we might place our predictions for its arrival much farther down the road. Our own physical form is more of a cage than a ladder when it comes to the things we construct. This was true of almost every past invention of note, from so-called “simple machines” on up, and it promises to be true of tomorrow’s machines as well. The difference is, while yesterday’s inventions may have been stronger than us, faster than us, and capable of superhuman feats of memory, tomorrow’s will be capable of the one thing that has faithfully separated us from our creations until now: thought itself. Into a Post-Human Future What would it mean for humans to no longer be the most creative species on earth? It’s a funny question, maybe better rephrased as, What would it mean for no species on earth to be its most creative inhabitant? Or should AIs, in fact, count as a species? Google Co-Founder Larry Page and other so-called “digital utopians” think they should. These optimists herald a future in which our machines align fully with our values and help us achieve a previously unimaginable prosperity. But what will we do in a utopia like the one Page predicts, in which we are no longer needed for new modes of transportation to be designed and built, or for diseases to be cured, or for important conversations to be advanced for the common good? What exactly is a human being in such a world — a world in which we are the lowest-performing sentient creatures at any task that was previously worth our time? Suddenly, the valley beneath us turns dark. Not only are we ill-equipped to enter it, we struggle to even guess how we might become equipped. What exactly is a human being in such a world — a world in which we are the lowest-performing sentient creatures at any task that was previously worth our time? Even if you aren’t fully on the side of the utopians, to characterize the coming wave as a battle may indeed be the wrong outlook. Even those familiar with the finest pores of the AI landscape disagree as to its most likely future. Maybe superintelligence is impossible in machines. 
Maybe those machines will simply amplify human ability instead of negating it (like a very very advanced shoe, or dishwasher). Maybe machines will only displace some human jobs, and there really are tasks that only we will ever be suited for. Smart, future-oriented thinkers from Wired founder Kevin Kelly to billionaire technologist Peter Diamandis to the futurist Ray Kurzweil have all championed the rise of AI, and have discounted its impact on human worth. To hear them tell it, the future is unbounded upside with virtually no risk. Still, few are suggesting that we should ignore the potential clashes that could await us in the development of thinking machines. Such an error would be to walk blindly into an automated world that may care only as much about humans as we now care about our house pets. Even if the chances of this are small, they are not zero. And just as we engineer buildings to withstand storms and earthquakes they may never face, so even a slim chance of a dire outcome demands a serious look. What is the next phase of humanity defined by, if our value can no longer depend on the scale of our achievements? How do we want to live in the days when our work is worthless, and when all our most noble contributions to society pale in comparison to those of the machines we have birthed? Even if our superhuman machines can teach us how to live better lives, as many hopefuls predict, are humans simply tomorrow’s trained bear, valued for our novelty, rather than from some universal conception of worth? Machine learning experts like Jeremy Howard have sounded this alarm before, recommending sweepingly broad changes to the fabric of human societies. We should separate labor from earnings, Howard says, and move to a craft-based economy. Better education will not help us, and neither will better incentives around labor. Voices from the field tell us that the value of human work is perched at the final precipitous curve of an exponential drop that will play out over the next 20-to-40 years. This will happen regardless of whether or not machines reach full-fledged superintelligence. Optimists believe we can weather this storm, but for them to be proven right will have meant that a spectacular chorus of humanity called in unison for the kinds of changes and conversations that are necessary. Not only is this kind of chorus exceedingly hard to imagine in the fractious hellscape of modern public discourse, but it’s hard even to imagine a viable coalition forming within the narrower industries of AI research and development, which are currently home to intractable debates about the future of machine intelligence. Such debates are made all the more complicated by the philosophical mysteries of consciousness and thought which are wrapped up in any talk of machines that can reason. It’s a code we haven’t cracked yet — and one we probably won’t crack before we need to begin setting the requisite guardrails for a machine-led future. What should we aim for? The philosopher and neuroscientist Sam Harris laid out his own aspiration plainly at a November 2017 event: “In a world of true abundance, you shouldn’t have to work to justify your life.
You should be free to enjoy the wealth of the world.” This is a sentiment which thinkers like Harris may find easy to accept, but it’s still a very uncommon — and uncomfortable — one for large portions of society, including many conservative communities who have attached a spiritual level of significance to labor, and who believe that leisure should be its reward rather than its replacement. But if you told a child that in the future, machines would do all the work she sees adults doing today, you would undoubtedly see in her eyes not a look of panic, but of relief. And you’d be hard-pressed to offer any coherent criticism of her response without resorting to the kind of magical thinking that surrounds modern working life. It’s a reminder that our values are shaped by our environment, and that we don’t start our lives wishing for a daily commute and a 9-to-5. Our best future will grow out of a childlike posture toward hard questions of human value, rather than a dismissive one. And these are questions which we have all, unfortunately, run out of time to avoid. The robots are coming.
The Decline and Fall of the Human Empire
1
the-decline-and-fall-of-the-human-empire-1ccde0b46fae
2018-06-09
2018-06-09 17:25:16
https://medium.com/s/story/the-decline-and-fall-of-the-human-empire-1ccde0b46fae
false
2,537
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Nathan Driskell
I make films, write stories and chase my curiosity.
940620cbc0ec
ndriskell
270
541
20,181,104
null
null
null
null
null
null
0
null
0
e1df37a4e9c1
2017-08-25
2017-08-25 22:48:27
2017-09-01
2017-09-01 16:37:47
24
false
en
2017-09-01
2017-09-01 16:37:47
28
1ccdfeb33ea2
11.368868
71
0
0
Introduction
5
Generative Machine Learning on the Cloud Introduction In the last year we’ve witnessed rapid advancements in hardware capabilities, continued development of user-friendly machine learning libraries, and AI connectivity for the maker community. With this backdrop of increasingly user-friendly AI, I spent the summer working with Google’s Artists & Machine Intelligence (AMI) program on a cloud-based tool to make generative machine learning and synthetic image generation more accessible, especially to artists and designers. This post will explain some common generative model structures as well as pitfalls and resources for people interested in coding their own. Before I get to the project, a little about me. Me, hiking the Enchantments My name is Emily Glanz. I graduated from the University of Iowa about a year ago with a B.S. in Electrical Engineering, and have been working on various Google teams as part of the Engineering Residency program. My experience with machine learning prior to AMI was centered around prediction and classification tasks while working on a hearing loss diagnostic tool in college. The goal of my AMI project was to lower the barrier to entry for using Google’s Cloud ML infrastructure. I wanted to make it easier to train and use generative models for creative applications. Using the cloud gives users easy access to GPUs for training without needing to set up a workstation. A concise and easy-to-use TensorFlow example acts as a perfect starting point for modifications and customization. Check out the project on Github: GenerativeMLonCloud. The end-to-end system design allows a user to provide a custom dataset of images to train a Variational Autoencoder Generative Adversarial Network (VAE-GAN) model on Cloud ML. From here, their model is deployed to the cloud, where they can input an embedding to have synthetic images generated from their dataset, or input an image to get an embedding vector back. In addition, I created an App Engine web application to demonstrate using Cloud ML’s python API to interact with the deployed model. The scope of the current tool focuses on generative images, but we hope to add examples in the future that deal with other inputs such as text or audio. CNNs, VAEs, and GANs To kick off this project, I took the route taken by many jumping into neural nets: a few days spent on TensorFlow tutorials, a couple of read-throughs of Chris Olah’s fantastic blog, and some nice time digging through the endless examples of generative neural nets on Github. After the Convolutional Neural Net (CNN) MNIST tutorial by TensorFlow, I was ready to dive headfirst into generative image models. I looked first at Variational Autoencoders (VAEs), then Generative Adversarial Networks (GANs), and ended up using a VAE-GAN combination as the final model for the image-to-image model. The first step in developing the generative tool was generating handwritten numbers using a VAE which I created from the CNN tutorial. I started with a VAE as this particular network has been one of the most popular approaches to generative imagery in the past couple of years. The MNIST dataset is commonly used as it is a standard benchmark for image-based neural networks: Generated MNIST digits using a VAE The MNIST dataset is a great black and white image dataset to get started with — TensorFlow even provides a nicely formatted version of the set with their library. A VAE consists of an Encoder network and a Decoder network.
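Before walking through the architecture, here is a minimal TensorFlow 1.x sketch of the two ingredients every VAE shares: the reparameterization trick and the reconstruction-plus-KL loss. This is not the project's exact code (that lives in the Github repo above), just a compact rendering of the idea.

```python
import tensorflow as tf

def sample_latent(z_mean, z_logvar):
    """Reparameterization trick: z = mean + sigma * epsilon.

    Sampling this way keeps the operation differentiable with respect
    to the encoder outputs, so the whole network trains end-to-end.
    """
    eps = tf.random_normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_logvar) * eps

def vae_loss(x, x_logits, z_mean, z_logvar):
    """Reconstruction term plus KL term, averaged over the batch."""
    # Per-pixel cross-entropy between the input and the decoder output.
    recon = tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits),
        axis=1)
    # KL divergence pushing the latent codes toward a unit Gaussian prior.
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=1)
    return tf.reduce_mean(recon + kl)
```

The KL term here is exactly the "unit gaussian" constraint on the Encoder discussed next.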
The Encoder takes input images and encodes them into embedding vectors that live in a latent space. The latent space has a lower dimensionality than the input image, which is why it is sometimes referred to as the ‘bottleneck’ of the network. This bottleneck forces the Encoder to learn an information-rich compression of the raw input data as it maps the image to the latent space. For example, one of the features learned by the VAE could be the amount of ‘smile’ in a face. A constraint is added to the Encoder that forces the network to create latent vectors that follow a unit gaussian distribution. The Decoder can reconstruct the given input from these embeddings. These models become generative when a randomly sampled vector from the unit gaussian (the distribution enforced in the Encoder) is passed into the Decoder; simultaneously the Decoder learns to use these embeddings to generate synthetic images from the latent space. The network is trained end-to-end: the Encoder learns the most important features of the input image, allowing the Decoder to reconstruct the input image from the latent vector representation. Image Generation In the variant I used, I have a couple of convolutional layers in my encoder and a couple of convolutional transpose layers (aka “deconvolution”) in my decoder (I’ll get more into the detailed architecture of the VAE later). I found this tutorial to be a great explanation of VAEs. At this point I took a detour into Conditional VAE (CVAE) land, and used the MNIST dataset to play with this autoencoder variation, which conditions on label information to let the user specify which number they would like to generate. The CVAE is trained by appending the one-hot encoded vector representing the label of the input image (so if the input image is a 9, the label vector is [0,0,0,0,0,0,0,0,0,1]) to the input image and the latent space vector. Then, to request a specific generated number, the user can input a random embedding sampled from the unit gaussian distribution combined with the one-hot encoded vector of the desired number. Generated MNIST digits using a CVAE One fun thing that can be done with a CVAE is to mix together two labels (in this case two numbers). Usually, we’re only supposed to set one of the bits of the one-hot vector high, but what happens if we set two bits high? What if we asked the CVAE to generate an image with the condition [0,0,1,0,0,1,0,0,0,0]? This is essentially asking the CVAE to generate an image with labels 2 and 5. In this case, the decoder tries to generate an image that matches this condition, resulting in an image that looks like a 2 combined with a 5. Beyond digits, this feature of CVAEs could be used to combine images of different labels; one application could be generating synthetic faces matching specific attributes, like ‘female, brunette, etc’. This chart shows what happens when each number (0 through 9) is combined with 2: Each digit combined with 2 For the above image, the number requested is the desired number (0 through 9) OR’ed with the one-hot representation of 2. For example, to get a 7 combined with a 2, the embedding vector looks like: [0,0,1,0,0,0,0,1,0,0] concatenated with a random sampling from the unit gaussian!
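The conditioning trick is easy to sketch. Here is a hypothetical helper for building the decoder input; the latent dimension is an assumed value, and the helper presumes a decoder trained to consume the concatenated vector as described above.

```python
import numpy as np

def cvae_decoder_input(digits, latent_dim=100, n_classes=10):
    """Build a decoder input asking a CVAE for a blend of MNIST digits."""
    label = np.zeros(n_classes, dtype=np.float32)
    label[list(digits)] = 1.0  # e.g. digits=(2, 5) sets two bits high
    z = np.random.randn(latent_dim).astype(np.float32)  # sample the prior
    return np.concatenate([z, label])  # what the conditioned decoder consumes

# Ask for an image that looks like a 2 combined with a 5.
mix_2_and_5 = cvae_decoder_input(digits=(2, 5))
```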
Next, it was time to add a GAN (Generative Adversarial Network) loss onto the end of the VAE to sharpen the generated output. VAEs tend to make images blurry because of the way the network is penalized while training. For VAEs, the reconstruction cost (typically a mean-squared-error (L2) loss) penalizes slightly moved edges and features with respect to the input image. Adding a GAN, which uses the adversarial loss of the Generator vs Discriminator described below, sharpens the output, as this loss is more forgiving of inexact reconstruction and focuses on the realism of the image features instead. A GAN is trained using adversarial learning and consists of two networks, a Generator and a Discriminator. The Discriminator’s goal is to correctly distinguish between “real” and “fake” input (in this case, real MNIST images from generated MNIST images). The Generator’s goal is to produce output that fools the Discriminator. These two networks play a game to see who can beat whom. In the case of adding a GAN loss to the VAE, the VAE supplies the generator and all we need to do is add a discriminator network. Check out this blog for a more thorough rundown on GANs. Less fuzzy output from the VAE-GAN! MNIST digits generated by the VAE MNIST digits generated by the VAE-GAN From this point, I started developing a model for RGB images. I took the VAE-GAN architecture I had used with the MNIST digits, and beefed it up with inspiration from: this DCGAN, this training technique, and this VAE-GAN on Github. The DCGAN link is where most of the layer architecture for this VAE-GAN originated. A very simplified view of the network looks visually like this: The Encoder Network: The Decoder / Generator Network: The Discriminator Network: Batch normalization was used in each of the networks. While training the VAE-GAN I encountered all the woes associated with GAN training, including mode collapse, exploding gradients, and generated noise. Some of the generated faces Another way to explore the embedding space is using spherical linear interpolation, aka slerp. This technique, introduced in this paper and applied to VAEs and GANs in this paper, allows me to traverse the space between two known embedding vectors. For example, I can take an image of a smiling woman and an image of a man who is not smiling, and explore the transformation between the two in the latent space. Demonstration of exploring the latent space with ‘slerping’ The top images are the reconstructions of the first input image, and the bottom images are the reconstructions of the second input image, with the interpolated images in between. The second column from the right shows the result of using a non-face image, a picture of a crow, in the mix. The rightmost column shows the result of two non-face images (a dog and a cat). The autoencoder has trouble reconstructing the images of animals because the latent space has been customized for faces, specifically those of the CelebA dataset. The equation for slerp is: slerp(q1, q2; mu) = [sin((1 - mu) * theta) / sin(theta)] * q1 + [sin(mu * theta) / sin(theta)] * q2, where q1 and q2 are the embedding vectors produced by the encoder from the two input images, theta is the angle between the two vectors, and the parameter mu varies from 0 to 1.
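As a minimal NumPy rendering of that equation (my sketch, not the project's code):

```python
import numpy as np

def slerp(q1, q2, mu):
    """Spherical linear interpolation between two embedding vectors."""
    q1n, q2n = q1 / np.linalg.norm(q1), q2 / np.linalg.norm(q2)
    # Clip guards arccos against tiny floating-point overshoot.
    theta = np.arccos(np.clip(np.dot(q1n, q2n), -1.0, 1.0))
    if np.isclose(theta, 0.0):  # (nearly) parallel: fall back to lerp
        return (1.0 - mu) * q1 + mu * q2
    return (np.sin((1.0 - mu) * theta) * q1 +
            np.sin(mu * theta) * q2) / np.sin(theta)

# Walk the latent space between two encoded faces in ten steps:
# path = [slerp(z_a, z_b, mu) for mu in np.linspace(0.0, 1.0, 10)]
```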
Interesting Training Difficulties Training generative networks can be tricky and it’s worth recognizing some of the common ailments and their remedies, so let’s detour into the previously mentioned woes. Case: Generator collapsed and produces a single example (mode collapse) This case shows that a generator does not always converge to the data distribution. Here you can see the generator converging to a single example At first it appears that the VAE-GAN is starting to learn different features (like hair, face orientation, almost sunglasses at one point) but then we see the generator (in this case the decoder of the VAE portion) break down and produce only one example. Playing around with learning rates for the networks and batch normalization solved this problem in my specific case. Case: Generator too strong, exploiting non-meaningful weakness of discriminator (loss / gradients exploded) The consequences of a generator not trained properly The generator first just generates images of a solid color. The generator is not successfully generating images even close to face-like. Training the generator and discriminator based on loss thresholds kept one network from getting too much stronger than the other for me. Case: Learning rate too high for VAE over discriminator network A learning rate too high By just altering the learning rate of the VAE, the network started to generate noisy faces. Experimenting with learning rates for the different optimizers was tricky. Case: Choosing the correct parameters / network architecture Choosing the appropriate embedding size, number of training steps, etc. is crucial to getting realistic output from a GAN. I found this github site to have some awesome tips on GAN training. An old version of the model: Embedding Dimension: 2048, Training Steps: 20000, Batch Size: 10, ReLU activations, sigmoid as final activation in the Decoder/Generator, no batch normalization Current model: Embedding Dimension: 100, Training Steps: 80000, Batch Size: 64, Leaky ReLU in Discriminator, batch normalization in Encoder, Decoder/Generator, and Discriminator, tanh as final activation in Decoder/Generator (so image values in [-1,1] instead of [0,1]) Further sources of information for GAN training difficulties: http://torch.ch/blog/2015/11/13/gan.html https://github.com/tensorflow/magenta/blob/master/magenta/reviews/GAN.md Using Cloud ML Once I had a generative model for images, it was time to really solidify the end-to-end system, the main goal of this project. The dream is to allow a user (with a directory of images) to train their very own VAE-GAN model on their very own image dataset. System Design System for Generative ML on the Cloud Here is an overview of the steps required to make the user’s generative model dreams a reality: Preprocessing: a directory of images (either JPEG or PNG) is converted into TFRecords and split into evaluation and test datasets. These are stored in the user’s Cloud Storage bucket. Training Job: a training job kicks off training of the VAE-GAN model using the user’s TFRecords (on GCS) as input. The TensorFlow VAE-GAN code is packaged and uploaded to the user’s GCS bucket. The model is trained using Cloud ML Engine (GPUs/CPUs/RAM specified in a config file) with the checkpoints and final SavedModel being saved to GCS. Create and Deploy Model: a model is created and the SavedModel code is then deployed onto the Cloud ML Engine. Prediction (Generation) Jobs: the prediction API is used to access the trained model hosted on Cloud ML. For one mode, embeddings are sent as input, with a synthetic image acting as output. For the second mode, an input image is supplied, with an embedding acting as output. This job can be run from the command line using the cloud sdk or from the python library. I used an App Engine project to provide a sample interface for the user to generate images from two trained models.
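From Python, that prediction call can be sketched roughly as follows with the Google API client. The input field name 'embeddings' depends on the exported SavedModel's serving signature, so treat it (and the project/model names) as placeholders rather than the project's exact interface.

```python
from googleapiclient import discovery

def generate_image(project, model, embedding):
    """Send one embedding to a deployed Cloud ML Engine model for decoding."""
    service = discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    response = service.projects().predict(
        name=name,
        body={'instances': [{'embeddings': embedding}]}).execute()
    if 'error' in response:
        raise RuntimeError(response['error'])
    return response['predictions'][0]  # the decoded synthetic image
```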
System Setup To get the tool up and running on Cloud ML, the Cloud environment first has to be set up. A Cloud Platform project has to be created on the projects page, billing has to be set up, and then the Cloud ML Engine and Compute Engine APIs have to be enabled. To use the command line interface, the Cloud SDK must be installed. Follow these instructions to set up the cloud environment. Running the System From here, the user can begin running training jobs. I created a script that allows the user to specify an image directory and then takes care of preprocessing the images and starting the training job on Cloud ML. Other flags allow the user to further tune their training/preprocessing tasks, such as center-cropping the images or choosing which port to start their TensorBoard instance on (TensorBoard: the greatest way to monitor any TensorFlow training). A screen grab of the TensorBoard instance during training Another script I created allows users to create and deploy the models produced by their training jobs on Cloud ML. Once on Cloud ML, getting generated images or image embeddings is one API call away. End Notes Playing around with VAEs and GANs let me generate some fun images: Beyond faces… The MNIST dataset and CelebA dataset are great datasets to test and develop a network — but what else could one use to autoencode and generate? Here are some of my favorite generative art projects for inspiration: 8 bit art by Adam Geitgey Cats by Alexia Jolicoeur-Martineau GANGogh by Kenny Jones and Derrick Bonafilia Fake Kanji Experiment by David Ha Acknowledgements This work was supported by Google’s Engineering Residency program, on my rotation with Artists and Machine Intelligence. I’d like to thank Larry Lindsey and Mike Tyka for guiding me in generative machine learning and TensorFlow, as well as the entirety of AMI for answering any questions I had and giving me fantastic insight into the world of AI. Huge shoutout to Jac de Haan and Kenric McDowell for all the support for the project as well.
Generative Machine Learning on the Cloud
165
generative-machine-learning-on-the-cloud-1ccdfeb33ea2
2018-05-26
2018-05-26 14:16:51
https://medium.com/s/story/generative-machine-learning-on-the-cloud-1ccdfeb33ea2
false
2,496
AMI is a program at Google that brings together artists and engineers to realize projects using machine intelligence.
null
null
null
Artists and Machine Intelligence
artwithMI@google.com
artists-and-machine-intelligence
MACHINE LEARNING,MACHINE INTELLIGENCE,ART,ARTIFICIAL INTELLIGENCE,GENERATIVE ART
artwithMI
Machine Learning
machine-learning
Machine Learning
51,320
Emily Glanz
software engineer @ Google
fc7820af1961
emilyglanz
70
2
20,181,104
null
null
null
null
null
null
0
null
0
32103eee1119
2018-07-23
2018-07-23 20:16:56
2018-07-24
2018-07-24 10:58:25
2
false
en
2018-07-24
2018-07-24 10:58:25
0
1cce26966b5c
3.349371
13
1
0
Machine Learning (ML) is a subset of Artificial Intelligence (AI) which enable computers to learn from data and improve themselves without…
5
Introduction to Machine Learning for beginners Machine Learning (ML) is a subset of Artificial Intelligence (AI) which enables computers to learn from data and improve themselves without being explicitly programmed. Although machines are stone-hearted, they can also learn. That’s how your phone recognizes your fingerprint, that’s how Google Voice translates your speech to text and that’s how Siri communicates with you. As machines become more and more intelligent, AI has been applied to business, health care, finance, agriculture and several other sectors. In this post, I will walk you through a very quick introduction to ML, ML algorithms and the types of problems these ML algorithms can be applied to. Tom Mitchell (1998) — Well-posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E. Let’s apply the above definition to this problem: Suppose your email program watches which emails you do or do not mark as spam, and based on that learns how to better filter spam. What is experience E, task T and performance measure P in the above setting? E = The experience of watching you mark emails as spam or not spam. T = The task of classifying emails as spam or not spam. P = The performance measure which gives the probability that the program will mark emails correctly as spam or not spam. Generally, ML problems can be solved using the following ML algorithms: — Supervised Learning — Unsupervised Learning Other types of ML algorithms are: — Reinforcement Learning — Recommender Systems — Neural Networks — Support Vector Machines Supervised Learning A Supervised Learning algorithm is similar to the way a child might learn arithmetic from a teacher. This is because the data scientist acts as a guide to teach the algorithm what conclusions it should come up with. It requires that the algorithm’s possible outputs are already known and that the data used to train the algorithm is already labeled with correct answers. A Supervised Learning algorithm is usually applied to regression and classification problems. Classification and Regression Algorithms Regression problems are problems that map input variables to a continuous valued output, e.g. predicting stock prices or predicting the number of users that will like an article on Medium. Classification problems are problems that map input variables into discrete categories, e.g. breast cancer prediction, or predicting whether a picture contains a cat or not. Let’s analyze the scenario below: You’re running a company, and you want to develop learning algorithms to address each of two problems. Problem 1: You have a large inventory of identical items. You want to predict how many of these items will sell over the next 3 months. Problem 2: You’d like software to examine individual customer accounts, and for each account decide if it has been hacked/compromised. Is this a classification or regression problem? Problem 1 is a regression problem. This is because you are trying to predict the number of items sold over a particular duration, and there is no particular limit to the number of items you can sell over the three months. Hence, this problem has a continuous valued output. Problem 2 is a classification problem. This is because you are only trying to predict whether each account has been hacked or not…nothing more, nothing less. Hence, this problem has a discrete valued output.
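To ground the two problems in code, here is a toy scikit-learn sketch (a library choice of mine, not from the original post) with invented numbers: a regressor for Problem 1's continuous output and a classifier for Problem 2's discrete output.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Problem 1 (regression): predict units sold from, say, price and ad spend.
X_sales = np.array([[9.99, 100], [7.99, 250], [5.99, 400], [4.99, 600]])
y_sales = np.array([120, 340, 560, 810])        # continuous target
reg = LinearRegression().fit(X_sales, y_sales)
print(reg.predict([[6.99, 300]]))               # estimated items sold

# Problem 2 (classification): hacked (1) or not (0) from two invented
# account signals, e.g. failed logins and new devices seen this week.
X_acct = np.array([[0, 0], [1, 0], [8, 3], [12, 5]])
y_acct = np.array([0, 0, 1, 1])                 # discrete target
clf = LogisticRegression().fit(X_acct, y_acct)
print(clf.predict([[7, 2]]))                    # -> array([1]) or array([0])
```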
Unsupervised Learning An Unsupervised Learning algorithm allows one to approach problems with little or no idea of what the output will look like. Structures can be derived from data where the effects of the variables are not known. These structures can be derived by clustering the data based on the relationships among the variables in the data. Clustering Algorithm An Unsupervised Learning algorithm is usually applied to clustering and non-clustering problems. Examples of clustering problems are market segmentation, social network analysis, organizing computing clusters etc. (a short code sketch of clustering appears at the end of this post). An example of a non-clustering problem is identifying different speakers in a particular voice note. One cannot predict the output of an Unsupervised Learning algorithm. Unsupervised Learning algorithms can be applied to the examples below: — Given a set of news articles found on the web, group them into sets of articles about the same story. — Given a database of customer data, automatically discover market segments and group customers into different market segments. Lastly… I hope that this article has helped you to ease into the topic of Machine Learning. I love feedback (positive and negative) so please let me know what you think — write a response or just hit the clap button and share this post with friends and colleagues. Thanks for reading!
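As promised above, here is a small illustration of the market segmentation example, again with scikit-learn and invented data: k-means discovers the segments without being given any labels.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy customer data: [annual spend, visits per month]; values are made up.
customers = np.array([[200, 1], [220, 2], [1500, 8],
                      [1700, 9], [800, 4], [760, 5]])

# No labels are provided; KMeans groups the customers into three segments.
segments = KMeans(n_clusters=3, random_state=0).fit_predict(customers)
print(segments)  # e.g. [0 0 1 1 2 2]: each customer's discovered segment
```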
Introduction to Machine Learning for beginners
144
introduction-to-machine-learning-for-beginners-1cce26966b5c
2018-07-24
2018-07-24 10:58:26
https://medium.com/s/story/introduction-to-machine-learning-for-beginners-1cce26966b5c
false
786
We celebrate and inspire female programmers and general tech lovers across Africa by telling their story, involving them in code classes and also helping them share their knowledge and ideas through articles.
null
Shecodeafrica
null
She Code Africa
shecodeafrica@gmail.com
shecodeafrica
TECHNOLOGY,WOMEN IN TECH,EDUCATION,CREATIVE WRITING,PROGRAMMING
SheCodeAfrica
Machine Learning
machine-learning
Machine Learning
51,320
Mariam Olajumoke Garba
#Artificial Intelligence #Robotics #Agritech #GirlWhoCodes #Islam #Optimist
c66419cdf59c
mokeam
105
39
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-11
2018-08-11 00:21:53
2018-08-11
2018-08-11 00:59:46
3
false
en
2018-08-13
2018-08-13 23:41:44
2
1cce5f243b28
7.089623
1
0
0
Intelligence, one of the most important thing humans have acquired during the evolution. It is our ability to think abstractly, reason…
5
Computer Vision: Human Intelligence to Artificial Intelligence Intelligence is one of the most important things humans have acquired during evolution. It is our ability to think abstractly, reason, establish correlations between experiences and convert them into knowledge, be creative, predict and, most importantly, generate emotions. The human brain is considered one of the most complex and fascinating structures in the known universe because it is empowered by intelligence. If we look more closely at the relation between the brain and intelligence, we can see that the brain is more like a tool which gathers information from all of our senses and helps our intelligence enrich its capabilities. The human brain is an organ like any other, but intelligence is complex, and we are still trying to understand the intelligence aspect of the brain. For example, when we decide to move our hand away from a hot plate, it is actually our intelligence which instructs the brain to move the hand; that is why people with paralysis or in a coma may not be able to control their bodies but still show all the signs of intelligence. In fact, our intelligence is us: distinct from our body and unique compared to other human individuals. https://www.pexels.com/photo/person-holding-black-pen-1020325/ Artificial Intelligence is the science and engineering of making intelligent machines, as suggested by John McCarthy, who first coined the term in 1956. The modern era has evolved the definition of AI in the context of computers, treating it as the domain of making intelligent programs for computers or computer-operated systems. Just like the human brain, a computer develops its intelligence from collected information, which we call data. The human brain collects data through the basic senses, i.e. sight, hearing, taste, touch, smell and proprioception; our intelligence differs for each type of sense, and our decisions and actions are outcomes of our collective intelligence. It is also important to observe that sight, or vision, is usually our most dominant sense, and our collective intelligence is highly dependent on visual information. We can understand the context of a scene just by looking at it. We can perceive the three-dimensional world around us and differentiate shapes with ease. Researchers have found the idea of developing computer intelligence from visual information very interesting, and this has led to a domain of AI called Computer Vision. Formally, computer vision is a sub-domain of AI in which computers are made intelligent enough to collect visual information from the real world in the form of images or videos and develop a high-level understanding of the world. Learning is the task of converting collected information or data into intelligence, knowledge or expertise. Researchers have found that it is easy to develop artificial intelligence where the relation between the collected information and the decision or action taken can be represented by a set of mathematical rules. The hard part is the decisions or actions we take intuitively. The bottleneck of AI is finding mathematical representations of our intuitive decisions or actions with respect to the collected information, such as understanding speech or recognizing objects under deformation. The goal is to develop and understand learning algorithms for such intuitive decisions taken based on visual information.
Let’s understand different keywords like Deep Learning, Machine Learning, Knowledge Base Learning and Representation Learning, which we frequently use in this particular domain, and how they are related to AI. To understand this, let’s consider an example where we want to develop a program which can identify the difference between cats and dogs. The very first step is the collection of information or data. To collect data, let’s assume that we physically get a thousand cats and a thousand dogs, randomly, from the market. Next, we can start collecting information either in the form of textual descriptions or by taking images of each. As we want to get into computer vision, we take images as data, which makes our problem an Image Classification problem. So at this point I have a total of two thousand images of cats and dogs. We need a few samples to test our performance, so we divide our two thousand samples into 80% (1,600 images) for training, called the Training Set, and 20% (400 images) for testing, called the Testing Set. The first approach we can come up with is to take each image from our test set and start matching it with every single image in the training set. If we are able to find a matching image in the training set, we assign a class label, i.e. cat or dog, the same as that of the matched image. This approach to AI is called Knowledge Base Learning, where the training set acts as the knowledge or intelligence of our program (sketched in code below). The problem with this approach is that we need a huge training set to cover all the images in the test set. There is also the possibility of encountering a species of cat or dog which is not part of our training set, or of coming across an image taken from a different angle. To solve the problems associated with knowledge base learning, we can find significant pieces of information known as features, e.g. ears, nose, face, paw etc., which help us as humans to tell cats and dogs apart, and code functions to find these features in an image. This set of features will be our intermediate training data. We create a program which takes this set of features as input and has the ability to extract the hidden patterns from it and associate them with a label. This approach to AI is called Machine Learning, where the AI has the capability to find hidden patterns in a given set of features representing the data. Despite its success in certain domains like numerical data analysis, predicting types of cancer, predicting stock prices etc., machine learning was not enough to develop an AI which can understand objects in an image. The reason behind this limitation is our own inability to understand and identify the exact set of features which can collectively represent a given object. The manual process of finding the set of features is called Feature Engineering. In other words, machine learning systems depend heavily on the effectiveness of the methods chosen during feature engineering. Illustration of a VGG16 network for an image of a cat and a dog. It shows how Deep Learning divides the task into different representations: the representations at the top show lower-level features like edges, and the images at the bottom show higher-level features like contours. The difficulties faced by systems based on machine learning suggest that an AI system needs the ability not only to acquire knowledge by extracting hidden patterns from given data but also to acquire a set of features to represent the given data points.
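The "match against every stored image" strategy above is essentially a 1-nearest-neighbor classifier. Here is a toy sketch of it with random stand-in pixel data (the shapes and labels are assumptions, not the real photos from the experiment):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: each "image" is a flattened 64x64 grayscale pixel vector;
# in the real experiment these would be the 1,600 training photos.
rng = np.random.RandomState(0)
train_images = rng.rand(1600, 64 * 64)
train_labels = rng.randint(0, 2, 1600)   # 0 = cat, 1 = dog

# With k=1, the training set itself is the knowledge base: a query is
# answered by the label of the single closest stored example.
knowledge_base = KNeighborsClassifier(n_neighbors=1)
knowledge_base.fit(train_images, train_labels)

test_image = rng.rand(1, 64 * 64)
print(knowledge_base.predict(test_image))  # label of the closest match
```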
To solve the problems associated with knowledge-base learning, we can identify significant pieces of information known as features, e.g. ears, nose, face or paws, which help us as humans tell cats and dogs apart, and code functions to extract these features from an image. This set of features becomes our intermediate training data. We then create a program which takes the features as input and can extract the hidden patterns in them to associate them with a label. This approach to AI is called Machine Learning, where the system can find hidden patterns in a given set of features representing the data. Despite its success in certain domains, such as numerical data analysis, predicting types of cancer or predicting stock prices, machine learning was not enough to develop an AI that can understand objects in images. The reason is our own limited ability to identify the exact set of features which can collectively represent a given object. The manual process of finding the set of features is called Feature Engineering; in other words, machine learning systems depend heavily on the effectiveness of the methods chosen during feature engineering. Illustration of the VGG16 network for an image of a cat and a dog. It shows how Deep Learning divides the task into different representations: the representations at the top show lower-level features like edges, and the bottom images show higher-level features like contours. The difficulties faced by systems based on machine learning suggest that an AI system needs the ability not only to acquire knowledge by extracting hidden patterns from given data, but also to acquire the set of features that represents the data. This ability to acquire a set of features to represent the given data is called Feature Learning, and it replaces the manual feature engineering step. This approach to AI is called Representation Learning, where we use machine learning both to extract features and to establish a mathematical relation between the extracted features and the output labels. In the context of our experiment, this is when we decide to feed in the training images as they are and let our program figure out the best set of features to classify them, for example by coming up with generalized, high-level, abstract features like contours which represent the shapes of cats and dogs in minute detail. The complexity of representation learning has given rise to an approach to AI called Deep Learning, where we represent this high-level, abstract representation as a collection of internally correlated simpler representations. It is similar to forming a deep graph which captures the contextual correlation between simpler representations of the given image. For example, a given image can be represented as a set of small parts, each part as a set of contours, each contour as a set of edges, and so on. Refer to the figure for the layer visualization of the deep image classification network VGG16. When and Why Do We Use Deep Learning Deep Learning is like a big hammer, which not every nail requires. Every deep-learning-based approach needs a decent amount of resources to implement in research, and considerably more to use in production. That is why it is very important to understand which types of problems require a deep learning approach and when we can solve the given problem with simple machine learning tools. In general we can divide problems by two criteria: the complexity of selecting features, and the requirements for generalization or adaptivity. Consider a problem where a birdwatcher wants to understand the food behavior of five different bird species based on the following information: (i) length of beak, (ii) shape of beak, (iii) an image of each bird, (iv) the sound of each bird, (v) labels for the food behavior category. The first step is to understand the given problem: we want to classify birds according to their food behavior. A basic review of the domain shows a strong relation between the type of beak (length and shape) and the food behavior of a bird. There can be different approaches to this problem. Using Machine Learning: take beak length and beak shape as features and design a classifier which assigns birds to classes according to their food behavior. Using Deep Learning: take the images of the birds and classify them directly into classes according to their food behavior. In this particular problem the machine learning approach is easier and should be more accurate, because the features we provide are exactly the right features for classifying birds by food behavior. Next, our birdwatcher asks us to group birds by similarity in voice and appearance. Approaching this problem, we realize that birds' sounds change with time, weather and mood, and it is likewise very hard to pin down unique features that describe and differentiate appearance. For this type of problem, Deep Learning is the more suitable tool.
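For the birdwatcher's first, feature-based route, a few lines of scikit-learn would suffice; this is only a sketch, with made-up data, assuming beak length and a numeric encoding of beak shape are available as columns.

from sklearn.tree import DecisionTreeClassifier

# Features: [beak_length_cm, beak_shape_code]; labels: food behavior.
X = [[3.1, 0], [8.5, 2], [2.9, 0], [6.0, 1], [7.8, 2]]
y = ["seeds", "fish", "seeds", "insects", "fish"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[3.0, 0]]))  # -> ['seeds']

The deep-learning route would instead consume the raw bird images and learn its own features, at a much higher resource cost.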
To summarize, Deep Learning is great for: Problems which require a large number of features, where it is hard to describe each feature, e.g. recognizing voices or faces. Problems which require inter-correlated features to build higher-level understanding, e.g. semantic analysis of sentences, object tracking and segmentation, document summarization. Problems which require high adaptiveness. REFERENCES LeCun, Y., Bengio, Y. and Hinton, G., 2015. Deep learning. Nature, 521(7553), pp.436–444. [PDF]
Computer Vision : Human Intelligence to Artificial Intelligence
1
computer-vision-human-intelligence-to-artificial-intelligence-1cce5f243b28
2018-08-13
2018-08-13 23:41:44
https://medium.com/s/story/computer-vision-human-intelligence-to-artificial-intelligence-1cce5f243b28
false
1,733
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Viral Thakar
AI and Deep Learning Researcher @ Dataperformers
556bca3d2920
viralbthakar
46
44
20,181,104
null
null
null
null
null
null
0
import torch
import torch.nn as nn
from torch.autograd import Variable

# Create tensors.
a = Variable(torch.Tensor([1, 2]), requires_grad=True)
b = Variable(torch.Tensor([0, 2]), requires_grad=True)

# Build a computational graph: the dot product of a and b.
x = a @ b
print(x)  # prints 4

# Compute gradients.
x.backward()

# Print out the gradients (dx/da = b).
print(a.grad)  # prints 0 2

# Neural Network Model (1 hidden layer)
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Example hyperparameters (values assumed for illustration).
input_size, hidden_size, num_classes = 784, 500, 10
learning_rate = 0.001

net = Net(input_size, hidden_size, num_classes)

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
26
null
2018-03-01
2018-03-01 01:23:19
2018-03-01
2018-03-01 01:26:19
7
false
en
2018-03-01
2018-03-01 01:26:19
2
1cce67dd067e
2.661321
12
0
0
PyTorch was rewritten and tailored to be fast and feel native in Python. PyTorch is not a Python binding into a monolithic C++ framework.
5
Pytorch 101 PyTorch was rewritten and tailored to be fast and feel native in Python. PyTorch is not a Python binding into a monolithic C++ framework. Benefits Imperative Programming This key feature allows you to change the computation as you type it. Computation is defined at runtime, like most Python code, while TensorFlow and other frameworks require the user to define the model first and then compute it. TensorFlow was made for engineers, while PyTorch was made for researchers, meaning PyTorch is more flexible at the price of a small drop in efficiency. Graphs are created on the fly Since computation graphs are built at run time, they are more efficient for RNNs, and this makes debugging very easy. Basic example # 1. Basic autograd example 1 Prints 4: taking the dot product of a with b using the new Python 3.5 syntax. Once you are done, all you need to do is call #backward() on the result. This will calculate the gradients, and you will be able to access them for Variables that were created with requires_grad = True. Prints 0 2 Single Layer Neural Network Looks pretty simple, right? TensorFlow vs PyTorch PyTorch is still a young framework which is gaining momentum fast. You may find it a good fit if you: Do research or your production non-functional requirements are not very demanding Want a better development and debugging experience Love all things Pythonic TensorFlow is a good option if you: Develop models for production Develop models which need to be deployed on mobile platforms Want good community support and comprehensive documentation Want rich learning resources in various forms (TensorFlow has an entire MOOC) Want or need to use TensorBoard Need large-scale distributed model training But if you are still very new to deep learning and would just like to know what it means, Keras is the way to go. Learn More If you are a developer who wants to learn PyTorch and deep learning, I suggest this course: Fast.ai View original post at my blog
Pytorch 101
30
pytorch-101-1cce67dd067e
2018-06-13
2018-06-13 13:48:37
https://medium.com/s/story/pytorch-101-1cce67dd067e
false
427
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gautham Santhosh
Maker 🚀 Student 🎓
96ae15eefa1e
gauthamsanthosh
548
85
20,181,104
null
null
null
null
null
null
0
null
0
aba8d2d0507c
2018-03-25
2018-03-25 08:40:44
2018-03-25
2018-03-25 09:21:25
2
false
en
2018-03-27
2018-03-27 12:29:40
5
1ccf150a06c6
2.462579
6
0
0
How to make your home assistant play intro music for you, keeping you cheerful and annoying your housemates.
5
Why everyone needs a virtual cheerleader How to make your home assistant play intro music for you, keeping you cheerful and annoying your housemates. When I walk into my living room, my Google Home plays a randomised hype music track, like the theme from Rocky, as if I were a boxer or skinny wrestler. This is a fantastic way for me to start the day feeling positive, and it also winds up my housemate, so double-win. In today's short post I'm going to explain how I did it and why you should try having a virtual cheerleader. Although with Dialogflow Google is making it insanely easy for non-technical people to build Assistant apps, I actually fudged this together without any tools or programming knowledge. Here's how: Ingredients 2 Google Homes in separate rooms (one will do though) 1 mobile device with the Google Assistant app (I used my iPhone) 1 Spotify playlist of songs that are punchy in the first 3 seconds (here's one I made to get you going) 1 dressing gown with a hood 1 door to burst through 1 person that you like to annoy Instructions Step 1: Create or follow that Spotify playlist, so Google knows what to play. Step 2: Open up the Google Assistant app and navigate to the 'Your Stuff' section (on iOS, this is via the little blue icon on the top right). Step 3: Type or paste the following into your shortcuts: When I say… 'Intro me' (or whatever command you like) The Google Assistant should… 'Shuffle the playlist <playlist name> on Living Room' ('Living Room' is the name of the Google Home in my living room) Step 4: Test it out. You may want to try different commands that come naturally to you and don't confuse Ms Google. Step 5: Wait until the morning, put on your dressing gown and use the command on your second device. Step 6: Make your entrance to the living room like the unstoppable champ that you are! This (fairly puerile) exercise got me thinking about another role that a virtual assistant can play (last week's was a wise old butler): that of an eternally supportive friend. It doesn't matter what you do; your assistant will listen to you and provide encouragement. In this example, all I've done is create some automation, so I'm effectively cheering myself on. Yup, this is a pre-programmed gift from past Andrew to present Andrew to say 'keep it up man'. Nothing wrong with a bit of self-cheering! But what if, in the future, your assistant device asked you or sensed how you were feeling? Tracking moods and keeping thought diaries help a lot of people to get through the ups and downs of life. There are plenty of apps out there that do this, and it feels like voice is a logical progression. As long as privacy is taken care of, this could be a low-friction way to record how you are feeling and then, unburdened, burst into the kitchen to Eye of the Tiger (feeling like Glen, below). Thanks for reading! Who's already thinking and writing about AI metaphors? Is there a smarter way to set up my intro music on Google Home? I'd love to find out more. Find me on Twitter or LinkedIn.
Why everyone needs a virtual cheerleader
14
why-everyone-needs-a-virtual-cheerleader-1ccf150a06c6
2018-03-27
2018-03-27 12:29:42
https://medium.com/s/story/why-everyone-needs-a-virtual-cheerleader-1ccf150a06c6
false
551
A publication for designers in New York and followers all around the world.
null
newyorkcitydesign
null
NYC Design
hello@wolony.com
nyc-design
NEW YORK CITY,DESIGN,NYC,USER EXPERIENCE,UX DESIGN
nycdesignmedium
Smart Home Automation
smart-home-automation
Smart Home Automation
1,077
Andrew Muir Wood
Product research & strategy chap | Previously Product/Growth @findpace, Insights @DueDil | Google Design Expert | Start-up mentor/investor | Doodler @muirdoodle
b222372c34b1
muirwd
352
427
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-20
2018-07-20 01:37:52
2018-07-20
2018-07-20 02:34:21
8
false
en
2018-08-06
2018-08-06 03:14:13
6
1ccf3f39fd7f
5.854088
0
0
0
Can Ethereum be of use to the humanitarian community and help power tomorrow's IATI-data-driven, artificially intelligent applications…
4
White Paper: Leveraging Ethereum to Power IATI-Data-Driven Applications Can Ethereum be of use to the humanitarian community and help power tomorrow's IATI-data-driven, artificially intelligent applications? IATI.AI is exploring the possibility. What's IATI? IATI is an open data-sharing standard and technical framework used by over 800 humanitarian and development organizations and donors to make detailed information about aid activities, transactions and results more transparent and accessible to machine applications. IATI is managed by the International Aid Transparency Initiative, supported by the United Nations and mandated by a growing number of government development agencies. On a technical level, IATI is comparable to a similar XML-based standard and framework called NewsML. NewsML is used by news organizations to exchange news, event and sports information, whereas IATI is for exchanging information about humanitarian operations in the field, benefitting refugees for example. Information streaming through IATI is highly structured and ideally suited to power machine applications. This includes today's mobile applications and emerging artificially intelligent applications and digital assistants like Siri and Alexa. IATI.AI MIT Solve is a social-good accelerator supporting solutions to global challenges like "How can communities invest in frontline health workers and services to improve their access to effective and affordable care?" IATI.AI is an all-volunteer MIT Solve solution launched to develop the training datasets and algorithms that artificially intelligent applications need to process information reported by frontline health organizations. The initiative aims to improve frontline health capacity by improving transparency and making vital data accessible to community members through their smart, IATI-connected mobile applications. Exploring Ethereum Can Ethereum be of use to the humanitarian community and help power tomorrow's IATI-data-driven applications? Before exploring the possibility, it's important to consider: How IATI works on a technical level What sort of network IATI is Whether IATI is blockchain compatible How IATI could potentially benefit from blockchain How IATI Works IATI operates a registry that stores the web addresses, or URLs, of activity files that aid organizations have published on their web servers in compliance with the IATI Standard. All IATI files registered with IATI are stored in machine-readable XML. Below is part of an aid activity file published by Relief International UK via a third-party publishing utility called Aid Stream. IATI information fields (red) and principal data (blue). Source file. To make information comparable, organizations report activity details using information fields standardized by IATI. Currently, the IATI Standard contains over 200 information fields (XML elements and attributes) categorized into 36 sections. Organizations aren't required to use all of these, but the fields make it possible to report a wide range of activities, funding flows and relationships in granular detail, turning IATI into a highly structured, broad and easily traversable informational matrix valuable to humanitarian organizations and software developers alike.
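Because IATI files are plain, machine-readable XML, consuming them takes only a few lines of code. Here is a minimal sketch in Python: the element names follow the IATI 2.x conventions described above, but the file URL is hypothetical and the fields shown are only a small subset.

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical URL of a published IATI activity file.
URL = "https://example.org/iati-activities.xml"

with urllib.request.urlopen(URL) as resp:
    root = ET.fromstring(resp.read())

# Walk the standardized information fields of each activity.
for activity in root.findall("iati-activity"):
    identifier = activity.findtext("iati-identifier")
    title = activity.findtext("title/narrative")
    for tx in activity.findall("transaction"):
        print(identifier, title, tx.findtext("value"))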
In general, high-level terms, IATI can thus be viewed as: A network linking humanitarian and development organizations, donors, government agencies and third-party applications, A dataset spread across a population of humanitarian nodes, And a set of standardized information fields. Centralized or Distributed Network? IATI looks and acts like a centralized network. Structurally, the IATI Registry acts as a hub, and without the registry it would be hard for the network to effectively exist and function. Administratively, IATI, the IATI Registry and the IATI Standard are managed by a single entity, the International Aid Transparency Initiative. Also, although the International Aid Transparency Initiative doesn't own or store IATI data per se, it does maintain an auxiliary API which is being updated to provide machine applications with greater access to IATI's entire corpus, making the initiative a data endpoint and gatekeeper as well. However, at heart, IATI merely facilitates the sharing of information between autonomous nodes linked together by a virtual network. The network exists because humanitarian organizations, donors and other stakeholders have agreed to regularly maintain up-to-date records of all or a portion of their activities online and to use the same reporting taxonomy. In many ways, IATI can alternately be viewed as a kind of distributed network or peer-to-peer style database. The entire IATI dataset exists in pieces openly stored on servers across the internet. Blockchain Compatibility Viewed as a kind of distributed network, in blockchain terms the IATI Registry is simply an administrative node that the network has entrusted to store the definitive and most current record of all the files published on the network and when they were last updated. Presently, the IATI Registry is refreshed every 24 hours. Compatibility-wise, there is nothing preventing IATI from storing the URL addresses of all existing IATI files, along with metadata on when they were last updated, in a blockchain. Likewise, there is nothing preventing IATI from storing a new snapshot of the entire IATI corpus in blocks every 24 hours either. Blockchain Benefits How could storing URLs, snapshots of the entire IATI corpus or snapshots of changes benefit IATI and the humanitarian community? Because IATI files are regularly updated, blockchain could play a role in providing organizations and the broader humanitarian community with a permanent record of file URLs and of when the files were published and/or edited. If IATI's entire corpus were stored in blocks, then an archive could be maintained of all file details and changes. It's conceivable that doing either could help improve transparency and incentivize organizations to keep their files up to date, as well as give organizations and stakeholders ways to monitor publishing and double-check current file details. Next, storing URLs or IATI's entire corpus in blocks could help keep publishers and applications informed of changes across the network. As IATI grows exponentially, keeping organizations and applications current could prove a challenge which blockchain could help address. However, there is no reason IATI can't serve the same information directly to users via an API.
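One way to picture the 24-hour snapshot idea is to commit each day's registry state to a single hash, chained to the previous day's block. This is only a sketch: the (url, last_updated) pairs and the chaining scheme are assumptions for illustration, not an IATI or Ethereum design.

import hashlib
import json

# Hypothetical registry snapshot: (file URL, last-updated timestamp).
snapshot = [
    ("https://example.org/org-a/iati-activities.xml", "2018-07-19T00:00:00Z"),
    ("https://example.org/org-b/iati-activities.xml", "2018-07-18T12:30:00Z"),
]

def block_hash(snapshot, prev_hash):
    # Sort for a canonical ordering, then hash the payload together
    # with the previous block's hash to form the chain.
    payload = json.dumps(sorted(snapshot)) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

print(block_hash(snapshot, prev_hash="0" * 64))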
Exploring Ethereum IATI.AI is chiefly interested in developing IATI data-processing algorithms capable of powering artificially intelligent applications and digital assistants like Alexa. However, because applications need to plug into data sources before they can execute processing functions, IATI.AI is also interested in how IATI data can be made accessible to applications and how this data can be formatted. Can IATI network nodes keep up to date via a blockchain? Looking ahead, the initiative is interested in how blockchain could be deployed by IATI. For example, could IATI network nodes each store an entire IATI corpus, and could these remote datasets be kept up to date via blockchain? Study Questions IATI.AI intends to explore Ethereum and carry out testing to answer some basic questions relevant to whether Ethereum could indeed be used by IATI to channel aid activity information to artificially intelligent applications and keep applications up to date. The initiative intends to explore the following questions: Could IATI's entire corpus be stored on Ethereum? Alternately, could an IATI change log be stored on Ethereum? Could the IATI Registry act as a network administrative node, and how so? How could nodes share, validate and keep data up to date? How could nodes publish file changes and send change records to IATI? How frequently can or should the blockchain be refreshed? Could the network benefit from other administrative nodes, and how so? What would it take to set up and deploy an experimental IATI network on Ethereum? What would a user interface look like, and how could it operate? What could go wrong relative to serving IATI data via Ethereum? We hope that researching and answering these questions will help play a role in improving IATI and, in turn, IATI's ability to channel data to artificially intelligent applications.
White Paper: Leveraging Ethereum to Power IATI-Data-Driven Applications
0
white-paper-leveraging-ethereum-to-power-iati-data-driven-applications-1ccf3f39fd7f
2018-08-06
2018-08-06 03:14:13
https://medium.com/s/story/white-paper-leveraging-ethereum-to-power-iati-data-driven-applications-1ccf3f39fd7f
false
1,251
null
null
null
null
null
null
null
null
null
Blockchain Technology
blockchain-technology
Blockchain Technology
13,452
brent phillips
Tech for good project manager and veteran humanitarian relief worker into open data sharing, AI, blockchain and humanitarian financing technology
cbf60ca1802f
brentophillips
28
60
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-08
2018-03-08 16:34:29
2018-03-09
2018-03-09 12:13:37
0
false
en
2018-03-09
2018-03-09 12:14:21
2
1ccfac764b97
1.079245
0
0
0
“Garry Kasparov, World Chess Champion for nearly twenty years and perhaps the strongest chess player of all time, had a different approach…
5
Here is a Quote I’m thinking about - Kasparov Chess! “Garry Kasparov, World Chess Champion for nearly twenty years and perhaps the strongest chess player of all time, had a different approach to his emotions. Kasparov was a fiercely aggressive chess player who thrived on energy and confidence. My father wrote a book called Mortal Games about Garry, and during the years surrounding the 1990 Kasparov-Karpov match, we both spent quite a lot of time with him. At one point, after Kasparov had lost a big game and was feeling dark and fragile, my father asked Garry how he would handle his lack of confidence in the next game. Garry responded that he would try to play the chess moves that he would have played if he were feeling confident. He would pretend to feel confident, and hopefully trigger the state. Kasparov was an intimidator over the board. Everyone in the chess world was afraid of Garry and he fed on that reality. If Garry bristled at the chessboard, opponents would wither. So if Garry was feeling bad, but puffed up his chest, made aggressive moves, and appeared to be the manifestation of Confidence itself, then opponents would become unsettled. Step by step, Garry would feed off his own chess moves, off the created position, and off his opponents’ building fear, until soon enough the confidence would become real and Garry would be in flow. If you think back to the chapter Building Your Trigger and apply it to this description, you’ll see that Garry was not pretending. He was not being artificial. Garry was triggering his zone by playing Kasparov chess.” Waitzkin, Josh. “The Art of Learning: A Journey in the Pursuit of Excellence”
Here is a Quote I’m thinking about - Kasparov Chess!
0
here-is-a-quote-im-thinking-about-kasparov-chess-1ccfac764b97
2018-03-09
2018-03-09 12:14:22
https://medium.com/s/story/here-is-a-quote-im-thinking-about-kasparov-chess-1ccfac764b97
false
286
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ismail Ali Manik
Uni. of Adelaide & Columbia Uni NY alum; World Bank, PFM, Global Development, Public Policy, Education, Economics, book-reviews, MindMaps, @iamaniku
6a8552d04dc7
ismailalimanik
123
740
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-02-12
2018-02-12 00:09:16
2018-02-12
2018-02-12 00:12:34
3
false
en
2018-02-12
2018-02-12 00:12:55
2
1cd226ab6405
3.066981
5
0
0
Slogans such as “Rise Early to Farm Pigs, Call AI to Help!” and “Excel in Intelligent Pig Farming, Marry a Pretty Wife Early!” are…
4
Alibaba City Brain Goes Rural: AI Pig Farming in Sichuan Slogans such as “Rise Early to Farm Pigs, Call AI to Help!” and “Excel in Intelligent Pig Farming, Marry a Pretty Wife Early!” are appearing on walls across China’s southern countryside provinces. The campaign is part of Sichuan pig farming corporation Dekon Group and pig feed supplier Tequ Group’s new partnership with Alibaba Cloud to apply its AI-powered “ET Brain” to pig farming. The trio are investing tens of millions of USD in the project, which was announced on February 6, 2018. Over the past year, Alibaba has implanted its ET Brain in the aerospace, transportation, environment, and healthcare sectors, fast-tracking China’s social infrastructure revolution. Recent food contamination scandals have made food safety a pressing issue in China, and the agricultural industry a logical next application for ET Brain. China’s pork production accounts for more than half of the world supply, while its per capita pork consumption ranks 3rd. By 2020, Tequ Group sales will exceed 10 million tons, while Dekon will breed up to 10 million pigs annually. This is an opportunity for artificial intelligence to optimize operations, and both companies are actively building their IoT and Enterprise Resource Planning (ERP) systems. On pig farms, each pig wears a wireless radio-frequency identification (RFID) tag. These are pricey and difficult to scan, so farmers must individually log data into mobile applications or fill in paper forms. This is where computer vision and voice recognition AI can help. Real-time video footage is collected through surveillance cameras. Using computer vision, ET Brain will set up profiles for each pig, documenting their breed, age, weight, eating conditions, exercise intensity and frequency, and movement trajectory. The first phase of the launch includes functions such as herd behavior analysis, inventory count, health monitoring, and automatic weighing. One challenge is telling pigs apart — Alibaba considered applying facial identification to pigs, to no avail. Instead they tattooed identifying numbers on the pigs’ bodies. ET Brain tracks the entire production line. Based on behavior tracking, gilts are selected for mating. After they give birth to piglets, usually in a litter of ten, ET Brain will use voice recognition to ensure the little ones are not suffocated by their mothers’ weight. This lowers the death rate by 3%, increasing the annual production rate by three piglets per sow. Piglets are sometimes crushed to death by their “negligent” mothers when feeding; voice recognition tracks the noise each piglet makes and triggers intervention in case of suffocation. Aside from breeding, feeding and weighing, other important steps in pig farming are disease control and epidemic monitoring. ET Brain will analyze pigs’ behavior, acoustic characteristics and infrared temperature measurements to determine the health status of pigs, targeting epidemic early-warning signs and specialized vaccinations. Alibaba Cloud has dispatched algorithm engineers, product developers, and a video analytics team to Sichuan to work on the project, while Tequ will add experts on pig farming. “Our core solution is to reduce the reliance on farmers and dependence on equipment through automated video analytics,” explains Alibaba Cloud’s Sheng Zhang, who added that the use case is highly replicable. Alibaba’s AI solutions have thus far been more widely deployed in urban environments.
Its “ET City Brain” focuses on improving China’s urban infrastructure using capabilities such as voice, image and text recognition and natural language processing. Last month, the company announced it will deploy ET City Brain in the Malaysian capital of Kuala Lumpur. However, it is also worth considering the appropriateness of AI applications for less developed areas. Today, 43% of the Chinese population lives in the countryside. Granting this rural economy access to AI is a difficult but important task, especially for underdeveloped industries like pig farming that have limited access to new technology. Journalist: Meghan Han | Editor: Michael Sarazen Dear Synced reader, the upcoming launch of Synced’s AI Weekly Newsletter helps you stay up-to-date on the latest AI trends. We provide a roundup of top AI news and stories every week and share with you upcoming AI events around the globe. Subscribe here to get insightful tech news, reviews and analysis!
Alibaba City Brain Goes Rural: AI Pig Farming in Sichuan
37
alibaba-city-brain-goes-rural-ai-pig-farming-in-sichuan-1cd226ab6405
2018-04-25
2018-04-25 18:14:10
https://medium.com/s/story/alibaba-city-brain-goes-rural-ai-pig-farming-in-sichuan-1cd226ab6405
false
667
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
// Cypher: fetch one example review pattern
MATCH g=(:PERSON) -[:WROTE]-> (:REVIEW) -[:OF]-> (:PRODUCT)
RETURN g LIMIT 1

# Shell: generate the synthetic dataset
./generate.sh --dataset article_1

# Allocate storage for the estimations
product = tf.get_variable("product", [n_product, embedding_width])
person = tf.get_variable("person", [n_person, embedding_width])

# Retrieve the embedding tensors
product_emb = tf.nn.embedding_lookup(product, product_id)
person_emb = tf.nn.embedding_lookup(person, person_id)

# Dot product
m = tf.multiply(product_emb, person_emb)
m = tf.reduce_sum(m, axis=-1)
m = tf.expand_dims(m, -1)  # So this fits as input for dense()

# A dense layer to fit the score to the range in the data
review_score = tf.layers.dense(m, (1), tf.nn.sigmoid)

# Loss, training op and evaluation metrics for the EstimatorSpec
loss = tf.losses.mean_squared_error(pred_review_score, label_review_score)
train_op = tf.train.AdamOptimizer(params["lr"]).minimize(loss)
eval_metric_ops = {
    "accuracy": tf.metrics.accuracy(pred_review_score, label_review_score)
}
return tf.estimator.EstimatorSpec(
    mode, loss=loss, train_op=train_op, eval_metric_ops=eval_metric_ops
)

// Cypher: fetch the training/test rows
MATCH p= (person:PERSON)
    -[:WROTE]-> (review:REVIEW {dataset_name:"article_1", test:{test}})
    -[:OF]-> (product:PRODUCT)
RETURN person.id as person_id,
    product.id as product_id,
    review.score as review_score

# Run the query and format each row as (input_dict, label)
raw_data = session.run(query, **query_params).data()

def format_row(i):
    return (
        {
            "person": {
                "id": self._get_index(i, "person"),
                "style": i["person_style"],
            },
            "product": {
                "id": self._get_index(i, "product"),
                "style": i["product_style"],
            },
            "review_score": i["review_score"],
        },
        i["review_score"]
    )

data = [format_row(i) for i in raw_data]

# Build a shuffled, batched Dataset from a generator
t = tf.data.Dataset.from_generator(
    lambda: (i for i in data), self.dataset_dtype, self.dataset_size
)
t = t.shuffle(len(self))
t = t.batch(batch_size)

# Wire everything into the Estimator API and train
input_fn = lambda: data.gen_dataset()
estimator = tf.estimator.Estimator(model_fn, model_dir, vars(args))
train_spec = tf.estimator.TrainSpec(data_train.input_fn)
eval_spec = tf.estimator.EvalSpec(data_eval.input_fn, steps=None)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Shell: view training progress
tensorboard --logdir ./output
20
3464c8526e06
2018-04-23
2018-04-23 10:46:40
2018-04-25
2018-04-25 06:36:42
10
false
en
2018-04-25
2018-04-25 06:36:42
30
1cd33996632a
11.582075
43
1
1
We show how to create an embedding to predict product reviews, using the TensorFlow machine learning framework and the Neo4j graph…
5
Review prediction with Neo4j and TensorFlow We show how to create an embedding to predict product reviews, using the TensorFlow machine learning framework and the Neo4j graph database. It achieves 97% validation accuracy. Introduction A common problem in business is product recommendation. Given what a person has liked so far, what should we suggest they purchase next? Just as a waiter asking if you’d like another drink drives higher revenues, so do successful recommendations. There are many approaches to recommendation. We’re going to focus on review prediction: given a product a person has not reviewed, what review would they give it? We can then recommend to that person the products we predict they will favorably review. The code for the completed system is available in our GitHub. The technologies we’ll use Neo4j We’re going to use a graph database as the data source for this system. Graph databases are a powerful way to store and analyze data. Often the relationships between things, for example between people, are as important as the properties of those things themselves. In a graph database it’s easy to store and analyze those relationships. In this review prediction system we’ll be analyzing the network of reviews between different people and different products. We’ll use Neo4j as our graph database. Neo4j is a popular, fast and free-to-use graph database (we provide a hosted database for this article’s dataset to save you having to set one up for yourself). TensorFlow For the machine learning part of this system, we’ll use TensorFlow. TensorFlow is primarily a model building and training framework, letting us express our model and train it on our data. TensorFlow has quickly become one of the most popular and actively developed machine learning libraries. TensorFlow can save a lot of development time. Once a model has been built in TensorFlow, the same code can be used for experimentation, development and deployment to production. Platforms like Google’s CloudML provide model hosting as a service, serving your model’s predictions as a REST API. The problem We’re going to be predicting product reviews. In our world there are people who write reviews of products. Here’s what this looks like in a graph: In a graph database we can query information based on patterns. Neo4j, the database we’ll use here, uses a query language called Cypher. The above graph was generated by a simple query: This looks for a node of label PERSON, with a relationship of label WROTE, to a node of label REVIEW, with a relationship of label OF, to a node of label PRODUCT. The qualifier “LIMIT 1” asks the database to return just one instance that matches this pattern. Neo4j implements a property graph model, in which nodes and relationships can have properties. This is a really flexible model, allowing us to conveniently put data where we want. The dataset we’ll train on Our dataset contains 250 people and 50 products. Each person has 40 reviews, giving a total of 10,000 reviews. You can use our hosted database, or generate the data into your own Neo4j instance using our generation codebase: The dataset is synthetic — we generated it ourselves from a probabilistic model. Using a synthetic dataset is a useful technique during model development. If you’re applying an unproven method to unknown data and it fails to train, you cannot tell if the problem is the data or the model. By synthesizing the data, one unknown is removed and you can focus on finding a successful model.
A synthetic dataset has limitations: it lacks the irregularities and errors typical of real-world data. For this learning exercise the synthetic dataset is very useful, but any real-world system will require more steps of cleaning the data and experimenting to find a model that fits it. How we generated the dataset Review nodes have a score property Our synthetic data generation uses a simple probabilistic model. During generation each product and person has a randomly chosen category, and these categories are used to generate review scores. We save the people, products and reviews to the database and discard the category assignments (keeping them would make the review prediction too easy). In more detail: We generate a set of 250 people and 50 products. Each person reviews 40 randomly chosen products. Each person and product has a one-hot encoded vector of width 6. Think of this as choosing one category from six choices. For example, each product can be one of six colors (its style). Each person prefers one of those six colors (their preference). Each review from a person to a product is calculated as the dot product of their vectors, giving 1.0 if they share the same style and preference, or 0.0 otherwise. Finally, we assign the test property to a randomly selected 10% of the reviews. This data is used for evaluating the model, and is not used for training it. Since each person reviews 40 randomly chosen products, it’s highly likely (although not certain) they will review one product of each of the six styles — therefore our review prediction challenge is well-constrained and we should be able to get close to 100%. In academic literature our problem is known as “collaborative filtering”. By combining the reviews of many people (‘collaborative’) we can better recommend products for one person (‘filtering’). Approach We’re going to solve this review prediction problem by estimating a style vector for each product and a preference vector for each person. We’ll predict review scores by taking the dot product of those two vectors (since we know the data was generated using the dot product, it’s easy for us to guess this might be a successful solution). The input to our model is the ID of the person and the ID of the product. The output of the model is the review score. Review prediction is an interesting problem because we do not know the style of each product, nor the preference of each person; therefore we have to determine both simultaneously. A mistake in predicting a product’s style will then cause mistakes in predicting people’s preferences, so solving this is not trivial. It should be noted that this is not “deep learning” (though it is machine learning). We’re using TensorFlow as a convenient framework to train a shallow model via gradient descent. As an aside, adding deep layers to the model defined above has been reported as successful in some academic papers. Implementing our model in TensorFlow (Note: I’ve simplified the code for presentation, removing classes and boilerplate. Check out the full working example with comments for the details.) Embedding variables The first step in our model is to transform person IDs and product IDs into estimated preference and style tensors. Thankfully, this is quite straightforward in TensorFlow.
We’ll store the preference and style estimations for all of the people and products as two variables, of shape [number_of_ids, width_of_tensor]: We can use tf.nn.embedding_lookup(product, product_id) to transform an ID into a tensor of shape [width_of_tensor]. Format of the embedding tensors Each embedding tensor will be floating point with 20 dimensions. Unlike in the data generation, there is no restriction for the value to be one-hot encoded. Together, this design allows the model a lot of room to maneuver during training — this is helpful as gradient descent updates the variables with many small steps, and if it had to make a “big leap” to reach successful variables it might never get there. This design was determined through experimentation and grid search. Model implementation The model for our prediction is just eight lines long: The next step is to wrap up this model in the other pieces needed to train it. We’ll use the high-level Estimator API, as it has pre-built routines for training, evaluating and serving the model which we’d otherwise have to re-write. The model function The core of the Estimator framework is a model function. This is a function we write and hand to TensorFlow, so that the framework can instantiate our model as often as it needs to (for instance, it might run multiple models across different GPUs/machines, or it might re-run the model with different learning rates to determine the best). The model function is a Python function that takes the input feature tensors (and some other parameters) and returns an EstimatorSpec, which contains a few things: A measure of the model’s loss (e.g. how well it’s fitting the training data) A training operation (the ‘code’ to be executed to train the model in each step) Evaluation metrics (the measures of model success we’ll view in TensorBoard) For measuring loss we’ll use the built-in mean squared error: And for the training operation we’ll use the built-in Adam optimizer to minimize the loss: And we’ll measure one evaluation metric, accuracy: Finally, we return an EstimatorSpec: You can see the code all together in model.py. Getting the data from Neo4j We’ll use a Cypher query to get the data from our graph database and format it for training: This returns one row for each review in our database. We then format each row for TensorFlow as a tuple of (input_dict, expected_output_score): Next, we construct a TensorFlow Dataset. This is a high-level TensorFlow API that allows the framework to do a lot of the hard work of transforming and distributing our data for training. We’ll use the API to create a dataset from our generator, shuffle the data and batch it: Shuffling helps the network learn, as it will encounter different combinations of people and products in each batch. Similar to the model function we created earlier, we will now create an input function. TensorFlow will construct a dataset many times during training (for example, when it reaches the end of the data and wishes to restart) and the input function gives it the ability to do so. We create an input_fn for TensorFlow that requires no arguments: Putting it all together Now that we have our model_fn and our input_fn, we’re ready to train! We’re going to use the train_and_evaluate method of the Estimator API to coordinate the training and evaluation for us.
We construct an Estimator, specify the training data and number of steps in a TrainSpec, and specify the evaluation data in an EvalSpec: We specify steps=None so that the whole evaluation set will be used (instead of just the first 100 items). Now we’re ready to go; we can run the whole training and evaluation: Initial result: 92% evaluation accuracy The Estimator framework will save our model and output summaries that TensorBoard can display for us. Fire up TensorBoard and watch the progress: After 10,000 training steps the model achieves 92% accuracy: This result is not too bad for such a simple implementation, but we can do better. Improving training with random walks Luckily, there is a short extension to our code that can help our model train to 97% accuracy. We’re going to perform random walks across the graph. A random walk means starting at one graph node, randomly choosing between the nodes it’s connected to, then doing the same from that node, keeping your path in a list. It’s somewhat similar to how a drunk person traverses a city. Illustration credit The typical input to machine learning is fixed-size tabular data. Graphs can have any number of connections and nodes; therefore they do not readily fit into a fixed-size structure. This makes graphs hard to feed into machine learning. Random walks are a very powerful way of capturing the connectivity of a graph in a simple data structure. Each walk outputs a list of fixed length. Each walk is a sample of the graph, and with a sufficient number of random walks the entire connectivity of the graph is represented. They’ve been very successful in a number of areas, including modeling language, social networks and protein structures. Random walks benefit our training because they propagate style and preference embeddings across the graph. For example, imagine language developing separately on two islands over millennia. When the islanders meet for the first time, they are unlikely to understand each other at all and may forever struggle with each other’s languages (e.g. English and Japanese). Hundred Islands However, if instead the language developed on one connected land mass, words and grammar would travel across the land as they developed, providing a basic compatibility between the members of different countries (e.g. Spanish and Italian). In a similar way, random walks help build compatibility between the style and preference vectors of people and products across our graph. When we estimate someone’s review for a product they’ve never reviewed (and which perhaps none of the people near them in the graph have reviewed either), there’s more chance their preference vector speaks the same “language” as the target product’s style vector. Implementation There are two steps to the implementation: Index the row data by the product and person IDs Sample batch_size-length walks from the indexed data Then we feed this data into our Dataset as before. Whilst the code is reasonably straightforward, it is a little long for displaying here; a simplified sketch follows, and you can read data.py online or clone the whole repository.
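Here is a minimal sketch of sampling one fixed-length walk, assuming the rows have already been indexed into a neighbor lookup; the neighbors dict and walk_length below are illustrative, not the repository's actual code.

import random

def random_walk(neighbors, start, walk_length):
    # neighbors maps each node id to the ids it is connected to,
    # e.g. a person to the products they reviewed and vice versa.
    walk = [start]
    for _ in range(walk_length - 1):
        walk.append(random.choice(neighbors[walk[-1]]))
    return walk

# Example: walk 6 steps through a tiny person/product graph.
neighbors = {
    "person_1": ["product_a", "product_b"],
    "person_2": ["product_b"],
    "product_a": ["person_1"],
    "product_b": ["person_1", "person_2"],
}
print(random_walk(neighbors, "person_1", 6))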
Result: 97% evaluation accuracy Training the model now achieves 97% evaluation accuracy. Note that the model starts from randomly initialized variables and receives randomly ordered training data; therefore it can achieve different results on each training run. The model does not always converge and does not always achieve its highest performance. I’ve shown below 20 separate trainings of the model, a few of which occasionally achieve as high as 98% accuracy: Don’t leave chance up to chance! Multiple runs of the same model training with different random starting conditions and training data ordering Next steps Thanks for reading this far! There are many interesting problems to solve as a follow-on from this one: Introduce noise into the dataset Generate review scores from a greater number of style and preference categories Use a more complex model for review score generation Generate a larger dataset and scale the model up to cope with it Reduce the number of reviews per person (i.e. introduce greater sparsity) All of the above can be synthesized easily using our generate-data codebase. Once you’ve generated the data, it’s quite fun and addictive to try to find a successful predictive model. Limitations of our approach Whilst the approach in this article has achieved high accuracy with few lines of code, it does have limitations. In particular: It doesn’t know how to predict reviews for new people or products (the “cold start” problem) GPUs have limits on how large a variable can be stored in their memory, and therefore on how many people/products can be trained for We’ve used a very simple dot-product model. If the model were more complex, it could be difficult to simultaneously train the embedding and the deep model There are many popular approaches to recommendation systems; Wikipedia is a good starting point for learning about others. These writings are part of a year-long exploration of AI architecture topics. Applaud this article, follow this publication or follow my twitter to get updates when the next articles come out. Feel free to let me know topics you’d like to learn more about.
Review prediction with Neo4j and TensorFlow
239
review-prediction-with-neo4j-and-tensorflow-1cd33996632a
2018-06-17
2018-06-17 08:25:34
https://medium.com/s/story/review-prediction-with-neo4j-and-tensorflow-1cd33996632a
false
2,738
Research into machine learning and reasoning
null
octaviandotai
null
Octavian
hello@octavian.ai
octavian-ai
AI,MACHINE LEARNING,GRAPH DATABASE,NEO4J
Octavian_ai
Machine Learning
machine-learning
Machine Learning
51,320
David Mack
@SketchDeck co-founder, https://octavian.ai researcher, I enjoy exploring and creating.
1d81a71197ab
DavidMack
1,355
226
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-27
2018-08-27 06:22:00
2018-08-27
2018-08-27 06:22:10
0
false
en
2018-08-27
2018-08-27 06:22:10
1
1cd4e4d38c3f
2.539623
0
0
0
Download [PDF] Think Like a Data Scientist: Tackle the data science process step-by-step By Brian Godsey DOWNLOAD EBOOK PDF KINDLE…
1
Free Download Think Like a Data Scientist: Tackle the data science process step-by-step By Brian Godsey PDF Full #EPUB Download [PDF] Think Like a Data Scientist: Tackle the data science process step-by-step By Brian Godsey DOWNLOAD EBOOK PDF KINDLE Link https://collectionbooks.ebookoffer.us/?q=Think+Like+a+Data+Scientist%3A+Tackle+the+data+science+process+step-by-step
Free Download Think Like a Data Scientist: Tackle the data science process step-by-step By Brian…
0
free-download-think-like-a-data-scientist-tackle-the-data-science-process-step-by-step-by-brian-1cd4e4d38c3f
2018-08-27
2018-08-27 06:22:11
https://medium.com/s/story/free-download-think-like-a-data-scientist-tackle-the-data-science-process-step-by-step-by-brian-1cd4e4d38c3f
false
673
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
masmekhova
null
7b106651344e
masmekhova
0
1
20,181,104
null
null
null
null
null
null
0
null
0
e914a57f2e88
2018-07-10
2018-07-10 06:54:26
2018-07-12
2018-07-12 07:45:25
2
false
en
2018-09-06
2018-09-06 09:45:19
0
1cd78a93fe73
4.03239
4
0
0
Pacing Technology : Opportunities & Challenges
5
Humans, Jobs, Recruitment : AI Pacing Technology : Opportunities & Challenges The fifth generation of computing has started. The world is seeing machines responding with developed intelligence, transforming the way business works. By digitising, many companies see AI as an essential tool to remain competitive in the market. People across business domains are adopting AI worldwide to find effective solutions for underlying problems and challenges. One of the biggest fears about increasing automation is the threat to existing jobs in the current market. “ Within just 20 years, many current jobs will be replaced by software automation. While many manufacturing and processing roles — especially in dangerous or highly specialized environments — are already filled by robotic workers, other areas will soon be affected.” — Bill Gates There has been incredible growth in the space of automation technology, AI and computation. It is opening up new and advanced roles for people who explore this space. The following picture explains how new technologies are creating new opportunities and challenges for humans. Evolution of opportunities with technologies for the business, by the humans, for the humans These developments pose great challenges for job seekers choosing and building a career path among various technology domains, and their current ambitions might not be relevant in the near future. “ 65 % of children today will end up in jobs that are yet to be created ” — MYOB report Recruiters are finding it difficult to hire candidates for upcoming new technologies. Recruiters play a great role in finding: Jobs from employers Candidates for the jobs Jobs for the candidates. Some of the challenges that the recruiting industry faces are: Searching for and sourcing candidates for jobs, and ensuring the right job for the right person. Increasing hiring time due to low response rates from candidates, in other words failing to know the interests of candidates. Creating context and terminology for TA (talent acquisition) around new tech jobs coming to the market to make their first conversation intuitive. How can AI help? With the help of automation, providing a personalised candidate experience and targeting the right market. Automation - Target, Match & Rank People are still using traditional methods for sourcing candidates: collecting their resumes from job portals or through agencies and manually matching them to job descriptions. Recruiters must also ensure that they assign the right candidates to the right job. There is a high resource allocation time and cost here. Another factor to consider is the high dependency on, and low ROI from, job portals. It’s a known fact that the quality and the number of people hired at the end of the process directly depend on the ability to target and engage the right crowd when the process begins. Data about candidates from different technical and social platforms could be analysed to meet the exact job demands. AI technologies make use of available data and candidates’ profiles to automatically match them against jobs, producing results without discrimination and actually saving a lot of time for recruiters. - Context around Jobs TA teams face challenges staying aware of new technologies in the market. TA needs sufficient context around jobs and must be precise while talking to candidates, which can directly affect the rate at which candidates turn up for the next level of discussions.
AI can do this job for TA by providing much more context around skill sets, along with automatically generated sets of questions, making their initial interaction more intuitive and relevant. - Commencement of hiring When should I start hiring for my ‘xyz’ requirement? If staffing or recruiting firms knew the answer to this question, they could easily target leads, know the exact potential of what they can do, and lay down their hiring strategies accordingly. Hiring time depends on a lot of factors, including positions opened, candidates’ availability in a particular technology, candidates’ response time, the rate at which interviews can happen, interview success rate, offer acceptance rate and actual conversion rate. An AI engine based on all of these factors can help a company know when it should start to work on a particular requirement to make the best use of its resources. Personalised Candidate experience It is crucial to catch the attention of a candidate towards joining a particular organisation. In this candidate-driven recruitment market, selling a job is more like selling a product to a customer. To sell the job the right way, TA should understand the job role from the candidate’s perspective and should be able to make it more personalised and relevant for them. TA usually lack context on new technologies and can be irrelevant while talking to candidates. Surveys consistently show that, while the majority of candidates prefer being contacted via email, some prefer receiving phone calls over text messages. There is a large set of candidates who like effective communication, keeping them informed about interview schedules and related information. A personalised experience can increase candidates’ chances of understanding the opportunity clearly and responding accordingly. AI can make use of candidates’ data available through different platforms and recommend personalised mails or job descriptions to present to the candidate, giving candidates a clear idea of “what is in it for them” in this job. Target Market The staffing needs of various companies around the world can be learned by analysing job feeds, which can be used to gain competitive advantage, crack possible leads in the market and increase company revenue. For recruitment agencies, AI can help grab market opportunities by analysing different requirement types and comparing them with your team’s strength. It can give great insight into the recruitment activities of customers and competitors. “AI is helpful in creating and channelling jobs by harnessing market need in the era of continuously upgrading technologies, and in helping candidates build great career paths where a lot of new opportunities are open to them”
Humans, Jobs, Recruitment : AI
152
humans-jobs-recruitment-ai-1cd78a93fe73
2018-09-06
2018-09-06 09:45:19
https://medium.com/s/story/humans-jobs-recruitment-ai-1cd78a93fe73
false
967
Your partner for digital transformation to Collaborate | Innovate | Change
null
hashworks
null
Hashworks
social@hashworks.co
hashworks
DATA SCIENCE,DEVOPS,MOBILITY,APPLICATION DEVELOPMENT,DESIGN THINKING
hashworksco
Hiring
hiring
Hiring
16,840
Tarun Bonu
null
781b1406acbf
tarun_bonu
12
1
20,181,104
null
null
null
null
null
null
0
null
0
d28e45204100
2018-05-26
2018-05-26 09:45:54
2018-05-28
2018-05-28 20:29:40
1
false
id
2018-05-28
2018-05-28 20:29:40
3
1cd8a2b60876
1.45283
2
0
0
Artificial intelligence is already everywhere. Don't believe it?
2
Want to learn about AI? Here is one place to learn for FREE! Artificial intelligence is already everywhere. Don't believe it? Open your smartphone and open YouTube, and you will get video recommendations from an AI that has learned your viewing habits. Facebook uses AI to analyze the conversations you have on Facebook to obtain customer data. In fact, AI has already entered our digital lives, just in an invisible form. With so much AI already in our daily lives, we still have only a vague understanding of it: our lives are being influenced by something we do not clearly understand. We could search Google to learn about AI, but with so many learning resources it is hard to know where to start. Fortunately the University of Helsinki has a solution: an AI course. Elements of AI. A place to learn about AI topics, by the University of Helsinki. (source: Thenextweb) The University of Helsinki, Finland, created a course covering basic topics in AI. The course treats AI broadly and only covers fundamental concepts such as: what AI is (what counts as AI); the definition of machine learning; kinds of neural networks; the implications of AI; etc. The course aims to provide general knowledge that ordinary people can pick up. The University of Helsinki wants to spread knowledge about AI because much of the general public is still unaware of the AI around us. According to a 2017 survey, 3 out of 10 people know about AI but do not know which technologies work in the AI field. If people do not know something, they cannot love it; this saying applies to AI too, as shown by a Forbes poll in which 41% of respondents could not give an example of an AI they could trust. So for those of us who do not want to fall behind and become technologically illiterate, it makes sense to take this course to expand your knowledge. Besides being made for the general public, the course is completely free! With just your email address you can start gaining insights that will be useful for our future. Adapted from: https://futurism.com/finnish-university-ai-course/ link: elementsofai.com
Want to learn about AI? Here is one place to learn for FREE!
5
mau-belajar-tentang-ai-berikut-salah-satu-tempat-belajar-gratis-1cd8a2b60876
2018-06-08
2018-06-08 02:34:57
https://medium.com/s/story/mau-belajar-tentang-ai-berikut-salah-satu-tempat-belajar-gratis-1cd8a2b60876
false
332
Forming tech based society
null
suarmedia
null
suarmedia
restu.arif@techlab.institute
techlab-institute
TECHNOLOGY,SOCIAL MEDIA MARKETING,SOCIAL MEDIA,SOCIAL MEDIA AGENCY
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Andi muhammad
null
78cdcd356068
Andi.muhammad3333
4
13
20,181,104
null
null
null
null
null
null
0
"""Model function for CNN.""" # Input Layer input_layer = tf.reshape(features["x"], [-1, 28, 28, 1]) # Convolutional Layer #1 conv1 = tf.layers.conv2d( inputs=input_layer, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.relu) # Pooling Layer #1 pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2) # Convolutional Layer #2 and Pooling Layer #2 conv2 = tf.layers.conv2d( inputs=pool1, filters=64, kernel_size=[5, 5], padding="same", activation=tf.nn.relu) pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2) # Dense Layer pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64]) dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu) dropout = tf.layers.dropout( inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)'output': tf.Variable(tf.random_normal([output_num_units], seed=seed))}
1
null
2017-11-12
2017-11-12 14:00:14
2017-11-28
2017-11-28 19:51:19
7
false
en
2017-11-30
2017-11-30 02:36:33
12
1cda978d0cef
8.276415
10
0
0
Honestly, based on my experience, building a startup is not an easy task; it requires perseverance over a long time. Many of us, especially…
5
How to Start an Artificial Intelligence-Based Startup from Zero? Honestly, based on my experience, building a startup is not an easy task; it requires perseverance over a long time. Many of us, especially millennials, are likely to take a leap of faith and jump onto the battleship without proper planning because they think they can make fast money and win freedom of time. The reality is the opposite: 90% of startups fail in their first year of operation and become part of this statistic. Top 20 reasons startups fail Recently there has been a perception among some entrepreneurs that AI is AWESOME, WORKABLE and TRENDY. According to Marvin Minsky, as interviewed in Hal's Legacy: Only a small community has concentrated on general intelligence. No one has tried to make a thinking machine. The bottom line is that we really haven't progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack. We have got to get back to the deepest questions of AI and general intelligence. "General intelligence" does not mean exactly the same thing to all researchers. In fact, it is not a fully well-defined term, and one of the issues raised here is how to define general intelligence in a way that provides maximally useful guidance to practical AI work. Perception is reality Back to our topic: how do you actually build an AI startup? There are 5 elements you need to consider before building one: blue ocean strategy, technology, data, process and people. Blue Ocean Strategy What is Blue Ocean Strategy? According to Wikipedia, Blue Ocean Strategy is a marketing theory from a book published in 2005, written by W. Chan Kim and Renée Mauborgne, professors at INSEAD and co-directors of the INSEAD Blue Ocean Strategy Institute. From my perspective, it is a strategy in which a company systematically creates an uncontested market space that makes competition irrelevant. Such a company has high product differentiation and low operating costs. To achieve this, the startup must first know its USP (Unique Selling Proposition) compared to other competitors, its target market (local, South East Asia, Asia, global?), and gain some traction to validate market needs (user growth or revenue). Is the startup focusing on a horizontal or a vertical problem? For example, IBM provides solutions to general problems such as data analytics, cloud computing, etc.; it tackles horizontal problems, as do DeepMind, Amazon, Facebook, Microsoft and Baidu. Compare that with vertical players that focus on a specific problem, such as Tesla or Uber, which focus on transportation. Like us (a Malaysian startup, Soding): we run an AI-powered recruitment platform for employers to hire great software developers. What makes us unique compared to other competitors (Codility & Hackerrank) is that we have our own AI technology to analyze technical skills and personality. Our target market is SMEs and tech-based companies, covering only the South East Asian market, with potential revenue of RM80 mil. Currently our traction is the several companies and tech candidates that have already registered with us; our clients are local and our talents are global, but we are still working toward product-market fit. Developing an AI product is not an easy thing.
Nowadays a lot of AI startups have only weak AI inside their product. This disadvantage makes it easy for competitors to copy the product and build their own. Try to build at least a minimally strong AI, even if imperfect, in order to achieve uniqueness. You don't need to build your own AI framework; you can use an existing one such as Scikit-Learn or TensorFlow, and your solution must focus on a specific customer problem. Red Ocean Strategy vs Blue Ocean Strategy Technology As I mentioned before, many entrepreneurs with a technical background assume AI must work 100% of the time, that keeping humans in an AI pipeline is a failure, and that it must be accurate almost always. All of these perceptions are absolutely WRONG. Let me tell you our story: our product is not required to work 100%, as long as it delivers results and convenience. Currently we can automate only half of our processes, because the AI engine still requires a lot of training data for the predictive modelling to run efficiently and accurately, and we need a feedback loop with clients for product-market fit. Earlier, we joined an accelerator program, and one of our mentors wondered why he could not see or use the product yet. For the reason above, we could not show the complete product, just a green terminal in front of him. Our product consists of three parts: static code analysis, predictive analysis and personality analysis. We used a combination of AI frameworks as our state of the art. Our predictive modelling still needs human intervention in order to make it right, not only smart. It does not need to be 100% accurate unless the product risk is high, as in healthcare, which demands serious attention, especially when human life is involved: a tiny mistake can cause serious damage or even death. At an AI event, one member of my audience, the CEO of iflix Malaysia, Azran Osman-Rani, asked me how our technology differs from IBM Watson. At that time we used NLTK, a stack focused on Natural Language Processing, to process social media and candidate data for sentiment analysis (English only). I replied that our technology was then just below 80% accuracy on the collected training data, but the answer did not really satisfy him. During the product-market fit period we found there are pros and cons to both NLTK and IBM Watson, but I can't share the details since we are partnered with IBM. Maybe one day I can meet him again to share our findings. Back to our story: why do we still need humans even with automation? To avoid disaster even under full autonomy, to learn from user experience and market feedback during product-market fit, and of course to train the models to be smart and right. Data Mining Data When we talk about data, how can we retrieve it to train the AI? There are many ways, such as existing datasets from a database, or data crawling using Beautiful Soup or Tweepy. But remember, Malaysia has a privacy law, the PDPA (Personal Data Protection Act 2010), to protect personal data. You need to do some exploratory data analysis, such as identifying features. After that, clean the data (fix outliers and impute missing values); this takes 40% of your time. Then use a machine learning approach for modelling. Finally, evaluate your model to determine its performance, such as accuracy. You need to optimize your model from time to time to keep getting better results.
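A minimal sketch of the clean-model-evaluate loop just described, using pandas and scikit-learn. The file name, columns and the 0/1 "hired" label are hypothetical placeholders, not the author's actual pipeline.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset with numeric features and a 0/1 "hired" label
df = pd.read_csv("candidates.csv")

# Impute missing values with medians, then clip extreme outliers
features = df.drop(columns=["hired"]).select_dtypes("number")
features = features.fillna(features.median())
features = features.clip(features.quantile(0.01), features.quantile(0.99), axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    features, df["hired"], test_size=0.2, random_state=42)

# Model, then evaluate to determine performance such as accuracy
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))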
When dealing with clients, DON'T ever request any data before your modelling is complete, and always test with your OWN dataset to show more added value. Big Data Process A rule of thumb for your startup: make sure you split your effort 50/50 between business and product development, so that both processes improve at the same time. In mid-2016 we were surprised by a local startup scandal in which the tech guy was ditched by the group of co-founders. At the end of the story, that startup's business was shaken when the tech guy stepped out with his tech: no product whatsoever remained for the business. The same goes for AI engineers; they are a rarity in startups, so please appreciate them. In business development, the person handling this process must know some social engineering, be able to educate the users in the market (even if it is quite hard and takes a lot of time), help clients solve their problems, and take care of the security and privacy of the stakeholders. For our part, we have our own internal QA (quality control) to take preventive, rather than corrective, action against anything bad that might happen. People There are a lot of misconceptions about the term AI. Many people who claim to know AI keep abusing the term, as we hear on social media and at knowledge-sharing events. What they know is that AI will replace jobs in the future, destroy humanity, and so on. They also claim that using an AI framework is as easy as 1, 2, 3. Even though I graduated with a master's degree in computer science and have worked on this since the end of 2015, I never thought it was that simple. Have you ever heard of ConvNets (Convolutional Neural Networks)? This "thing" specializes in image processing. First, you prepare a subset from one sample image. Once prepared, set a filter. Convolve the filter with the image: slide it over the image spatially, computing dot products, so each result is the dot product between the filter and a small chunk of the image. Prepare a few activation maps and convolve across all spatial locations. Below is a code sample for a ConvNet using TensorFlow: ConvNet architecture It's fucking hard, right? What we conclude from the explanation above is that we need a deeply technical team, especially with core skills in machine learning; most of the time we can find them among PhD students, along with top engineers who can build and deploy AI. Data engineer, data scientist and data analyst are different roles: a data engineer is responsible for cleaning, preparing and optimizing data for consumption; a data scientist converts that data into storytelling; a data analyst collects the data and helps companies make better business decisions. For a startup, find talent that can do more than one of these in order to save resources. In most cases the CEO needs to be deeply technical too. It is rare to find such talent in the local market because talent prefers stable companies, better benefits and more security, and tends to apply to MNCs (that's why Soding exists!). Tech companies are willing to pay well for hiring because it is hard to attract local talent. My advice when seeking talent: look for business domain knowledge combined with technical and mathematical skills; it is part of the DNA requirement for team members. Our team consists of 7 people, 2 full time and 5 part time, all Malaysian except 1 from Portugal.
How to build a successful data team without hiring a unicorn Conclusion At the end of this article, I conclude that these 5 elements can help you build an AI startup. Are you ready for AI technological advancement? Thank you to Cradle for making our idea a reality. If you are a tech talent looking for opportunities in the South East Asia region, you can register here and we will be in touch. If you resonated with this article, please subscribe to my personal email list. You will get a weekly update from me about coding, recruitment & Artificial Intelligence.
How to start Artificial Intelligence-Based Startup from Zero?
99
how-to-start-artificial-intelligence-based-startup-from-zero-1cda978d0cef
2018-04-08
2018-04-08 08:13:46
https://medium.com/s/story/how-to-start-artificial-intelligence-based-startup-from-zero-1cda978d0cef
false
1,915
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Mohammad Nurdin
null
3be0f6cfdb3c
mohammadnurdin
83
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-14
2018-03-14 04:56:05
2018-03-14
2018-03-14 04:57:44
0
false
en
2018-03-14
2018-03-14 05:24:36
49
1cdd9b49e8e4
7.083019
0
0
0
While we were preparing to launch the 2018 Nuix Black Report, we came across a variety of eyebrow-raising and controversial cybersecurity…
5
Blockchain, Nuclear War, and Artificial Intelligence: 2018's Most Extreme Cybersecurity Forecasts While we were preparing to launch the 2018 Nuix Black Report, we came across a variety of eyebrow-raising and controversial cybersecurity predictions. We present here our three most over-the-top cybersecurity theories. We don't necessarily endorse these. Scary predictions from the corporate sector also double as entertainment for the Twittersphere. Nevertheless, we've aimed to outline some of the nihilistic, existentialist, and possibly think tank-endorsed views of the techno-anarchists, futurism junkies, and other communities disproportionately represented among the hacker community. On the other hand, if you're looking for use-it-right-now-to-defend-yourself info, think Sun Tzu and sign up here for our 2018 Black Report to learn what hackers are really thinking when they attack you. Last year, our report became the best-known industry survey of real-world professional hackers and pentesters — both criminal and legitimate. Our 2018 findings are similarly explosive. Most of Society May Never Be Secure Cybersecurity Ventures anticipates global cybersecurity spending will exceed $1 trillion between 2017 and 2021. Despite this tsunami of cash, many technology executives predict we will largely continue to fail at preventing attacks. Chris Pogue, Nuix's Head of Services, Security, and Partner Integration, says organizations today are no safer than they were 20 years ago. This is based on his experience with over 2,000 breach investigations. Despite impressive advances in technology, hackers remain one step ahead of the organizations and individuals they target — partly because there are many potential flaws in cyberdefenses, but also because an attacker only has to be right once. Are we doomed to this status quo? During our annual User Exchange conference, Rich Cummings, Nuix SVP of Cyber Product and Strategy, noted that many companies have a dozen or more different systems and cybersecurity software vendors — sometimes resulting in hundreds or thousands of alerts daily. Our Cyber Threat Analysis Team regularly sees extreme cases like a company suffering a major, preventable hack despite a seven-figure investment in cyberdefenses. The old saying "It's not if you get breached, it's when," is now "It's not if you get breached, it's how badly," Pogue said. "You will always pay for security one way or the other — with interest or without." Start with the hype around blockchain this year, and why Wall Street has been so excited about these emerging technologies. A major reason for the hype is security. Imagine blockchain as a giant tower of math, with any transaction affecting the entire system. Committing fraud against many cryptocurrencies and other blockchain applications, such as supply chain management, would require fooling the entire ecosystem. It's as if you tried to insert an oversized Jenga block into a virtually unshakable 30,000-foot Jenga tower — there's no way to make the block fit without the entire system rejecting it. Obviously, we've simplified this example. (Of note: in the lead-up to last year's Black Report, Chris Pogue broke the news that many U.S. law firms have stockpiled bitcoin in order to pay off clients' ransomware attackers.) For years, there's been far-fetched talk in Silicon Valley of starting over and re-inventing the internet. Blockchain technology is perhaps the most dominant proof of concept for this.
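To make the Jenga analogy concrete, here is a minimal hash-chain sketch (an illustration of the general tamper-evidence idea, not any particular cryptocurrency's implementation): each block commits to the previous block's hash, so altering any earlier transaction invalidates everything after it.

import hashlib

def block_hash(prev_hash, data):
    # Each block's hash commits to the previous hash plus its own data
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

GENESIS = "0" * 64
chain, prev = [], GENESIS
for tx in ["alice->bob:5", "bob->carol:2", "carol->dave:1"]:
    prev = block_hash(prev, tx)
    chain.append((tx, prev))

# Verification: recompute every hash; a tampered transaction breaks the chain
prev = GENESIS
for tx, h in chain:
    prev = block_hash(prev, tx)
    assert prev == h, "chain broken at " + tx
print("chain intact")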
As the theory goes, the internet was designed by naïve academics and never built for security. Right now, computer networks are like the human immune system: you can eat well and exercise (patch your software), but you'll never be completely immune to viruses or breaches — for example, all it takes is a rogue employee. Research suggests half of data losses are due to insider threats. Expect things to get worse before they get better. According to our research, most hackers could completely compromise a system in less than 15 hours, yet the average time to discover a breach is 250–300 days. In some cases, an organization can be years behind an attacker before it discovers a breach. Could Cybersecurity Flaws Lead to an Extinction-level Event? There's no shortage of highly funded think tanks predicting the biggest threats to humanity and the world order: war, nanotechnology, super-viruses, and artificial intelligence. For years, major tech executives and scientists like Elon Musk, Stephen Hawking, and Bill Gates have been raising public attention to the risk of the latter. Some experts estimate 50–50 chances of a conflict with North Korea in 2018 — though many similar claims are purposefully overhyped. More grounded estimates show the extraordinary difficulty of predicting a nuclear event. For instance, the Global Catastrophic Risk Institute cites research with a probability of nuclear conflict ranging from "once per 14 years to once per 100,000 years." For perspective: during the Cuban Missile Crisis, President John F. Kennedy saw the chances of nuclear war as being as high as 50 percent — considerably worse than Russian roulette. In most of the think tanks' extinction-level scenarios, poor cybersecurity hygiene or a failure of imagination around risks is a leading candidate for sparking the event. Consider the risk of a nuclear incident in any number of scenarios where malware attacks a nuclear missile silo or radar facility. In 1983, Soviet Lieutenant Colonel Stanislav Petrov may have prevented a nuclear war between the US and USSR by simply ignoring false computer warnings of a US nuclear strike. The radar readings of an imminent attack were the result of a malfunction. Picture an alternate version of history where it was a hack instead: in January, the international policy think tank Chatham House reported that US, UK, and other nuclear weapons programs are increasingly vulnerable to cyberthreats. Today, many missile silos run on 1970s-era computing systems using floppy disks, and some cyber experts argue that's a good thing — similar to the argument for paper ballots rather than electronic voting machines. It's easier to hack a voting machine than a million pieces of paper, and it's easier to hack a missile silo running 2018 technology than an offline one that requires human beings to literally push a button. A variety of events keeping the experts awake at night — nanotechnology, malicious artificial intelligence, nuclear warfare — are highly tied to our level of cybersecurity preparedness. News cycles like Facebook shutting down robots because they began speaking their own language to each other may become more common in coming years. Yet the scenario of a hacker commandeering an automated AI-driven soldier, perhaps like the killer robots Russia is investing in, seems more likely. In short, human hacker-caused AI mischief is probably a scarier and more realistic possibility than a fully sentient AI uprising.
Immortal Computer Viruses Saving perhaps the most absurd for last: we know technological and societal norms in the future could be unthinkable compared with today. HBO's Westworld raising the question of whether to give your Amazon Alexa legal rights is just the beginning. There are serious proposals on the table to raise chickens in virtual reality. Alternatively, some forecast that eating non-test-tube meat will be illegal for ethical and carbon emissions concerns (in this case, we hope artificial meat will at least be delicious someday). These types of concerns matter to AI. If a chicken has rights, should a supercomputer with intelligence (and possibly self-awareness) also be granted ethical considerations, given that it may be equivalent to the collective brain power of 10⁹⁹ chickens? Other Very Serious People forecast that, like riding a horse on most common roads, driving will be illegal for the general public in a few decades — assuming our legislators decide that automated cars are safer than human drivers. The problem there is that self-driving cars are already deciding who to kill and that cybersecurity is one of the top threats to autonomous vehicles. Many of these scenarios have been mainstreamed from Matrix fan fiction to think tank PowerPoint decks in the past five years. In 2017, scientists and academics were quoted in the media as appreciating how absurdist humor in shows like Rick and Morty and Black Mirror drew attention to the societal impact of future technologies. Westworld's dramatized AI ethics debate poses interesting questions often discussed in Silicon Valley. For marketers, one hypothetical scenario kicked around is when the government may intervene in the level of detail in customer simulations, assuming exponential growth for those simulations. In the year 2050, a marketing intern might simulate the daily routines of 10 million people just to determine whether they'll buy one brand of coffee or another, perhaps to develop an ad campaign. Someday, there may be government regulation requiring that such a simulation is capped at an "ethical" level of processing power, to avoid any possibility that ending the program will effectively destroy 10 million "sentient" beings. Unlike Westworld's premise, the question of whether AI systems have legal rights may come down to the rights of lines of code rather than the rights of the perfect physical human replicas portrayed in the show. So where do computer viruses come in? It's an open scientific question whether viruses targeting organisms can be considered "alive" or not. The flu virus might be alive, or it might just be a sort of dead automated program that hijacks human cells. In this vein, Kurzgesagt, the popular Munich-based educational content studio, notes that there is already debate over whether computer viruses can be considered "alive." The argument against is that this is all simply ridiculous for obvious reasons. The argument for is that life can be described as information continuing to self-replicate or persist in some form over time (e.g. genetic material). If Stuxnet can one day infect an AI system in any meaningful way, you could argue that it's like one of many single-celled organisms in the evolutionary journey to super-intelligence.
The case of immortality in computer viruses gets even more complicated when one considers the concept of emergence — well explained by Kurzgesagt — where elementary systems (molecules, ants, individual computers) become incredibly powerful when part of broader systems (organic life, ant colonies, and computing networks). We have yet to see what an emergence of computer viruses might look like. And Now for Some Predictions You Can Believe At Nuix, we prefer to stay grounded when it comes to predictions. There's no shortage of fear, uncertainty, and doubt in cybersecurity. It's an open question whether we've reached peak fear — or whether that fear is warranted. Take every prediction you read here, and everywhere else, with plenty of grains of salt. As we've learned from the Black Report, new and exciting technologies often aren't even the most devastating for hackers: hackers are still (successfully) using the same techniques they used in the 1990s. Phishing, insider threats, password-guessing, and other simple tactics are still some of the most effective ways for hackers to get into your system. Think Equifax. If you want to learn more about existential theories for the future of humanity, subscribe to Kurzgesagt or Waitbutwhy. For a look into the minds of today's hackers and how they're likely attacking you or your organization, sign up to read our 2018 Nuix Black Report. A version of this post appeared on the Nuix company blog here.
Blockchain, Nuclear War, and Artificial Intelligence: 2018’s Most Extreme Cybersecurity Forecasts
0
blockchain-nuclear-war-and-artificial-intelligence-2018s-most-extreme-cybersecurity-forecasts-1cdd9b49e8e4
2018-03-14
2018-03-14 05:24:36
https://medium.com/s/story/blockchain-nuclear-war-and-artificial-intelligence-2018s-most-extreme-cybersecurity-forecasts-1cdd9b49e8e4
false
1,877
null
null
null
null
null
null
null
null
null
Cybersecurity
cybersecurity
Cybersecurity
24,500
Matt Culbertson
Interested in Marketing, Digital, Communications, Advertising. Bay Area transplant by way of Phoenix, @Cronkite_ASU.
1f18e88082b2
mattculbertson
791
1,176
20,181,104
null
null
null
null
null
null
0
def listsum(numList):
    # Accumulate the sum of the list items
    theSum = 0
    for i in numList:
        theSum = theSum + i
    return theSum

print(listsum([1, 4, 5, 7, 9]))
2
7bcd64d2df5b
2018-09-07
2018-09-07 22:45:53
2018-09-07
2018-09-07 22:57:43
0
false
en
2018-09-07
2018-09-07 22:57:43
0
1cdef492f33c
1.177358
0
0
0
Here I will explain the application process Springboard uses.
5
Preparing for a Springboard bootcamp application Here I will explain the application process Springboard uses. I applied to Springboard's 6-month Data Science bootcamp about two weeks ago when I saw their $7,500 price tag (less than half of Metis's 12-week bootcamp). The application process is very straightforward and doesn't take much time at all. I am now at the portion where I have a take-home test. I need to study. How can I know what is on the test? I took a practice exam, and had to look up the very first challenge: a simple calculation of the sum of a list. They left some of the code blank, and I had to fill in the rest. I will give an update when I take the real exam. They gave me an extension on the take-home test deadline I'm not sure if this is to be taken the wrong way, but on the night that the take-home test was due, I asked for an extension. If you're keeping up with the publication you will know that I have two jobs and run a company, so I don't have much time to study, let alone write these notebook entries. However, I asked for an extension, and they gladly granted it to me. This may mean that they have low numbers, or they may have liked my initial resume and screening, but to be honest, it might just be that they'll take money from anyone who applies. Hopefully it is not the latter, and it was all in good faith. Anyhow, I will be posting my notes while studying for the test. They might be messy, and I don't care. I am just saving my work in a better place than my own computer's storage. author == Nick
Preparing for a Springboard bootcamp application
0
preparing-for-a-springboard-bootcamp-application-1cdef492f33c
2018-09-07
2018-09-07 22:57:44
https://medium.com/s/story/preparing-for-a-springboard-bootcamp-application-1cdef492f33c
false
312
This is going to be a play-by-play of my path from an Economist/Econometrician to a Data Scientist/Analyst. The path is not complete, and I do not know how long it will take.
null
null
null
Economist to Data Scientist
nick.seanet@gmail.com
economist-to-data-scientist
null
null
Hiring
hiring
Hiring
16,840
Nick McLoota
null
d40024c9a5e3
mcloota.nick
1
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-19
2017-12-19 05:46:13
2017-12-19
2017-12-19 05:49:44
1
false
en
2017-12-19
2017-12-19 05:57:33
2
1ce0625c8bcb
0.856604
0
0
0
Every day, all of us are bombarded with marketing messages on hoardings, print media, SMS, mail, TV and phone calls that are of no…
3
Artificial Intelligence use case for marketing professionals Every day, all of us are bombarded with marketing messages on hoardings, print media, SMS, mail, TV and phone calls that are of no relevance to us and are in fact a nuisance. Google ads make a lame attempt at personalization but are actually an equal nuisance. Personalized messaging based on customer preferences is the holy grail for marketing professionals. Modeling customer preferences and predicting customer behavior are therefore important use cases for AI and should be of great interest to marketing professionals. Recently I tried working on the marketing use case published on the IBM Watson blog https://www.ibm.com/communities/analytics/watson-analytics-blog/predictive-insights-in-the-telco-customer-churn-data-set/ with my preferred Machine Learning tool TFLearn (http://tflearn.org/ ) and I got good results. I would like to encourage all marketing professionals to try cracking this use case on their own. This will motivate you to start thinking of Machine Learning use cases that can enable you to reach out to your customers in a more personalized manner.
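For readers who want a starting point, here is a minimal sketch of a churn classifier in TFLearn, the tool mentioned above. The feature count and the randomly generated arrays are placeholders; the real Telco churn dataset needs loading and categorical encoding first.

import numpy as np
import tflearn

# Placeholder data standing in for the preprocessed Telco churn set:
# 19 numeric features, one-hot labels (churned / retained)
X = np.random.rand(1000, 19).astype(np.float32)
Y = np.eye(2)[np.random.randint(0, 2, 1000)]

# A small fully connected network for binary churn prediction
net = tflearn.input_data(shape=[None, 19])
net = tflearn.fully_connected(net, 32, activation='relu')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=10, batch_size=32, validation_set=0.1, show_metric=True)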
Artificial Intelligence use case for marketing professionals
0
artificial-intelligence-use-case-for-marketing-professionals-1ce0625c8bcb
2018-04-01
2018-04-01 07:11:54
https://medium.com/s/story/artificial-intelligence-use-case-for-marketing-professionals-1ce0625c8bcb
false
174
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Sudhir Gupta
null
db575b06f919
sudhir.g
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-14
2018-08-14 08:15:55
2018-08-15
2018-08-15 04:14:59
9
false
zh-Hant
2018-08-15
2018-08-15 04:15:18
0
1ce136403f81
1.426415
3
0
0
As artificial intelligence develops ever faster, many people feel anxious about future job opportunities. On 2018/8/9, Kai-Fu Lee came to Taiwan to speak at the invitation of Global Views / Commonwealth Publishing, and the content of his talk came from his new book 《AI 新世界》. Drawing on his 35 years of experience in AI, investing and entrepreneurship, the book shares how he sees China and Silicon Valley in the future AI…
2
Reflections on Kai-Fu Lee's 《AI新世界》 (AI Superpowers) Forum As artificial intelligence develops ever faster, many people feel anxious about future job opportunities. On 2018/8/9, Kai-Fu Lee came to Taiwan to speak at the invitation of Global Views / Commonwealth Publishing, and the content of his talk came from his new book 《AI新世界》. Drawing on 35 years of experience in AI, investing and entrepreneurship, the book shares how he sees the roles China and Silicon Valley will play in the future AI world, and offers predictions for AI development over the next 15 years. In the past decade AI has had one crucial technical breakthrough: "deep learning". What exactly is artificial intelligence? What is machine learning? What is deep learning? Simply put, deep learning is one kind of machine learning algorithm, and machine learning is one method within the field of artificial intelligence. Kai-Fu Lee believes that developing AI requires five preconditions: massive amounts of data; objective, precise natural labeling tied to corresponding actions with economic value; a single domain, since you must focus on a vertical field and crossing domains is hard; enormous computing power; and top scientists, though this is already starting to change as we shift from an invention-driven field to an application-driven one. In the invention-driven era, the United States led the world as the AI superpower: almost all the Turing Award winners in AI and the inventors of deep learning came from the US. But in the past 10 years the universe has changed: the rise of the Chinese market has split the Silicon Valley-centric startup model into parallel Chinese and American universes. Why could China develop its own startup model in just ten years? Five advantages: 1) Chinese product innovation has begun to lead the world (together with the US); Chinese entrepreneurs have moved from imitation to refinement to innovation, and some innovative products have even spread worldwide. 2) A brutal market has forged world-class entrepreneurs; Chinese founders are like gladiators in the Roman Colosseum, strong at fundraising, building high competitive moats, hungry for success and unafraid of hardship. Meituan's founder Wang Xing is the most typical example: with a new business model he beat every group-buying company and became the giant of mainland China's food services. 3) Chinese AI capital leads the world in early-stage investment. 4) As AI enters its application phase, China will be the biggest beneficiary. 5) The Chinese government strongly promotes AI, building new cities and new highways for driverless vehicles. As AI develops it will replace a great deal of human work, but Lee stresses that AI still has weaknesses: it has no creativity and no love, so we should put more of our energy into the things that make us more creative. He divides types of work into four quadrants along X and Y axes and shows what roles humans and AI can play in the future. Human care and warmth are irreplaceable. In the AI era, Lee predicts that within 15 years 50% of jobs will be replaced and many jobs will disappear, so what can the next generation of young people do? The amount of repetitive work will clearly shrink, but it will not all disappear, because doing anything at the very top level is still very difficult. Machines can record match results, produce quarterly financial reports and translate between languages, but writing an in-depth article, interpreting for a head of state or delivering a professional keynote are still hard for machines. So when the next generation faces these questions, they should simply face reality: if you feel a profession is your calling, go and do it, because the very best people will always have room to survive. Lee also advises young people to cultivate four abilities: do what you love, learn how to learn by yourself, watch what AI can do, and strengthen soft skills (EQ). The world changes too fast; rather than worrying that AI will take human jobs, young people should recognize that some abilities cannot be replaced. As long as you keep up with the times, keep learning, and have high enough EQ, you need not fear your knowledge becoming outdated. "The best industries will keep changing; there is no need to fear the future too much." 2. Taiwan's data is neither American-scale nor Chinese-scale, and Taiwanese founders are less like wolves and more like huskies. Taiwanese government policy keeps changing with cabinet reshuffles after party turnovers, and Taiwan lacks AI scientists. Given Lee's analysis of the different AI models the US and mainland China will develop within and beyond 5 years across the four waves, what role can Taiwan play in the AI era? Encouraging entrepreneurship is very important. Lee reminds founders to account for differences in national conditions and market structure: we can draw lessons from others but should not blindly imitate. The US encourages disruptive innovation, and its VCs look at how a startup can change the world; mainland Chinese VCs look at how a startup creates value, finds a racetrack, runs faster than everyone else and executes more strongly. There will certainly not be enough future jobs, so what kind of entrepreneurship should Taiwan encourage? Lee thinks encouraging big-market-model startups is a huge challenge for Taiwan, because Taiwan's own market is small and Taiwanese companies struggle to extend into other markets. Looked at the other way, Taiwan's strength lies in precision manufacturing, especially chips and semiconductors. Take driverless cars: investing in autonomous driving is probably too hard for Taiwan, since it needs a big market, big policy and big money that perhaps only the US or mainland China can afford; but break autonomous driving apart and you find many chips that Taiwan is able to manufacture. The companies making those chips may be existing firms, and Lee advises them to stop working behind closed doors and apply their technology where the market is. Taiwan's second opportunity is valuable data. Taiwan cannot accumulate the world's largest datasets, but does Taiwan have data that is unique, that others lack, or that is unlike anything else in the world? Taiwan's national health insurance data, for example, may be among the world's few high-quality datasets of its kind. We could use it to analyze cancer treatments and track the survival rates of patients under each therapy; applications like this need entrepreneurs to explore them, and while they may not become the world's most valuable companies, they could become some of Taiwan's most valuable. Taiwan's third opportunity is building service industries with human warmth. Taiwan's most beautiful scenery is its people: Taiwanese hospitality has long been recognized internationally, and its service quality, customer-first spirit and sincere attitude toward visitors are among the best in the world. Beyond strengthening tourism, Lee suggests Taiwan develop high-touch services, perhaps learning from how Switzerland entered global hotel management and became a training center for warm service. Hardware and health insurance data are worth trying, but Taiwan's truly unique opportunity lies in high-touch services. 3. Once the AI society arrives, what advice does he have for Taiwan's government? Lee believes education must be brought closer to this era, vocational education above all, because many jobs taken by vocational graduates are relatively repetitive and will disappear faster, while the migration into service work will take longer. He suggests the government consider: basic services for all (rather than UBI); a social contribution stipend for caregiving, volunteering and self-improvement; universal education on how to become someone AI cannot replace; rejecting Cold War thinking (AI = electricity, not nuclear weapons); and returning to globalization, with countries learning from one another. On subsidies, Lee attributes mainland China's tech success to three factors: a pragmatic, technology-first policy (technical utilization), excellent infrastructure, and financial subsidies. He also suggests that when a new technology emerges, the government consider letting it run first and adjusting or withdrawing it if problems appear, rather than taking a conservative, fail-safe stance that delays its launch. He reminds the government not to swing to either extreme of conservatism or laissez-faire, and suggests analyzing which of mainland China's successful internet and AI policies are worth referencing or drawing inspiration from, rather than blindly imitating other countries. 4. The book describes four waves of AI: internet AI, business AI, cognitive AI and autonomous AI. Will there be a fifth wave, and where will it be? Lee is confident more waves will come. Since he first encountered the Internet in 1975, over the past 24 years he has lived through countless waves: laying cable to increase bandwidth, browsers and website building, portals, impression advertising, search engines, e-commerce, social networks, O2O; then the cloud, then the interface with AI. The rise of the mobile Internet ran all these waves over again. We have been through roughly 15 waves so far, so Lee thinks our imagination is nowhere near rich enough; AI should likewise have 15 waves, so one must firmly believe the waves will come. 5. Beyond today's enablers and outside imagination, there will be disruptions along the way, such as black swans. With the whole world bullish on AI, and governments and companies pouring resources into it, could black swans or other disruptive factors obstruct AI's development, and what is Lee's view? Lee points out that AI can make humanity more efficient, but will also bring many challenges to human society. He lists six possible disruptions: privacy; security; monopoly by big companies; data bias; wealth inequality; and jobs. He considers "security" the thorniest problem, since AI is, after all, software. Look at how many websites have been hacked in the past 20 years, the cyberwars between states, Russia's interference in US elections: all hacker behavior. Whether hackers of data, of security, of privacy, or someone turning a normal driverless car into a murder weapon, it is almost certain that such security problems will keep occurring in AI. Lee also mentions a well-known recent paper: facial recognition is already more accurate than humans, yet specially crafted images can successfully fool a facial recognition system's "eyes". Likewise, if noise is injected into the machine learning process, the system may be made to do unlawful things, and such problems will keep happening, almost certainly more frequently than "black swan events". A black swan event is an extremely hard-to-predict, unusual event that typically triggers negative chain reactions in markets or even upheaval. Warning! The AI black swan event is already taking shape.
Reflections on Kai-Fu Lee's 《AI新世界》 (AI Superpowers) Forum
52
李開復的-ai新世界-論壇心得分享-final-1ce136403f81
2018-08-15
2018-08-15 04:15:18
https://medium.com/s/story/李開復的-ai新世界-論壇心得分享-final-1ce136403f81
false
60
null
null
null
null
null
null
null
null
null
人工智慧
人工智慧
人工智慧
85
jessie liu
Trained 3D animation turned Hollywood Compositor turned UI/UX Designer. Now a lead product designer based in Taipei, Taiwan. Contact me: jessie11919@gmail.com
d38049810c2c
jessieliu8
14
71
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-24
2018-08-24 05:51:59
2018-08-24
2018-08-24 06:41:32
2
false
en
2018-08-24
2018-08-24 06:41:32
23
1ce1ec595de4
3.205975
0
0
0
There are many ways you train machines every day. Every time you mark something as spam or send something from spam to your inbox, you are…
3
Spam? JK Rowling? — Day 2 #100DaysOfMLCode Photo by Pau Casals on Unsplash There are many ways you train machines every day. Every time you mark something as spam or send something from spam to your inbox, you are training a machine learning (ML) algorithm how to classify e-mails. One classification method is called Naive Bayes. This method is based on Bayes' theorem and is the first machine learning algorithm we will look at. Prior First the algorithm makes a hypothesis about something. In the case of our e-mail, it guesses whether the e-mail is spam or ham (not spam). It makes this guess without knowing the words of the e-mail. All you know is that your inbox has one more item. It might say there are equal odds you will get spam vs ham. This means the probability of getting a spam e-mail, prior to getting any evidence (reading text), is .5. This is written as P(S) = .5. Likelihood The algorithm then looks at the content of the e-mail. To make this example EXTREMELY simple (for both our sakes; I'm new to this too), let's say every e-mail can only contain some combination of three words — "deal", "meeting", and "Rolex". You and many other users have flagged spam e-mail in the past, so the algorithm knows the frequency, or likelihood, of these words in a spam e-mail and in a ham e-mail. Let's say the e-mail only says "Rolex." The likelihood of the word "Rolex" appearing, given that an e-mail is spam, is .7. This is written as P(R|S). Normalization There is still a possibility that this is a ham e-mail. To normalize the result, all probable outcomes using the evidence that "Rolex" is in the e-mail are calculated. This is the probability of spam's prior times the likelihood of "Rolex" given spam, plus the probability of ham's prior times the likelihood of "Rolex" given ham. Posterior Lastly, the algorithm calculates its conclusion given the evidence. This is shown in the image below. The algorithm calculated that there is an 87.5% chance this e-mail is spam. If the algorithm had guessed the e-mail was ham, the posterior would have been 12.5%. The denominator (normalization) would have been the same; the numerator would have been .5 times .1. Why Naive? This algorithm looks at words individually. If the e-mail contained "Rolex Deal" it would have the same posterior as an e-mail containing "Deal Rolex". Word order does not matter. To look at more than one word, the algorithm multiplies the likelihoods together. For spam it would be P(S) x P(R|S) x P(D|S) in the numerator, and P(S) x P(R|S) x P(D|S) + P(H) x P(R|H) x P(D|H) as the normalization. JK Rowling This algorithm can also guess at the author of a document or novel. JK Rowling wrote a book using a pseudonym. A Naive Bayes algorithm compared her previous writings with this new book and correctly guessed she had written it. Most of the above example I learned from the Udacity course. I am also reading "Machine Learning" by Stephen Marsland, and he says: "This is Bayes' rule. If you don't already know it, learn it: it is the most important equation in machine learning." I haven't finished this chapter yet. More for tomorrow. Thanks for learning with me!
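The Rolex example above can be checked in a few lines of Python, using only the numbers from the post (P(S) = .5, P(R|S) = .7, P(R|H) = .1):

# Priors and likelihoods from the example above
p_spam, p_ham = 0.5, 0.5
p_rolex_given_spam = 0.7
p_rolex_given_ham = 0.1

# Posterior P(spam | "Rolex") via Bayes' rule
numerator = p_spam * p_rolex_given_spam
normalization = numerator + p_ham * p_rolex_given_ham
print(numerator / normalization)  # 0.875, the 87.5% from the post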
My Learning Resources: Udacity — Intro to Machine Learning, Sebastian and Katie Coursera — Machine Learning, Andrew Ng Machine Learning, Stephen Marsland (2015) You Are What You Stream By Christine Hung from Spotify- https://youtu.be/OMo6yXPETbM https://www.oreilly.com/ideas/machine-learning-at-spotify-you-are-what-you-stream Big data, big quality: Data quality at Spotify — Irene Gonzálvez (Spotify) at strataconf.com 2018 Weekly wrap-up videos will be on YouTube. https://www.youtube.com/user/SciJoy Daily Videos (all same content just different platforms): IGTV — http://instagram.com/scijoy Twitter — https://twitter.com/TheSciJoy Facebook — https://www.facebook.com/TheSciJoy LinkedIn — https://www.linkedin.com/in/jacklynduff/ Audio is the same on all these platforms. It is a podcast called Learnings of a Maker: RSS — https://anchor.fm/s/557c9e8/podcast/rss Anchor — https://anchor.fm/learnings-of-a-maker Apple Podcast — https://itunes.apple.com/us/podcast/learnings-of-a-maker/id1414916236 Google Podcast — https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy81NTdjOWU4L3BvZGNhc3QvcnNz Spotify — https://open.spotify.com/show/4AnBX33erDhqXzrMWY8yIh Breaker — https://www.breaker.audio/learnings-of-a-maker Overcast — https://overcast.fm/itunes1414916236/learnings-of-a-maker Pocket Cast — https://pca.st/uRk6 RadioPublic — https://play.radiopublic.com/learnings-of-a-maker-WzO35N Stitcher — https://www.stitcher.com/podcast/anchor-podcasts/learnings-of-a-maker You can add this to your Alexa Flash Briefing — https://www.amazon.com/gp/help/customer/display.html?nodeId=201601880
Spam? JK Rowling? — Day 2 #100DaysOfMLCode
0
spam-jk-rowling-day-2-100daysofmlcode-1ce1ec595de4
2018-08-24
2018-08-24 06:41:33
https://medium.com/s/story/spam-jk-rowling-day-2-100daysofmlcode-1ce1ec595de4
false
748
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
SciJoy
null
b65620178c67
SciJoy
3
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-28
2018-09-28 07:36:06
2018-05-16
2018-05-16 11:33:00
8
false
en
2018-09-28
2018-09-28 07:38:43
1
1ce25b7723f2
3.571069
0
0
0
There's a new breed of companies on the rise whose businesses are based solely on digital platforms.
5
How Digital Platforms will Threaten your Business There's a new breed of companies on the rise whose businesses are based solely on digital platforms. Companies are building digital platforms and using them to bring communities together; for example: Amazon Amazon has a platform that brings buyers and sellers together. Buyers buy products and write reviews of the products they've purchased, which in turn provides value to other buyers. So new buyers flock to the platform, and their reviews in turn bring more sellers. In this community both buyers and sellers find value, and as a result the network grows exponentially. amazon and their network of buyers and sellers Facebook Facebook also has a digital platform for people and their friends to share information with each other; advertisers are now attracted to the platform because of the value it brings them. Facebook: a social space for sharing information with friends and family and hosting business promotions LinkedIn LinkedIn provides a digital platform where its members can establish strong networks and showcase their professional side. Because of this, companies that want to hire talent flock to the platform. LinkedIn: a network of potential employees via a digital CV Netflix Netflix is a platform where paid subscribers watch movies and get movie recommendations based on what other people with similar interests watch. Here the platform matches its subscribers with movies, and subscribers rate those movies, which helps other subscribers. That's valuable! Netflix: a new form of bespoke media consumption Some digital platforms, however, haven't been so successful… Monster Monster.com brought employers and job seekers together but ended up with so much junk that it was difficult for either group to sift through the content to find what they wanted. The platform did not have any intelligence, and so it lost its mojo! Monster.com: a complicated digital platform A good platform provides a lot more value than just membership. Over time a platform accumulates a lot of data. If such a platform can figure out what data to deliver, and when, then it has a secret sauce! Companies are now using machine learning and artificial intelligence to do just that. If AI is implemented well, the network grows and becomes even more valuable. It is this relevance that keeps members engaged and coming back. Uber, a Golden Standard Uber provides a platform that brings drivers and riders together, not only for the ride but also for the ability to use maps, to avoid carrying cash, and to rate drivers and riders. Uber is now in a position to attract insurance companies to offer bids on insurance for each ride. This makes sense once one realizes that Uber has information about its riders and even more information about its drivers, and can therefore provide automatic insurance coverage for each ride, all facilitated by machine learning. Companies like Allstate and other insurers can of course offer ride-based insurance, but there's the catch: they'll now have to compete with each other based on the data that Uber owns. This gives Uber a huge competitive advantage. This scenario should scare insurance companies, because they are now at the mercy of Uber, which has built its platform into a competitive advantage. Uber is only going to keep investing in its platform. Given this, how does an insurance company counteract this platform network effect? One option is for insurance companies to each build a platform of their own.
Another option is to be ready to compete in the platform network ecosystem. These are essentially two vastly different strategies, and depending on which strategy a company chooses, it will have to build the capabilities to deliver on it. However, digital platforms like Uber are much further along in their digital journey, and so are closer to adopting new technologies such as blockchain, further outpacing the companies that have yet to even start their digital journey. Learn what works from Uber, Netflix and Amazon, and remember what doesn't from failed digital platforms such as monster.com. Hope this helps… Originally published at blackboxlabs.github.io on May 16, 2018.
How Digital Platforms will Threaten your Business
0
how-digital-platforms-will-threaten-your-business-1ce25b7723f2
2018-09-28
2018-09-28 07:38:43
https://medium.com/s/story/how-digital-platforms-will-threaten-your-business-1ce25b7723f2
false
646
null
null
null
null
null
null
null
null
null
Startup
startup
Startup
331,914
Chukwuka Orefo
Research Consultant & Technical Adviser in Health & Technology
55758e569e6f
chukwuka.orefo.x45
0
21
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-19
2018-02-19 11:43:45
2018-02-19
2018-02-19 12:52:06
1
false
en
2018-02-19
2018-02-19 12:52:06
2
1ce2f88375a8
0.954717
1
0
0
Today artificial intelligence is reaching the next level in the core field of health care. What do you think? Artificial intelligence is…
5
The ‘Big Pharma Tilt’ towards Artificial Intelligence Today artificial intelligence is reaching the next level in the core field of health care. What do you think: will artificial intelligence harm human life? The world's leading drug companies are now trying to utilize the power of artificial intelligence in a way that makes it easier for human beings to survive for a longer period of time. They are using AI as a source of enormous power, turning to it to improve their business and to find new drugs and medicines. Big Pharma companies aim to harness modern technology and machine learning systems to better predict how molecules will behave and how likely they are to make a successful drug. This would save them not only time but also money on unnecessary tests. How can we define "intelligence"? Generally it is past analytics, which a machine can use to analyse and develop its own intelligence to predict the future, producing more reliable results because they rest on proven facts. Human intelligence is based upon analogical thought and statistics, while an AI's is built upon the datasets it has been recording over past days. Read More
The ‘Big Pharma Tilt’ towards Artificial Intelligence
12
the-big-pharma-tilt-towards-artificial-intelligence-1ce2f88375a8
2018-05-08
2018-05-08 09:52:02
https://medium.com/s/story/the-big-pharma-tilt-towards-artificial-intelligence-1ce2f88375a8
false
200
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tech Hunt
#Technology #Freak - BigData, Digital Transformation, Cyber Security, Cloud Computing, Internet of Things
33f515b691ce
techhunt2195
388
740
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-15
2017-11-15 18:15:10
2017-11-16
2017-11-16 13:11:57
1
false
en
2017-11-16
2017-11-16 13:11:57
2
1ce4a955fc6c
2.992453
1
0
0
Two months on the road: United Nations General Assembly, Techfugees and Web Summit. From global challenges that still appear impossible to…
5
Web Summit inspirations: how #AI can bring opportunities and not take away jobs… just yet Two months on the road: United Nations General Assembly, Techfugees and Web Summit. From global challenges that still appear impossible to grasp, to the massive fear of a soon 'fully automated world' thanks to artificial intelligence. Indeed, Web Summit was a transformative event for me as a former UN worker: Stephen Hawking's warning, Alexa ordering your coffee, Uber's flying cars, hundreds of fintech solutions to ease how we do transactions, and talks on what to do when robots do our work for us. Now I am back to reality, back to Lebanon, where: financial inclusion has yet to reach even 50%; there is no public transport of any sort; there is a garbage crisis; 80% of people work in the informal market, mostly in sales or trade, and becoming a services-oriented economy is still a dream; sectarian politics appear medieval from the outside; and of 2 million refugees, less than 1% have a work permit. Uber's flying cars and Beirut's urban scenes This is when you realize how much inequality we still have in this world, and why having checks and balances between these two extremes is necessary. We are still far away from a perfect world. So how can we bring the digital revolution to where it is needed most? This is what I learnt at the Web Summit… We are only at the beginning of the data revolution. On average each person will generate 1.5 GB of data every day by 2020. Now it is only 600–700 MB daily, so it will double. That means more and more data will be digitized and can be used for a variety of automated processes, including autonomous driving. For reference, powering an autonomous car's ability to drive requires 4000 GB of data. Also for reference, according to IDC (International Data Corporation), only 15% of businesses in the Middle East have an online presence. (source: Intel) There is a big market need for data digitization and labeling that is yet to be fulfilled! Some would argue that everything is now open-sourced: more datasets are being labeled and stored at Kaggle, and there are repositories that make use of this data, such as IBM's cognitive business platform Watson, while AWS startups use Alexa and image recognition technology to power AI businesses. Google Cloud just recently released Open Images, a dataset of ~9 million URLs to images that have been annotated with image-level labels and bounding boxes spanning thousands of classes. However, when at the IBM workshop I asked a little Raspberry Pi TJBot "what do you see?", it called my electric crimson jacket 5 different colors… Are we really there yet to use such tech in the design of AI-driven solutions? For practical applications of such technology, a lot of data, i.e. images or text, still needs to be fed in. In short, there is a need for a custom language model when using tech like IBM Watson in the specific domain of a company or a startup. There is a big market need for training data to make practical use of it. Lastly, while English is a unifying language for us all, how do we bring such technologies to the parts of the world where people speak other languages? Let alone dialects? How about colloquial language?
My Siri would not understand FD as Financial District…, yet in the US we have the most advanced text and language recognition tech, while other places are still not there… The majority of AI startups are concentrated in the US, with growing traction in Europe and Russia, but the rest of the world is still an untapped opportunity. All these unicorns, innovations, and ideas will be tailored and replicated everywhere. There will be an Arab-market Instacart, and some entrepreneurs will take up the challenge of teaching an autonomous car to drive in the luna-park-on-the-road-looking Istanbul and Beirut, and of teaching chatbots Fusha or colloquial Arabic. More human intelligence in training such data is needed: the "human in the loop", as CrowdFlower calls it. And that is a job opportunity for many. This is why, more than ever, I am inspired to work on a solution to bring remote work opportunities to the most vulnerable — displaced, unemployed and excluded — refugee youth in the Middle East. https://www.taqadam.org/ Cheers, K
Web Summit inspirations: how #AI can bring opportunities and not take away jobs… just yet
1
web-summit-inspirations-how-ai-can-bring-opportunities-and-not-take-away-jobs-just-yet-1ce4a955fc6c
2018-03-20
2018-03-20 17:39:17
https://medium.com/s/story/web-summit-inspirations-how-ai-can-bring-opportunities-and-not-take-away-jobs-just-yet-1ce4a955fc6c
false
740
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Karina Grosheva
Private Sector #development, #Social #innovation, former @UNDP turned #impact #tech #entrepreneur. Focus on post-conflict #futureofwork with #AI #MENA #refugees
2598c129b74a
Movetheglobe
186
298
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-11
2018-06-11 01:20:17
2018-06-19
2018-06-19 23:46:27
1
false
en
2018-06-21
2018-06-21 18:05:34
10
1ce54f380413
1.460377
6
0
0
In machine learning, an ensemble model combines multiple individual models so that it performs better…
3
Interpreting Ensemble Models In machine learning, an ensemble model is one that combines multiple individual models so that it performs better than each of its component parts. The most famous ensemble method is probably Random Forest, which is an agglomerate of randomly created Decision Trees. A Random Forest prediction is then the average of the predictions made by its Decision Trees. An ensemble combining multiple individual models. (image rights: https://bit.ly/2K7yEE2) Having said that, much has been said about black box machine learning models, that is, models that might (seem to) work well but that give their creators a hard time trying to explain their inner workings with precision. Ensemble models definitely fall into that category, since they are composed of many potentially complex individual models. It is not uncommon for Data Scientists to resort to algorithms that are straightforward to interpret, like linear models, even if they could train a better-performing model using more complex, black-boxy algorithms. Having explainable models is especially important for high-risk domains like health and security. Luckily, recent research has shown that it is possible to interpret what were previously thought to be truly black-box models. While some techniques like treeinterpreter are suited to a specific algorithm (in that case, Random Forest), others like LIME can explain classifiers built with any machine learning algorithm. SHAP goes even further and provides explanations both for classifiers and regressors. Explainable AI is one of the hottest topics and an active research area. Being able to explain even the most complex ensemble model is nothing short of amazing. If you speak Portuguese, I would be happy to share a talk I recently gave on Explainable Machine Learning (video, slides). If you are in Brazil in July, come meet me at TDC Sao Paulo, where I will give an updated version of that talk. Otherwise, don't miss out on the links in this post! Do you have more good stuff to share on Explainable AI? Please post it in the comments!
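As a concrete illustration of the SHAP approach mentioned above, here is a minimal sketch explaining a Random Forest regressor on a toy dataset (assumes the shap and scikit-learn packages; the dataset choice is purely illustrative, not from the original post).

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a small Random Forest on a toy regression dataset
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes per-feature contributions to each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])  # shape: (5, n_features)

# For the first sample, how much each feature pushed the prediction
print(dict(zip(data.feature_names, shap_values[0].round(2))))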
Interpreting Ensemble Models
58
interpreting-ensemble-models-1ce54f380413
2018-06-21
2018-06-21 18:05:34
https://medium.com/s/story/interpreting-ensemble-models-1ce54f380413
false
334
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gabriel Cypriano
Data Scientist @CreditasBR. More at http://gabrielcs.me
8fdfba044294
gabrielcs
50
90
20,181,104
null
null
null
null
null
null
0
null
0
cc02b7244ed9
2018-04-10
2018-04-10 06:04:08
2018-04-10
2018-04-10 06:06:44
0
false
en
2018-04-10
2018-04-10 06:06:44
10
1ce5c08ee636
2.011321
1
0
0
RETAIL
5
Tech & Telecom news — Apr 10, 2018 RETAIL Technology is now prepared to invade the clothing retail supply chain and to introduce extreme personalisation features. New software created by a Hong Kong startup (Bespokify) would enable customers globally to order bespoke clothes that are then manufactured in China and delivered within 2 weeks of purchase (Story) PRODUCTS & SERVICES Applications Even if the smartphone market is clearly decelerating globally, companies like Apple still have an attractive growth option in exploiting app stores on top of the devices. Consumer spending in apps grew +22% yoy in 1Q18, to $18.4bn, according to a new survey, and iOS clearly leads vs. Android (almost 2/3 of total spend) (Story) Augmented Reality Magic Leap, the startup supposed to be developing the most advanced Augmented Reality system, has left many people skeptical over its lack of commitment to a massive commercial launch. Still, this post argues that the intellectual property they’ve developed will by itself create an attractive business opportunity (Story) HARDWARE ENABLERS Networks Private cellular networks on licensed spectrum are starting to proliferate in the US, with companies aiming to use them for specific IoT applications (e.g. to control critical processes) that can’t or shouldn’t be run on public networks. Qualcomm expects this “private LTE network” market to reach $17bn by 2022 (Story) SOFTWARE ENABLERS Artificial Intelligence SenseTime, a Chinese startup specialised in image recognition (and face identification), just raised $600m from Alibaba and others, at a valuation of more than $3bn, effectively becoming the world’s most valuable AI startup and an example of China’s achievements in this field, which the country aspires to lead by 2030 (Story) Artificial Intelligence tools for developers are increasingly relevant to attracting them to public cloud offers. And Google (the strongest AI player) is using them to catch up with Amazon AWS and Microsoft Azure. The company just launched a powerful speech-to-text API, including 4 different Machine Learning models (Story) Privacy Facebook’s Mark Zuckerberg will testify this week at the US Senate on the Cambridge Analytica scandal, and in his prepared testimony, now released by the House committee, he apologises for not having done “enough to prevent (the company’s) tools from being used for harm”, and says it all has been “his mistake” (Story) In this context, US analysts are speculating on how to regulate social apps. Proposals include (1) data-privacy legislation (similar to Europe’s GDPR), (2) holding apps (partially) responsible for users’ bad behaviour, (3) considering antitrust rules, (4) pushing for transparency in data management, (5) enforcing data portability (Story) Simultaneously, other voices are focusing on specific solutions to the Cambridge Analytica issue, claiming that Facebook’s recent actions to ensure deletion of the leaked data are not enough, as the algorithms which were trained with these data are still active and might be used again to manipulate public opinion (Story) Google is also starting to come under pressure, as a consequence of the sudden increase in privacy concerns that the Facebook scandal has triggered. Child protection groups have filed a complaint asking US regulators to sanction YouTube over its collection / monetisation of data on children using the platform (Story) Subscribe at https://www.getrevue.co/profile/winwood66
Tech & Telecom news — Apr 10, 2018
1
tech-telecom-news-apr-10-2018-1ce5c08ee636
2018-04-11
2018-04-11 10:12:14
https://medium.com/s/story/tech-telecom-news-apr-10-2018-1ce5c08ee636
false
533
The most interesting news in technology and telecoms, every day
null
null
null
Tech / Telecom News
ripkirby65@gmail.com
tech-telecom-news
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
winwood66
Augmented Reality
augmented-reality
Augmented Reality
13,305
C Gavilanes
food, football and tech / ripkirby65@gmail.com
a1bb7d576c0f
winwood66
605
92
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-10
2018-09-10 18:53:59
2018-09-10
2018-09-10 19:11:44
3
false
en
2018-09-19
2018-09-19 22:12:16
11
1ce5e71f65b8
1.04434
39
0
0
Vol. 2 of our Airdrop and Bounty campaign has kicked off. For this phase, we have prepared some major upgrades for our members!
5
B&A Campaign! Let’s ROCK⚡️ Vol. 2 of our Airdrop and Bounty campaign has kicked off. For this phase, we have prepared some major upgrades for our members! 8,000,000 XDMC will be allocated among our participants, in order to spread MPCX’s word across the crypto ecosystem. We invite everyone to take part in our pioneering and democratic bounty program and get a chance to earn some precious XDMC tokens in our first ICO round. Below you may find the links for our campaigns: Bounty Campaign Airdrop Campaign Let us disrupt financial history with the MPCX Platform. United we stand. To learn more about MPCX and the XDMC Token, readers can visit our website https://mpcx.co, read the MPCX Whitepaper or simply join us on our social media channels: Facebook, Twitter, Medium, Instagram, YouTube, Telegram and LinkedIn. Stay tuned! Yours, the MPCX team!
B&A Campaign! Let’s ROCK⚡️
1,156
b-a-campaign-lets-rock-️-1ce5e71f65b8
2018-09-19
2018-09-19 22:12:16
https://medium.com/s/story/b-a-campaign-lets-rock-️-1ce5e71f65b8
false
131
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
MPCX Platform
The blockchain driven decentralized financial services platform
734181d9250d
mpcxplatform
2,505
40
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-10
2018-06-10 13:48:06
2018-06-10
2018-06-10 16:50:48
2
false
en
2018-06-11
2018-06-11 14:37:06
4
1ce67375b682
3.122956
3
0
0
Yesterday, the Amsterdam Film Institute created a short film about Sean Connery. It was published around 1:00 in the morning, just twelve…
5
Only women appear in this movie about Sean Connery Yesterday, the Amsterdam Film Institute created a short film about Sean Connery. It was published around 1:00 in the morning, just twelve hours after the announcement was made on Twitter that Eunice Gayson had passed away at the age of 90. She was the first ever Bond girl, playing opposite Sean Connery. That’s the link. Notably missing from the film about Sean Connery was, well, Sean Connery himself. He never makes an appearance. Neither does James Bond, nor any Bond girls, although Eunice Gayson is mentioned once. And any wider associations you might make with Sean Connery — gray beards, Celebrity Jeopardy, Her Majeshty’s Shecret Shervice — are also absent. Is this really a movie about Sean Connery? The description couldn’t be more explicit: This video is about Sean Connery. Stills from “2018–06–10.004-sean_connery.mp4” How could this be? We’re shown a woman in a fur coat, then an unnerving lady who is laughing and baring her teeth. I’d say one looks like a movie star, which might put her vaguely in the realm of Sean Connery. Am I going in the right direction with this? Or maybe it’s a commentary on aging? Or the objectified role of women in film? A tragic tale about U.K. dentistry, perhaps? Hard to say, but this is worth mentioning: this short film is also missing a director. It was created by a robot, and he’s producing several films every day. Meet Jan Bot. Despite being powered by the same deep learning algorithms that we’re told will cause the singularity apocalypse and perhaps cure cancer, this is not the kind of artificial intelligence application that you want driving your car, offering investment advice, or translating sloppy phone taps into relatively decent English. Nor is Jan Bot a good source for Sunday night movie recommendations, despite its association with Amsterdam’s respected film institution. Jan Bot is a different kind of experiment. The software generates short movies based on fragments of found film, following the trending social media topics of any given day. The co-father of Jan Bot, Pablo Núñez Palma, puts it like this: Using some of the trendiest AI tools available for the translation of images into words and words into semantic nodes, we have programmed Jan Bot to create meaningful connections between two completely unrelated items: a vast collection of old and unidentified film fragments, and trending topics from today. It’s like a game of Taboo. You have to talk about a topic (the word on your game card), but that word and any obviously related words are forbidden. As explicitly as the film is “about Sean Connery”, the source fragments for these films are explicitly disconnected from the subject in the ways we would traditionally expect (for example, that Sean Connery would appear in a film about Sean Connery). In Taboo, the workarounds are what make it fun. What’s the effect for Jan Bot? Since I know the film doesn’t appear out of nowhere, I find myself tracing back connections using an interpretive exercise that feels a bit like reverse engineering. I’m given a number of tags — girl, boy, horror, face, disguise, actress, actor — but not the reason why they were selected. I assume they are pulled from associated articles on the trending topic, which are also listed. I know that Jan Bot is working with a limited pool of fragments to build the film, so he doesn’t have the luxury of being a symbolic perfectionist. I’m not expecting the fragments to match the tags exactly, which leaves things up to the imagination.
But maybe it’s a personal thing: for a film to be interesting, there needs to be some kind of character development. In this case, that would have to be Jan Bot himself. As a viewer, I find myself trying to understand its algorithmic thinking, not its plot. The subject material is not “Sean Connery” as I commonly know and recognize him, but “Sean Connery” as a node within networks: social networks (subject material), language networks (to expand the subject outside of its time period), and image recognition networks (to select the fragments of the film). These networks look like something, and these films are generated every day for us to watch. It’s quite a nice exploration of computer vision by Bram Loogman and Pablo Núñez Palma. Kudos, guys.
Only women appear in this movie about Sean Connery
31
only-women-appear-in-this-movie-about-sean-connery-1ce67375b682
2018-06-18
2018-06-18 14:34:00
https://medium.com/s/story/only-women-appear-in-this-movie-about-sean-connery-1ce67375b682
false
726
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Story by Numbers
Digital communication for cultural institutions and their collections. We build websites, apps and site-specific installations.
750e657a5e1d
storybynumbers
12
39
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 11:40:24
2018-01-29
2018-01-29 13:35:42
1
false
en
2018-01-29
2018-01-29 13:35:42
3
1ceaf726f6b2
7.524528
4
0
0
In today’s post, we present one of our partners, Adam Votava, the founder of aLook Analytics, a Czech-based company providing…
4
Interview with Adam Votava, aLook Analytics, Data Science Boutique In today’s post, we present one of our partners, Adam Votava, the founder of aLook Analytics, a Czech-based company providing tailor-made data science solutions. We had the opportunity to have a chat with Adam during the Hackathon organised by Keboola in January 2018. Hi Adam, can you please introduce yourself and your company? My name is Adam, and I worked for several years in retail banking, where I started as an intern and ended up as Head of the Analytics and Data Management team. A few years ago I decided to quit my job and move to Japan. It was there that I founded aLook Analytics. At the beginning it was a one-man company working with different customers on advanced analytics. It turned out to be a business for more people, so I invited my friends and family to join me. We have now been in the market for more than 3 years. We specialize in building data solutions. It is not a product in itself; we build tailor-made solutions for our clients. We work across industries because we don’t want to limit ourselves to a specific vertical. We believe in useful synergies across different industries. We usually find something useful in one case and then apply it to another industry. In this way we find solutions that we could not have thought of beforehand. We work for different clients, from banking to ecommerce (which typically have big amounts of data but don’t have internal teams to take care of the analysis). We also work for startups that aim to build products with built-in analytics, which they outsource to us. This year we would like to enter the manufacturing industry as well. Why did you decide to have Keboola as your partner? For many reasons. First of all, it is the perfect environment for analysts to work together. Whenever we use Keboola to work for clients, it is much easier to deliver the projects. We do the data processing and cleaning in Keboola. If the clients have already built their reports and use Keboola but are interested in what’s happening next, aLook comes into the picture. It is useful because, if they already use Keboola, they just give us the credentials and we have all the data that we need for the projects to be delivered. We use R and Python for building our data science models, and we do this in Keboola as well. The other reason we collaborate with Keboola is that we do not deal directly with our clients. Our clients typically come from our partners, because we are a great team of data scientists but not sales people. We don’t do any business development within our team. So, it is essential for us to have great relationships with our partners. The way customers come to us is via referrals from our partners or via satisfied clients. We have two or three other partners similar to Keboola (providing platforms), or ones which create dashboards for their clients and call us when the client requires advanced predictive analytics. We mainly work with companies in Europe. We always work long-distance with clients. It is easier to get clients in Europe than anywhere else, mainly because of our relationships. But sometimes we work with offices all over the world. For example, we are now working on a project that involves working with offices in South Africa. How and when did the cooperation with Keboola start? It was my last week at the bank when I got the opportunity to meet the guys from Keboola.
Peter showed me how Keboola works and introduced me to the first client. This was really successful, and we started working on many other projects. Everything is a story. Who are your customers? Do you focus on a particular industry? As mentioned before, we work across various industries. One third of our customers are retail banks, maybe because of my background as well. We work with them on building customer insight or on direct marketing optimization projects. Secondly, we typically work with ecommerce companies, which have tons of data but don’t have an internal team to work on it. We build models to help them improve their customers’ experience. We work here on similar projects, which involve direct marketing optimization or recommendation engines. As I mentioned before, our clients usually require tailor-made solutions rather than out-of-the-box ones. They would like to combine different sources and create various algorithms. Sometimes it can be very difficult to get the information from the organization, not because they don’t want to share it, but because they find it difficult to find the information themselves. Because we are very transparent and we work very closely with our clients, they trust us and collaborate as much as possible in giving us the needed information. We are a small business, and we cannot bet on long projects. We typically start with a very small project to be delivered as soon as possible, so that we can understand the potential cooperation with the client as fast as possible. We don’t build nice presentations; we prefer to focus on a short pilot of a few days. Even if it is not perfect, we prefer to start somewhere. We put the first project into production and typically go through three iterations, in about 10 man-days for projects that involve recommendation engines or churn predictions, for example. For the client it’s quite useful, because after the first iteration they can kill the project if they are not happy with the result, and they don’t spend too much time on finding out if we are a good fit for each other. We can have endless iterations and improve if we see a project with good potential. What are your customers’ needs related to data? Typically, our solutions are driven by business needs. It is not that they come with their data sets and ask us what they can do with them. If they come to us with their dataset and no clear goal, then my first question is always “what is your business need”: acquisition, retention, churn, optimization or other inefficiencies in the business. So, typically we start with this, and it is quite simple. Once you have a problem with something, then we find a solution that is data driven. These are things that are specific to each customer. Someone else might come to us asking for some basket analysis to improve product revenue, according to the business needs, as I was mentioning before. If the business is small to medium, then we are in a better position, because the solutions are discussed with the CEO or someone who has an important role in the business. They know exactly what the business needs in order to grow and where the hidden profit/cost savings are. They don’t want to invest money and time where they don’t see a direct increase in revenue. On the other side, if the company is big, the different stakeholders will want to explore the different options that the data can give them. They are aware that most of the projects will fail, but they are looking for the solution that will be successful.
They just want to use something and not lose any opportunity in the market. Big organizations will do the analytics to understand their customers in house, because it is not a good position for a bank to have someone external working on these projects. We are working with these clients on projects that are less likely to be successful, so it’s not something that translates into short-term revenue. But the projects we work on can give more return in the long term. A typical scenario involves transactional data and what can be done with it. It might also be behavioral data, and the output will be quite open and flexible. There are two phases. One is where they give us a sample of the database, encrypted and anonymised. For the prototyping phase, when we are building the algorithms and want to see how successful the model will be, we don’t need the whole data set. The clients in this case are not that protective, since the data is just a sample. When we go into implementation of the project, we can help them implement it in their infrastructure, or the client does this itself. If they are very big, then they will not let anyone external access the infrastructure. How exactly is Keboola helping your team and customers? In every aspect of our business. We get new customers thanks to Keboola. They are bringing us clients from different industries, which is never boring. The tool is very good and we can do everything we need there. We have access to the developers directly and can ask them for help when we find bugs. Whenever we need to understand the data processing, every time, we have Keboola support to help us. The customers usually already use Keboola when they reach us. We just need someone who understands the business needs and then someone who knows the data (someone from Keboola or someone from the customer’s side). Are you involved in development activities as well? Not yet. We are now focused on working on different projects, and then we will find the product. So we don’t have a product in place at the moment, just custom-made solutions, which are driving the success of aLook. There are some repetitive tasks in the process that are reusable and could make our life easier, but we don’t have a product yet, as we prefer to focus on our ongoing projects. What are your business plans and future vision at aLook? Since I quit the corporate job I am happy with our lifestyle; we can travel and still deliver the projects, and we can shape the work around our personal lives and not vice versa. This year I would like to get a better understanding of which are the best streams of revenue and to stabilize the portfolio. We are in a phase where we have proved that we have a place in the market. It is not possible to predict the future for a longer term than the business’s life (aLook is 3 years old), and every month has been different so far. It will be an evolution based on the projects we will need to work on and based, of course, on our personalities. We want to help our customers and have at the same time the freedom to do what we want to do. This is our ultimate goal. One sentence about the Hackathon It’s our first Hackathon, so I was a bit worried about what would happen. I don’t like these open questions with an undefined goal to be achieved. There is not enough time in a weekend to work on a full project, and we don’t have a big amount of data, but it is interesting to explore where the data will take us.
What I like is that we don’t work on the same projects across different teams. It is competitive, but it is interesting to get insights from different perspectives (NLP, heat mapping, etc.). This is something we really appreciate that Keboola could organize. Keboola will be hosting an event with aLook this February 2018 on Transactional Data, so don’t miss the chance and register to guarantee your spot.
Interview with Adam Votava, aLook Analytics, Data Science Boutique
11
interview-with-adam-votava-alook-analytics-data-science-boutique-1ceaf726f6b2
2018-03-02
2018-03-02 10:53:29
https://medium.com/s/story/interview-with-adam-votava-alook-analytics-data-science-boutique-1ceaf726f6b2
false
1,941
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Keboola UK
We provide cloud-based data engineering platform, helping clients #DoMoreWithData by integrating, augmenting and enriching it for analytics & data science needs
c9a3ec5a48f6
keboolauk
10
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-27
2018-09-27 05:40:42
2018-09-27
2018-09-27 07:32:12
2
false
en
2018-09-27
2018-09-27 07:32:12
9
1ceb39c174bb
3.039937
0
0
0
It is no longer a mystery that organizations are gathering customers’ digital impressions and translating them into…
3
5 Dramatic Impacts of Big Data on Education It is no longer a mystery that, in this era of big data, organizations are gathering customers’ digital impressions and translating them into customized services through cloud computing and machine learning. The transformation of data into real-world value isn’t confined to shopping; similar advances are changing the way we learn. With the adoption of technology in more schools and with pressure for more open government information, there are clearly considerable opportunities for better data collection and research in education. However, what will that look like? It’s a politically charged question, no doubt, as several states are turning to things like government-authorized test score data to measure teacher effectiveness and, in turn, retention and promotion. Love it or hate it, Big Data is changing the education system. How Is Big Data Moulding the Education Field? Schools, colleges, universities, and other educational bodies hold a lot of information related to faculty and students. This information can be analyzed to get insights that can enhance the operational effectiveness of educational organizations. Gathering and analyzing student information covers attendance, grades, test scores, and disciplinary issues. All of these, in view of changing educational requirements, can be handled through statistical research. It gives schools and districts new, significant insights into student conduct and performance. Big data paves the way for a progressive framework where students will learn in new ways. Let’s see how it will help. 1. Empowers Better Decision-Making When schools store, organize and examine volumes of data regularly, they will be in a better position to come up with learning techniques and objectives that are practicable. Their decision-making skills are strengthened when data is presented as a mix of detailed information, analytical inferences and the findings of educationalists. Using this data from various quarters, schools will be in an ideal position to improve their teaching methods and give them greater significance in education. Thus, the key is to know how to utilize Big Data. Read More: Augmented Reality as an Effective Tool in Education 2. Students’ Results Similarly, when big data is implemented in the education field, the whole educational body reaps the rewards, parents and students included. A student’s academic performance is measured through examinations and the results they produce. Every student produces a unique data trail during his or her lifetime, which can be investigated for a better understanding of the student’s conduct, in order to create the best possible learning conditions. Big data analysis can track students’ progress, for example classroom performance, favorite subjects, curricular interests, the time they take to finish an exam, and many other things in a student’s educational environment. A report can be developed that shows the interests as well as the areas of concern of a student. Read More: 5 Ways Mobile Apps Can Make Learning Fun 3. Career Prediction Further, digging deeply into the performance report of a student will help the authority understand his or her development, weaknesses and strengths.
As said before, the reports will indicate the areas in which a student is interested, and this will help determine which field he or she could pursue a profession in. If a student is enthusiastic about learning a specific subject, then that choice ought to be valued and the student encouraged to follow what they desire. Read More: How is Education Tech Evolving with Time? 4. The Mapping Concept Mapped data comes in as an important contribution to understanding the learning patterns of students. When data is mapped, it points to scenarios of creative learning, self-learning or group learning. A multitude of online learning platforms is gathering mass data about students from across the globe. It is through this mapped data that those platforms can better address the needs of students. Consequently, getting hold of information about the interests of students will be a welcome move towards customized and progressive learning. To read more please click here… You can read more articles here. To visit our website please click here…
5 Dramatic Impacts of Big Data on Education
0
5-dramatic-impacts-of-big-data-on-education-1ceb39c174bb
2018-09-27
2018-09-27 07:32:12
https://medium.com/s/story/5-dramatic-impacts-of-big-data-on-education-1ceb39c174bb
false
704
null
null
null
null
null
null
null
null
null
Technology In Education
technology-in-education
Technology In Education
63
NewGenApps
NewGenApps is an organization focused on delivering world class technology solutions,leveraging new generation technologies, cloud computing and mobile apps
305335c84885
newgenapps
54
105
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-23
2018-07-23 06:54:41
2018-07-23
2018-07-23 06:54:51
0
false
en
2018-07-23
2018-07-23 06:54:51
1
1ceb7a5f756b
0.977358
0
0
0
Artificial Intelligence is mainly a systematic discipline that deals with building machines or systems to do several…
1
How Artificial Intelligence Training is beneficial for your career? Artificial Intelligence is mainly a systematic discipline that deals with building machines or systems to perform tasks which generally require human intelligence. The Artificial Intelligence Training program is uniquely designed to help you learn the concepts of artificial intelligence such as Deep Networks, Structured Knowledge, Hacking, Machine Learning, Natural Language Processing, Recurrent Neural Networks, Artificial and Convolutional Neural Networks, Self-Organizing Maps, LDA, Dimensionality Reduction, Model Selection and Boosting. The training program will help you get the best jobs in the industry. During the training program you will: · Explain what AI is, the role it can play, and the potential benefits it can bring to your organization · Identify different types, characteristics, and uses of data in AI solutions · Identify the primary capabilities of AI and the core associated technologies needed to deliver them · Discuss the ethical implications of AI in different areas of the economy, government and society · Outline the different components required to deliver complex AI systems · Identify software which can be used to process, analyze, and extract meaning from natural language, images and numerical data to develop insights and understanding Target Audience: · Individuals aiming to be ‘Artificial Intelligence Scientists’ · Lead Analytics Managers · Information Architects · Graduates seeking to build a career in Artificial Intelligence and machine learning · Analytics professionals · Experienced professionals who want to implement Artificial Intelligence in their fields to get more insight Visit here https://www.multisoftsystems.com/artificial-intelligence/
How Artificial Intelligence Training is beneficial for your career?
0
how-artificial-intelligence-training-is-beneficial-for-your-career-1ceb7a5f756b
2018-07-23
2018-07-23 06:54:52
https://medium.com/s/story/how-artificial-intelligence-training-is-beneficial-for-your-career-1ceb7a5f756b
false
259
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Multisoftsystems
null
3551bc79d5c8
multisoftystems
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-06
2018-08-06 03:05:31
2018-08-06
2018-08-06 03:06:54
0
false
en
2018-08-06
2018-08-06 03:06:54
0
1cecc788f74
1.796226
0
0
0
Introduction to Data Science Certification Training
1
data analytics certification training in hyderabad Introduction to Data Science Certification Training ExcelR offers 160 hours of classroom training on Business Analytics / Data Science / Data Analytics. We are considered one of the best training institutes for Business Analytics in Hyderabad. “Faculty and a vast course agenda are our differentiators.” The training is conducted by alumni of premier institutions such as IIT & ISB who have extensive experience in the arena of analytics. They are considered to be among the best trainers in the industry. The topics covered as part of this Data Scientist certification program are on par with most Master of Science in Analytics (MS in Business Analytics / MS in Data Analytics) programs across the top universities of the globe. Our Business Analytics certification training course is designed by industry experts and precisely tailored for professionals who want to pursue a career as a Data Scientist in the job market. We offer a comprehensive placement program where we equip you with hands-on training on Business Analytics, resume preparation, case studies, live projects, mock interviews, etc. We do the necessary hand-holding until the participants are placed in a job in the field of analytics. What is Business Analytics / Data Analytics / Data Science? Business Analytics (or Data Analytics, or Data Science) is an extremely high-in-demand profession which requires a professional to possess sound knowledge of analysing data in all dimensions and uncovering hidden truths, coupled with the logic and domain knowledge to impact the top line (increase revenue) and the bottom line (increase profit). ExcelR’s Data Science curriculum is meticulously designed and delivered to match industry needs and is considered to be among the best in the industry. Also, Google Trends shows an upward trajectory, with an exponential increase in search volume like never seen before. This backs the statements made by Harvard Business Review and the big business research firms that Business Analytics will be the most sought-after profession the world has ever witnessed. What is a Data Scientist? Or rather, who is a Data Scientist? Data is to a Data Scientist what oxygen is to human beings. This is also a profession where the statistically adroit work on data, from data collection to data cleansing to data mining to statistical analysis, and right through forecasting, predictive modelling and finally data optimization. A Data Scientist does not provide just any solution; they provide the most optimized solution out of the many available. Gartner predicted in 2012 that Data Scientist & Business Analytics jobs would increase to the tune of millions by the end of 2015. This is very evident from the rise in job opportunities on various job portals. As a Data Scientist or an aspirant, you should not just believe us. Go research on your own and confirm the facts and figures.
data analytics certification training in hyderabad
0
data-analytics-certification-training-in-hyderabad-1cecc788f74
2018-08-06
2018-08-06 03:06:55
https://medium.com/s/story/data-analytics-certification-training-in-hyderabad-1cecc788f74
false
476
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Vinith Nalam
null
e1962cdd3a58
vinithnalam
0
1
20,181,104
null
null
null
null
null
null
0
null
0
661161fab0d0
2018-08-07
2018-08-07 13:26:55
2018-08-07
2018-08-07 13:29:37
1
false
en
2018-08-07
2018-08-07 16:38:53
1
1cedcfd76bf2
3.498113
2
0
0
Note: This story has been published in my publication “A world without waste”, and is gratefully republished here.
5
Title, or Our Waste Future Note: This story has been published in my publication “A world without waste”, and is gratefully republished here. I’m not sure how to title this piece, so I’ll write under the vast, blue skies of “Title”. I’m not sure what to write, either. But there is the kernel of an idea bubbling that wants out. Hopefully not the Alien bursting from my chest, but instead a little puff of wonderment. I have a feeling that tech can do a whole lot more when it comes to creating a world without waste. Not just the old trick of virtualising everything. You know, that trick where digital everything transforms atoms into bits and paper becomes a thing of the past. And perhaps it will. That’s not what I’m talking about. I’m instead thinking about a space where real materials are moved first from “waste” to “unwanted materials”, and from there they are connected up into cool new lives for cool new things. Perhaps I need to back it up a bit. Waste isn’t intrinsically anything. No, that’s not quite true. Waste is materials in the wrong place, at the wrong time, in the wrong quantities or with the wrong concentrations for what is desired. Of these, the wrong concentration is the closest to being an intrinsic property of the material. Think arsenic trioxide from gold mining. That’s intrinsically waste (but only because there’s no market for arsenic anymore, not since the days of Hercule Poirot and proper, kill-em-dead pesticides… if we had a market for arsenic then who knows…) Here’s a thought. Waste might be just the residual necessary for the efficient functioning of markets. It might be that an efficient market ejects waste just like a steam engine releases steam. Valuable materials both, but both let loose because the system needs to dump excess something. Pressure. Perhaps waste is the material economy’s way of releasing excess pressure. Which returns me to my initial thought, which is that waste is not intrinsically waste. It’s just stuff that doesn’t fit into the here and now. And no longer fitting into the here and now, the stuff gets all jumbled up so that entropy kicks in and it’s very difficult to extract value any more. But before that point of mixing, before entropy is let loose from her stall, before then… If we can capture unwanted materials before entropy is loosed, then perhaps we can create a world without waste. If we can capture unwanted materials before they get discarded to the void, then perhaps we have a chance at creating a world without waste. And this is where tech comes in. Entropy is loosed on unwanted materials because the transaction costs of doing anything else are so high. A mixed-up bin is the best place for them. Imagine, for a second, a world where you have a little robotic butler that follows around behind you, takes your materials (wanted and unwanted), and scurries them off to the proper place. And it just knows where that place is, because it’s learned what the things are and where they belong, because it is constantly learning through AI. And then imagine if those places included clever little bins, segregated by material, that whisked themselves out to larger, dedicated bins which in turn consolidated all the way up the line until the unwanted materials ended up at a processing factory where they could be simply, efficiently and cleanly reconverted back into wanted materials again.
And all of this happens without you needing to think, and with micro-transactions all along the way that are optimised by each little robotic player to create breathtaking complexity out of these very simple rules. Fractal complexity where the surface area across which materials are refined in their collection is infinite and forever, and yet bounded and discrete. A world formed by the massing of swarms of little robotic creatures that see and learn and optimise as they go. A world where processing factories are fed by the materials that lie wasted all around them. What does that world look like? Terrifying? Or wonderfully abundant? It could be tremendously disempowering if ridden by monopolies enforcing control over resources, or it could be richly diverse if it recreates a new commons. That political economy is yet to be, and whilst it could perhaps be designed in advance, it’s more likely to be left to chance. Now this is science fiction. It is no coincidence that it appears under “Title”, resting under the gaping blue sky of endless possibility. But it’s not really. It’s a future. A possible future. A possible waste future that, like the quantum universe, collapses into being upon observation. It has now been seen, it has now been felt. It exists. There is, already, a clear trajectory with all of the existing tech where this future comes to pass. The wave function of possibilities collapses into a discrete reality, and we now find ourselves in a world where the blank “Title” has become “Our Waste Future”, and this future now begins to unfold. “Title” is become “Our Waste Future”, but the sky above remains blue. The sky is always blue. The heavens always filled with hope.
Title, or Our Waste Future
49
title-or-our-waste-future-1cedcfd76bf2
2018-08-07
2018-08-07 16:38:54
https://medium.com/s/story/title-or-our-waste-future-1cedcfd76bf2
false
874
where the future is written
null
null
null
Predict
predictstories@gmail.com
predict
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
null
Philosophy
philosophy
Philosophy
39,496
Adam Johnson
Wanderer through ideas, guided by a desire to create a world without waste.
be5352b702a4
garbologie
72
103
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-05
2017-12-05 05:45:34
2017-12-05
2017-12-05 07:48:48
2
false
en
2017-12-05
2017-12-05 15:21:51
5
1cedd8aea60
7.111635
24
0
0
It was 8 am, and what appeared to be the population of a small midwestern town was curled back and forth in front of the Long Beach…
3
NIPS Day 1: Deep Queues It was 8 am, and what appeared to be the population of a small midwestern town was curled back and forth in front of the Long Beach Convention Center, waiting to get their passes for NIPS, the truly massive ML conference I’ve come here to attend. Inside the tutorial I was attending, on “Reinforcement Learning By and For the People,” the presenter, Emma Brunskill, asked the room who was familiar with the mathematical foundations of RL; 95% of the room raised their hands. “Well,” she said, after a beat, “I suppose if you made it here this early, you’re motivated.” photo credit to https://twitter.com/chaoticneural Reinforcement Learning By and For the People The first tutorial of my day (tutorials are a special kind of session for this initial day of NIPS; each one lasts about two hours) dealt with different ways Reinforcement Learning systems interact with people: either as objects of the algorithm that introduce special concerns, or as potential aids to the algorithm’s learning and performance. To start out with, she gave a broad overview of Reinforcement Learning, which I won’t repeat here, but the majority of which I’ve covered in my RL summary series here. Starting with the first framing, “RL for the people,” Brunskill highlighted a core problem with building reinforcement learning systems — like, say, a system that learns a dynamic curriculum policy to give a student, based on their progress: they’re hard to simulate at useful timescales. When you’re playing the game of Go, the rules are fixed and easy for humans to codify, so we can easily write a digital simulator that perfectly replicates the reward system the AI would face if it played a human player. It’s less obvious how to do this effectively when the system whose rewards (e.g. educational outcomes) you want to learn to predict well is a person. If you want to do any kind of online or policy learning — that is, any kind of learning that relies on your observations coming from the policy you’re working on optimizing — you’d need to build a simulation of student behavior, and use that to generate your rewards the way a game engine does. Or, alternatively, you need to use batch, often observational/historical, data. That motivates one of Brunskill’s main points: the necessity of sample efficiency, since observations derived from humans are likely to be costly and thus rare. Another reason for thinking about data efficiency, and the related problem of transfer learning, is the fact that humans are not a monolithic group. And, in order to be principled about things, we might want to learn separate policies for each individual. However, given that samples are costly to begin with, it’s pretty undesirable to have to learn each individual’s policy from scratch, without being informed by what we’ve learned from others.
A few approaches suggested for handling this problem were: (1) assuming that people fall into one of a small number of groups, so that a model can be learned per group (a balance between specialization and parameter reuse); and (2) assuming that individuals differ from each other based on the value of a latent variable that is parameterized in continuous space (so, instead of having to learn multiple functions, we can just interact our learned function with the value of the latent variable). As mentioned, when we don’t have an a priori simulator of our reward-generating system (like we do with a physics engine), we can either build a simulated model (model-based), or else sample from data generated in a single batch. However, you run into the issue that, when you’re evaluating a new potential policy, you’re still using data generated under a totally different policy. That issue motivates the use of Importance Sampling, which basically just means that when we evaluate a policy on the old batch set, we weight each observation by how likely it would be to appear under the policy we’re evaluating (a toy sketch of this weighting appears at the end of this section). However, each of these two approaches has trade-offs. The model-based approach can fall prey to bias, particularly if the model class used to estimate the rewards is wrong (if your model is misspecified, like trying to learn the parameters of a Gaussian when what you really need is an Exponential, no amount of new data can fix that). But it has low variance. The importance sampling approach has low bias but, due to the potentially widely varying weights, high variance. A frequent approach in Brunskill’s work is apparently just finding ways to combine these two approaches to get a good bias/variance trade-off. Some ideas that came out of the “by the people” section, which focused on how to facilitate people giving feedback to RL systems, were: machines could perform “Inverse Reward Design”, to deal with the fact that the rewards humans say they want might be only an incomplete representation of their true reward; this approach places Bayesian priors on what the model believes the human reward function is, and narrows the posterior bounds using data, keeping track of its uncertainty. Another approach, even more bare-metal, would be to have machines actually learn an underlying human reward function by watching demonstrations. There is also work trying to get the RL model to identify where we might want a new action, not specified in the original set, to exist; this would give the machine even more room to diverge from past human behavior in order to find an optimum.
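As a toy illustration of the importance-sampling weighting just described (my own sketch, not the tutorial's code), each logged reward is reweighted by the ratio between the probability the evaluated policy assigns to the logged action and the probability the logging policy assigned to it:

```python
import numpy as np

def importance_sampling_estimate(rewards, behavior_probs, target_probs):
    """Off-policy value estimate from a logged batch.

    behavior_probs[i]: probability the logging policy gave to the i-th action.
    target_probs[i]: probability the evaluated policy gives to that action.
    """
    weights = np.asarray(target_probs) / np.asarray(behavior_probs)
    return float(np.mean(weights * np.asarray(rewards)))

# Toy batch: a uniform logging policy; the evaluated policy concentrates
# on the action that happened to earn reward, so its estimate is higher
# than the raw batch average of 0.5.
rewards = [1.0, 0.0, 1.0, 0.0]
behavior_probs = [0.5, 0.5, 0.5, 0.5]
target_probs = [0.9, 0.1, 0.9, 0.1]
print(importance_sampling_estimate(rewards, behavior_probs, target_probs))  # 0.9
```

Note how a handful of large probability ratios would dominate the average; that is precisely the high-variance behavior of importance sampling mentioned above.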
Fairness in ML After copious caffeination, it was time for the second tutorial, this one focusing on the problem of fairness in ML applications. While a lot of these ideas were familiar to me through previous work on applying ML in the lending space, I thought the presenters did a really stellar job of outlining the conceptual frames in which we should think about fairness in ML. One point that was emphasized, and that I think is worth highlighting within the technical space, is that there’s a difference between asserting that a feature (for instance: a group membership) has no relevance to prediction of the target class, and asserting that that feature should, from a normative perspective, due to a history of that feature being unjustly used to discriminate, not be allowed to influence our decisions. It’s a subtle distinction, but an important one, particularly when it comes to technical ML experts communicating with legal experts. The speakers then addressed two legal paradigms of fairness: Disparate Treatment, which forbids explicit use of group-membership information, and is mostly motivated by a concern with procedural fairness and equal opportunity; and Disparate Impact, which requires that outcomes between groups be the same unless a strong business justification can be found to explain those differing outcomes. This frame puts more emphasis on distributive justice and promoting equality of outcome. The rest of the talk focused on three broad categories of what someone might want when they say they want an algorithm to be fair: Independence: that the scores of your classifier be entirely independent of group membership (basically: equivalent score distributions between groups). Separation: that the scores of your classifier be independent of group membership, conditional on the target variable (basically: equivalent score distributions between groups when you compare only within target=1 or target=0). Sufficiency: that your outcome Y be independent of race, given the scores of the classifier. This one is admittedly a bit weird to me, and seems to promote the capture of *more* group-based information into the score, rather than less. If you want to gain some intuition about the trade-offs between criteria like these, I recommend playing around with this tool Google built, which lets you try different settings for solving the problem and see how you do on additional criteria. Geometric Deep Learning By this point in the day, my caffeine levels were at neither a local nor a global maximum, and so I took a break during the second half of the talk. The first half focused on how we could apply ideas from conv nets to input data that takes the form of a graph. The main difficulty of this, as framed very cogently by one of the presenters, is that graphs vary in fundamental structure, as well as scale, from graph to graph. This is not the case with images, which have a fixed grid structure. So, the goal of creating a graph convolution operator became: creating an operator that is insensitive to permutations of the ordering of vertices, and to adding more vertices to the graph. A simple approach, and the one their methods built on, is to have two sets of weights: one to multiply by the vector corresponding to the “current” node, and one to multiply by the mean of all other neighboring points (sketched in code at the end of this section). This works since taking the mean of a collection of values isn’t order-dependent, and won’t fundamentally break if you add an additional value. A downside of this is that all the filters learned in this fashion are radial, meaning that they treat every direction the same and don’t know the meaning of things like “up” and “down” and “left” and “right”, since those ideas don’t make much sense on a graph.
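That "two sets of weights" construction is simple enough to sketch. The following is my own toy illustration (the shapes, the ReLU, and the degree normalization are assumptions, not the presenters' code): one matrix transforms a node's own features and another transforms the mean of its neighbors' features, which keeps the layer invariant to vertex ordering and indifferent to graph size.

```python
import numpy as np

def graph_conv(features, adjacency, w_self, w_neigh):
    """One permutation-invariant graph convolution layer.

    features: (n_nodes, d_in) node features.
    adjacency: (n_nodes, n_nodes) binary matrix without self-loops.
    Averaging over neighbors makes the layer indifferent to node
    ordering and to how many neighbors each node has.
    """
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = (adjacency @ features) / degree
    return np.maximum(features @ w_self + neighbor_mean @ w_neigh, 0.0)  # ReLU

# Toy graph: 3 nodes in a path (0-1, 1-2), 4 input features, 2 outputs.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
out = graph_conv(X, A, rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))
print(out.shape)  # (3, 2)
```

Because the aggregation is a mean, the learned filters are radial in exactly the sense described above: the layer cannot distinguish directions among neighbors, only the node itself versus its neighborhood.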
The overall goal of the project is to use Variational Inference to help suggest which regions of parameter space are reasonable vs. those that are likely to make the machine break. This is useful because, to bring this technology past Proof of Concept, the physicists need to prove that they can get the plasma to very high temperatures and keep it stable at those temperatures. Giving the physicists a better-informed way to explore parameter space, with reduced probability of engineering error, will hopefully help this Proof of Concept come into existence sooner rather than later. Picture of the Day: Tweet of the Day: Quote of the Day: “When they called to ask me to be program chair, I asked what I always ask in these cases: ‘are you sure you didn’t mean to call my brother?’” — Samy Bengio Collective Mood of the Day: “Ooh, did you get an invite to the Tesla party?”
NIPS Day 1: Deep Queues
82
nips-day-1-deep-queues-1cedd8aea60
2018-04-09
2018-04-09 16:48:08
https://medium.com/s/story/nips-day-1-deep-queues-1cedd8aea60
false
1,783
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Cody Marie Wild
machine learning data scientist; lover of cats, languages, and elegant systems; professional curious person.
b6da92126145
cody.marie.wild
1,405
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-21
2018-08-21 02:29:14
2018-08-21
2018-08-21 02:29:57
0
false
en
2018-08-21
2018-08-21 02:29:57
1
1cee3c1865ad
0.366038
0
0
0
May 11, 2018. Twitter: Thomas Wood on Twitter: “3/ AI technology is already so advanced that the human mind (even a large number of human…
1
What happens to society when the only human elites that are left are the mathematicians who write the AI algorithms? Will AI lead to the extinction of the capitalist class, as Jack Ma, CEO of Alibaba, has predicted? Was Marx right, but for the wrong reasons? May 11, 2018. Twitter: Thomas Wood on Twitter: “3/ AI technology is already so advanced that the human mind (even a large number of human minds working cooperatively) cannot compete with it — because the human mind cannot deal with the immense data sets that AI can already manage with ease.”
What happens to society when the only human elites that are left are the mathematicians who write…
0
what-happens-to-society-when-the-only-human-elites-that-are-left-are-the-mathematicians-who-write-1cee3c1865ad
2018-08-21
2018-08-21 02:29:57
https://medium.com/s/story/what-happens-to-society-when-the-only-human-elites-that-are-left-are-the-mathematicians-who-write-1cee3c1865ad
false
97
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Thomas Wood
Repoliticized on November 8, 2016. Never Trump. Never GOP. The Resistance. Berkeley, CA @twoodiac
d0a3606ebfd6
twoodiac
2
3
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-07-09
2018-07-09 19:16:18
2018-07-09
2018-07-09 19:21:09
3
false
en
2018-08-06
2018-08-06 23:31:20
2
1cefcd91dff4
2.674528
5
0
1
June 25th — Tact.ai Receives US$27 Million Series C Funding Tact.ai raises US$27 million in its series C round from Amazon, Microsoft, and…
5
AI Biweekly: 10 Bits from June W4 — July W1 June 25th — Tact.ai Receives US$27 Million Series C Funding Tact.ai raises US$27 million in its series C round from Amazon, Microsoft, and Salesforce. The conversational AI company plans to use the funds to develop a cutting-edge AI-powered CRM voice interaction system to change the way salespeople interact with information. June 28th — Microsoft Aims to Enhance Its Face Recognition System Microsoft leverages big data to train a new face recognition AI algorithm to reduce error rates for darker-skinned people. The model aims to increase accuracy across diverse skin tones as well as different hairstyles, jewelry, and eyewear. June 29th — LinkedIn Launches Translation Functionality with Microsoft AI Power LinkedIn launches a translation feature based on core Microsoft AI technologies, including Microsoft Azure Text Analytics and Microsoft Translator Text programming. The new feature will enable cross-language LinkedIn news feed functions. June 29th — Formula One Adopts Amazon AWS as Cloud and Machine Learning Provider The Formula One Group announces its infrastructure data centres will be powered by AWS cloud and machine learning services. In collaboration with Amazon, the tech will also be used to enhance Formula One race data tracking and digital broadcasts. July 3rd — Intel AI Partners with Baidu for AI-Based Camera and Processor Development At the Baidu Create 2018 Conference in Beijing, Intel and Baidu announce a series of collaborations in AI, including integration of the new Baidu Xeye AI camera and Intel Movidius vision processing units (VPUs). Also announced was a technology development collaboration involving Intel’s FPGAs and Xeon processors. July 3rd — Sigma Ratings Raises US$2.4 Million for AI-Based Risk Management Solution NYC-based ratings agency Sigma Ratings raises US$2.4 million from TechStars, FinTech Collective, and Barclays. The company will use the funds to leverage AI technology to assess non-credit risks and expand operations. July 4th — Albert Technologies Leverages AI for Marketing Campaigns Autonomous digital marketing solutions company Albert Technologies raises US$18 million from Schroders Investment Management and Hargreave Hale. The funds will be applied to AI techniques that convert data into insights to help improve client marketing campaigns. July 5th — Nasdaq Develops Early-Stage AI System Nasdaq uses machine learning techniques to develop an early-stage financial AI system that will automate some financial report generation, detect potential fraud and help improve customer service. Nasdaq envisions the development of “AI augmentation” systems that partner with humans — a sector expected to generate US$2.9 trillion in 2021. July 6th — Daimler Earns its Autonomous Driving Permit Mercedes-Benz owner Daimler Group becomes the first foreign automaker granted a self-driving permit for public road testing in Beijing. The company’s L4 autonomous cars have already successfully completed closed-course government testing in Beijing’s National Pilot Zone for Intelligent Mobility. July 6th — ApplyBoard Raises CDN$17 Million for AI-Based International Student Application System Canadian startup ApplyBoard raises CDN$17 million in Series A funding led by Artiman Ventures, Think+, and Candou Ventures. The company intends to build an AI-based international student application platform to streamline and speed up school selection and application processes.
Author: Synced Global Analyst Team | Editor: Michael Sarazen Follow us on Twitter @Synced_Global for more AI updates! Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here !
AI Biweekly: 10 Bits from June W4 — July W1
21
ai-biweekly-10-bits-from-june-w4-july-w1-1cefcd91dff4
2018-08-08
2018-08-08 12:52:53
https://medium.com/s/story/ai-biweekly-10-bits-from-june-w4-july-w1-1cefcd91dff4
false
563
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-26
2018-06-26 14:39:38
2018-06-26
2018-06-26 14:41:58
1
false
en
2018-06-26
2018-06-26 14:42:40
2
1cf11a88e750
3.079245
0
0
0
Our team were recently invited to attend a talk entitled “When surveillance goes private: a 2027 retrospective” by Adrian Hon (CEO of Six…
5
Predicting our future view of privacy Our team were recently invited to attend a talk entitled “When surveillance goes private: a 2027 retrospective” by Adrian Hon (CEO of Six to Start) hosted by our neighbours at Mozilla. Adrian provided a fascinating glimpse into what our world could be like in 10 years time. He spoke as if it were 2027 today and looked back at the technical and social developments which led to a world where 8 out of 10 homes in the UK are filled with a range of microphones, cameras and motion detectors. Building on the digital advertising channels of today, these devices, bristling with sensors, will be very cheap for the consumer. All they need to do is share every detail of their lives with the supplier in return for the ability to secure their property, find lost items, gain fashion advice or for whatever other useful function. “All they need to do is share every detail of their lives with the supplier in return for the ability to secure their property, find lost items, gain fashion advice or for whatever other useful function” This all sounds a little Nineteen Eighty-Four The omnipresent government surveillance of George Orwell’s Nineteen Eighty-Four was replaced with a friendly multinational company, and the subjects of this tyrannical regime were replaced by willing volunteers who didn’t fully read the terms of service as they were racing to get fashion tips before a night out. Adrian’s talk was massively insightful, with a succinct vision of how the Amazon Echo’s and Google Homes of today could develop over the next ten years, with advances in supporting technologies such as wireless charging, LiDAR and sensor fusion. Future prediction and Human Factors I think it is very likely that our lives will be filled with technology that senses every feature of our daily lives, I would go so far as to say the rate of change is likely to be faster than most expect. I had a chance to chat to Adrian after his talk and we spoke about how important it is to consider human factors when predicting and describing future technology. This is especially relevant if you want to avoid the more dystopian narratives which often accompany this type of future gazing. Privacy, Legislation and Company Policy We also spoke about how, over the last couple of decades, there has been a general shift in people’s views on privacy; people are much more comfortable sharing personal details previous generations would not have. This flexible view on privacy has driven an environment where governments and corporate entities test the boundaries of what is socially acceptable with regards to privacy. These boundary tests are occasionally branded as data abuses if they are publicly exposed, and the backlash tends to guide future legislation and internal company policies. On the government side, to protect us, there is the recently introduced Investigatory Powers Act 2016, which provides a framework for how the UK Government can breach your right to a private life if it is in the interest of the wider community or to protect other people’s rights. As well as defining how your privacy can be breached, it also details the checks and balances that need to be in place to ensure any breach of privacy is properly justified and authorised. 
At the moment there is also the General Data Protection Regulation (GDPR) which is being discussed as a way to strengthen and unify data protection across the EU (The GDPR will be enforced from May 2018, while the UK is likely to still be in the EU, and will probably be replaced by UK legislation on Brexit). There has been some concern over artificial intelligence being used in an automated system to make important decisions about people’s lives for things such as banking, employment or healthcare. There have been calls for a regulatory body to scrutinise these autonomous algorithms. An open and honest conversation about the use of AI is required, I think it is vitally important that people’s best interest are considered when this powerful technology is to be used. The government and policy makers are going to have a tough time navigating this challenging new area of technology. We need to invest in technology to make artificial intelligence more open and able to explain its reasoning and we must put human users front and centre, to ensure we don’t find ourselves on the track of a more dystopian future with AI. Want to know more? Sign up to our newsletter for articles straight to your inbox.
Predicting our future view of privacy
0
predicting-our-future-view-of-privacy-1cf11a88e750
2018-06-26
2018-06-26 14:42:40
https://medium.com/s/story/predicting-our-future-view-of-privacy-1cf11a88e750
false
763
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Qumodo Ltd
Advancing human + AI interaction through research, design and development.
aad67ae0f077
1530019197930
5
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-05
2018-09-05 09:57:47
2018-09-11
2018-09-11 21:50:11
2
false
en
2018-09-11
2018-09-11 21:59:14
2
1cf40285f05d
2.628616
1
0
0
In simple terms — Reinforcement Learning is learning from experience
4
Reinforcement Learning Simplified In simple terms — Reinforcement Learning is learning from experience Just like humans, machines can also learn from its interaction with the environment; Reinforcement Learning is how they can do it. It is the branch of Machine Learning in which the learner is not trained(like other Machine Learning domains) rather, supposed to learn from its experience by interacting with the environment. The interaction includes taking actions through trial-and-error search, and getting feedback( positive or negative) from the environment. It has the following elements: Agent: It learns and makes decision by interacting with its environment. Environment: Everything that is outside of agent and cannot be directly controlled by the agent is known as the environment. It responds to agent’s action by giving feedback and presents new state to the agent. Reward function: It defines the reward of the agent depending on its action. It tells the agent what kind of reward it will get if it takes a particular action. Policy: The behavior of the agent is defined by the policy. It tells the agent what actions to take and what actions to avoid to achieve its goal. Value function: It evaluates the action of the agent taken in a particular state considering futu re rewards. It give the agent information about the long term consequences its actions. Model of the environment(optional): It is the representation of the environment based on which it gives feedback and presents new state to the agent. I will illustrate the idea behind each element through a popular childhood game of tic-tac-toe. tic-tac-toe game Tic-Tac-Toe is a 3x3 board game of two players and the players who successfully place Os or Xs in three consecutive places either horizontally, vertically or diagonally wins the game. The game is draw otherwise. The above figure shows Xs in three consecutive places diagonally. Now consider two players — player A and player B are playing against each other; Player A is is an imperfect player(who is semi-skilled and can make mistakes at times) and Player B is the one who can learn from experience.In this case the elements are: Agent: Player B because it can learns and makes decisions based on its interaction with the environment. Environment: everything(including Player A) is the environment as it gives feedback and presents new states to Player B. Reward signal: Goal of the player B; In this case to win the game Policy: What move to make when going from one state to another? Value function: What moves are good or bad for Player B in the long term? Model of the environment: representation of the environment which is used to give reward to player B Now that we have an overview of the elements of reinforcement learning. Let me explain about the interaction between them. Agent-Environment Ineraction At each time step t, the environment sends some information about agent’s state s<t>;In above example, s<t> is column/row of the board. The agent then takes an action a<t> depending on the s<t>. In the case of tic-tac-toe game, a<t> would be the move Player B makes after knowing about its state. As a consequence of agent’s action, the environment then sends a numerical reward r<t+1> at time step t+1. This interaction continues until the agent achieves its goal. References: An introduction to Reinforcement Learning, Sutto and Barto David Silver Course on Reinforcement Learning PS: This is my first online post. I wrote it based on my understanding of Reinforcement Learning. 
Any suggestion/improvement about the content and/or style of writing will be appreciated.
Reinforcement Learning Simplified
1
reinforcement-learning-simplified-1cf40285f05d
2018-09-11
2018-09-11 21:59:14
https://medium.com/s/story/reinforcement-learning-simplified-1cf40285f05d
false
595
null
null
null
null
null
null
null
null
null
Reinforcement Learning
reinforcement-learning
Reinforcement Learning
883
Bibek Chaudhary
self-learner
84a0db77e00e
bibekchaudhary
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-29
2018-09-29 11:33:14
2018-09-29
2018-09-29 11:49:43
3
false
en
2018-10-23
2018-10-23 07:46:20
8
1cf474b2b570
2.953774
1
0
0
As mentioned previously, Supervised Learning is one of the types of Machine Learning algorithms. The most basic and fast Supervised…
5
Naive Bayes Classifier “rocks on sea bed” by Yannis Papanastasopoulos on Unsplash As mentioned previously, Supervised Learning is one of the types of Machine Learning algorithms. The most basic and fast Supervised Learning algorithm is called the Naive Bayes classifier. As can be seen in the name itself, this is a classification algorithm and is based on Bayes’ probability theorem. The reason it’s called the ‘Naive’ Bayes classifier is because it assumes that all the features are independent of each other. For those of you who aren’t too familiar with this theorem, click here to gain a better understanding. In short, Bayes probability theorem states that if there are two events, A and B, then the probability of A occurring given that B has already occurred is determined by: Source Let’s take the case of the iris data set. For the first row, the Naive Bayes classifier will calculate the probability of each of the three possible classes (Iris-virginica, Iris-setosa, Iris-versicolor) based on the data of the features present in the first row. It checks the likelihood of that flower belonging to each class. So, for the same data, it will calculate the probability of the flower being from each of the three classes. Ultimately, it assigns the class that has the highest probability. This process is repeated for each row. The iris data set has been designed to serve as an introduction into the machine learning and data science world, and hence it doesn’t contain any anomalies or missing values. Real life data, however, isn’t this perfect. If a data set contains no observations in certain rows, then the Naive Bayes classifiers assigns a 0 value (zero frequency) to that particular feature for that instance. Probability cannot be calculated for a 0 value, and so this is a problem of the Naive Bayes classifier which has to be solved. Let’s take the example of classifying an email as ‘spam’ or ‘not spam’. Email one says — ‘check out your 2018 monthly horoscope!’ and is marked as ‘spam’ by the classifier. Email two says — ‘check out ur 2018 monthly horoscope!’ You can see that the two emails mean the same thing and that the only difference between the sentences is the spelling of ‘your’ and ‘ur’. However, if ‘ur’ is not present in the classifier’s dictionary, then it will assign it the value of 0, which will lead to an overall probability of 0 for the ‘spam’ label. Thus, email two might not be classified as ‘spam’ purely because the spelling of a word is different. The solution to this problem is a smoothing technique called Laplace correction, which adds the value of the smoothing parameter (a parameter found when coding for the Naive Bayes classifier) to the 0 value, ensuring that the probability is calculated. For example, if the smoothing parameter is 1, then, Now the probability will never be 0. There are three types of Naive Bayes classifiers – 1. Gaussian This classifier is used when dealing with real-time data because it assumes that the features follow a normal distribution, and so only the mean and standard deviation of the data needs to be estimated. Click here to see the Gaussian Naive Bayes Classifier coded in Python using Scikit-Learn. 2. Multinomial This classifier is commonly used in the field of Natural Language Processing (which will be discussed in future articles). For example, it is used to calculate the number of occurrences of words in a piece of text. 3. Bernoulli This classifier is based on the binomial theorem, and hence deals with data that has binary (two) labels. 
For example, to classify emails as ‘spam’ or ‘not spam’. To read more, click here. Please let me know what you thought of this post in the comments below, thank you :) Next, I’ll be discussing Linear Regression, so stay tuned!
Naive Bayes Classifier
1
naive-bayes-classifier-1cf474b2b570
2018-10-23
2018-10-23 07:46:20
https://medium.com/s/story/naive-bayes-classifier-1cf474b2b570
false
637
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shubhangi Hora
A python developer working on AI and ML, with a background in Computer Science and Psychology. Interested in healthcare AI, specifically mental health!
26cd7f373776
shubhangihora
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-14
2018-09-14 14:12:11
2018-09-14
2018-09-14 14:14:23
1
false
en
2018-09-14
2018-09-14 14:14:23
0
1cf502560f08
0.301887
0
0
0
Go to www.TitanAutonomous.com to download it now!
5
Come be a part of revolutionizing how cloud computing is done. The white paper for Titan Autonomous has been released. Go to www.TitanAutonomous.com to download it now!
Come be a part of revolutionizing how cloud computing is done.
0
come-be-a-part-of-revolutionizing-how-cloud-computing-is-done-1cf502560f08
2018-09-14
2018-09-14 14:14:23
https://medium.com/s/story/come-be-a-part-of-revolutionizing-how-cloud-computing-is-done-1cf502560f08
false
27
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Titan Autonomous
A distributed AI network for enterprise.
7a48f58ac38e
titanautonomous
5
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-09
2018-06-09 15:19:51
2018-06-09
2018-06-09 15:24:19
2
true
en
2018-06-09
2018-06-09 15:24:19
21
1cf6736b8700
2.787107
0
0
0
Artificial Intelligence is the next big thing in application development in the technological era that we are living. The use of artificial…
1
How AI(Artificial Intelligence) Works in App Development Artificial Intelligence is the next big thing in application development in the technological era that we are living. The use of artificial intelligence with augmented or virtual reality enriches an application with multidimensional ability. AI not only helps to build entertainment software but also smart and advanced enterprise applications. Creating an AI assistant implies the use of a specific software tool, which is identical to neurons in Central Nervous System (CNS). The system is also able to remember, and process previously received data for future analyzing and practicing. AI-BASICS AI’s central principle is to be able to make the decision independently. To do this, the whole process is done by inputting data and expected result in some cases. Developers build an artificial neural network for the system, which works with the same algorithms as the actual human brain uses while memorizing. The program code of such artificial neural network is very complicated; building one is tough. There is some preset artificial neural network available for creating a different type of AI applications. Among them, Tensorflow, API.ai, Amazon AI Wit.ai, Clarifai and IBM Watson are the most popular ones. CHOOSING THE APPROPRIATE API CoreiBytes uses Different API for different functionality. Based on the type and complexity of the work developers chose their needed one. Below are characteristics of some of the popular API. WIT.AI. is an application-programming interface. In this interface, the previously created training samples and one of the users’ personal experience are analyzing the input data. This interface has two primary mechanisms. The first one is the central object of the input data and the second one is the object part. For example, if the user request is “Where to find the car?” Here “the car “is the first part and “where to find” is the object part. An application like Siri can easily be made using this interface. IBM Watson. is world’s one of the first AI solution. This AI interface also translates voice data into text and then process searching in the internet search engines. The most incredible part of this interface is it implements a multi-tasking process where a number of algorithms are treated simultaneously. API.AI. Google’s developer’s team developed Api.ai. It works almost like Wit.ai. However, this interface works through an incredibly precise entity identification. Each synonym of a particular word being processed differently and often brings a different result. The massive knowledge base of this interface makes it one of the popular solution. AMAZON AI.Amazon Ai is able to recognize visual objects, human speech, and implement deep machine learning processes. A fully adapted to cloud deployment solution, which allows creating small applications. CLARIFAI. In AI field, this platform plays a completely new role. Calrifai analyzes the data comprises with capacitive and complicated algorithms. This allows the application created in this platform being fully adapt to an individual user’s personal experience. Calrifai is the best choice while creating an assistant based AI. TENSORFLOW. Developed by the Google developer’s project Tensorflow’s concept is based on the artificial neural network graphs generation. Later the graph is conditioned by the information with the basic knowledge and personal experience. Tensorflow’s library is hard to master and not recommended for primary users. 
Based on the characteristics and demands an API used to build an AI application. Developers must understand the request for a specific application. So, the most critical question is what do you want from the application? The range of tasks AI can do, is immense. Therefore, while developing an AI application, one must bind with the ethical part of it. It draws the most important update of the previous question. What should you want from the application? This article is also published in CoreiBytes and LinkedIn Follow CoreiBytes: Facebook; Twitter; LinkedIn; Google+; Instagram; Youtube; Tumblr
How AI(Artificial Intelligence) Works in App Development
0
how-ai-artificial-intelligence-works-in-app-development-1cf6736b8700
2018-06-09
2018-06-09 17:58:39
https://medium.com/s/story/how-ai-artificial-intelligence-works-in-app-development-1cf6736b8700
false
637
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
CoreiBytes Codetech, Inc.
The primary functions of our firm are Mobile App Development, Web App Development, Customize Software Development, Software as a Service (SaaS)
cac1cb270113
coreibytes
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-06
2018-06-06 18:53:35
2018-06-17
2018-06-17 20:53:34
3
false
en
2018-06-18
2018-06-18 21:39:43
1
1cf7173074f4
1.500943
7
0
1
Using the Bible text data in Python, some interesting visualizations can be made which gives insight into how certain words are distributed…
5
Dispersion plot from Bible using Python Using the Bible text data in Python, some interesting visualizations can be made which gives insight into how certain words are distributed throughout the book. There are deeper insights which can be derived from this data- however this is a good starting point. In the embedded notebook you would find the basic steps to import, clean and make some interesting visualizations. A visualization which has been used is the dispersion plot (looks like a bar code). It shows the spread of any particular word across the whole text. In the graph below, the x axis represents the ‘narrative time’- measured by the number of words in Bible i.e 789651. Also, when the desired word appears in the text a black vertical line is plotted, otherwise it remains blank (white line). Below is the plot showing the occurrence of the word “Moses” in the bible. It is evident that the Bible talks about “Moses” extensively during the first few books of the Bible and it is sparse later on. “Moses” in Bible At the same time, let’s observe the dispersion plot for “Jesus”. As we know the Bible talks about the birth and life of Jesus only from the New Testament, hence the dispersion plot is completely concentrated towards the last few books of the Bible. “Jesus” in Bible — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — Connect on LinkedIn
Dispersion plot from Bible using Python
140
dispersion-plot-from-bible-1cf7173074f4
2018-06-18
2018-06-18 21:39:44
https://medium.com/s/story/dispersion-plot-from-bible-1cf7173074f4
false
252
null
null
null
null
null
null
null
null
null
Bible
bible
Bible
17,047
Rohan Joseph
Operations Research and Data Science @ Virginia Tech https://www.linkedin.com/in/rohan-joseph-b39a86aa/
a2819eeaf8c5
rohanjoseph_91119
292
5
20,181,104
null
null
null
null
null
null
0
null
0
8f8b8301b30a
2018-02-17
2018-02-17 17:41:08
2018-02-12
2018-02-12 21:04:31
0
false
en
2018-02-17
2018-02-17 17:41:36
4
1cf73383fafe
0.856604
0
0
0
This blog will exemplarily portray this process for a smaller portion of an organisation’s business: development and project management…
2
modelling your business This blog will exemplarily portray this process for a smaller portion of an organisation’s business: development and project management. Roughly following the modelling methodology OMG SysML from omg.org and specifically inspired on the books by Tim Weilkiens at oose.de The model will, once drafted, at first enhance a better understanding of the organisation. The core processes will step by step drift to the center of the model, and some activities may, under this new light, become obsolete. For good. The more we dive in the model the better we will see relationships and key processes. Secondly, the model will become the ground for the oncoming collaboration between the machine and the human. Following the thoughts of Nicky Case in “How To Become A Centaur” at the MIT Media Lab / MIT Press (JoDS), we would envision a future where instead of a match between the human and the AI, we will likely see the dawn of a collaboration between the AI and the human. A merge of the Intelligence Augmentation (IA) with Artificial Intelligence (AI) to the Artificial Intelligence Augmentation (AIA). Creating a model of our business, will give our AI partner a much bigger chance to success on the tasks we will be working on together. Thus making our work together more effective and enjoyable. Originally published at 2030organisation.blog on February 12, 2018.
modelling your business
0
modelling-your-business-1cf73383fafe
2018-02-17
2018-02-17 17:41:38
https://medium.com/s/story/modelling-your-business-1cf73383fafe
false
227
a walk through towards the corporation of 2030
null
null
null
2030 Organisation
hugo.ormo@icloud.com
2030-organisation
CORPORATE CULTURE,CORPORATE INNOVATION,AI,ARTIFICIAL INTELLIGENCE,MODELLING
hugoormo
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
hugoormo.blog
Ehemann, Wanderer, Leser. Pasaporte español, cor i seny català. Quiet, TED fascinated, likes Apple. Bicyclist and hybrid driver, collaboration tools enthusiast
729bbb63774c
hugoormo
15
65
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-20
2017-11-20 11:20:15
2017-11-20
2017-11-20 11:31:37
7
false
en
2017-11-20
2017-11-20 12:17:23
7
1cf74ddf244f
7.238679
11
1
0
Last week we released the Bitcoin price astrological forecast by the Human Design method; and an extensive critical discussion on…
3
Checking the Bitcoin price astrological forecast with Machine Learning Last week we released the Bitcoin price astrological forecast by the Human Design method; and an extensive critical discussion on Bitcoin.com emerged. Well, we have always believed that criticism is much more useful than accolades, and here is our response. Human Discovery Platform is an open platform, where different authors can introduce their own methods of various signs interpretation, and researchers can validate it. For example, to check whether people with certain psychotype have perspectives in certain professions or not. In the case of bitcoin price forecast data were collected just easily, as the forecast leveraged the Human Design method, based on the astrology. Astrology is based on the celestial bodies position, and they are well known. Moreover — as planets and stars are moving on certain trajectories, their positions are well-known even in the future (if you don’t believe me, ask NASA). Human Design is a method, which we have used for bitcoin price astrological forecast, and it will also be available on the platform. This method implies that every object (man, company or an animal) has the creation date, has its own set of attributes, and each day you can superimpose the current celestial bodies position on this object to get signs for today. In the description it looks like this: “At November 16, 2017 Bitcoin has active gate 5,11, 32, 42, 43, 54, 57, which gives the activation of the channel 32–54 and solenoid in the channel 10–57”. The problem of astrology is in the subjectivity of interpretation. In the astrological books the numbers correspond to some descriptions, and then the astrologer tries to explain that, considering the situation. That is how human factor arises, which has to be removed everywhere possible. Therefore, Human Discovery Platform will also provide its participants with big data analysis tools, to test the most improbable for the first look hypothesis with statistics. To see if the astrological forecast could be true, we checked astrological characteristics the Bitcoin price over the last three years and its’ actual price trends with machine learning. So. As the input, we have: Bitcoin price data from August 18, 2010 till November 16, 2017 Bitcoin astrological characteristics, according to the Human Design method for the same period; Astrological forecast according to the Human Design method for the next three years. The objective is to find if there is a dependence between the astrological data and the direction of the price trends and check the future forecast. Challenge accepted! :) Machine learning To test the dependence of two factors and answer the question: “Is there a correlation between the variables (astrological gate) and the target function (price growth)”; you need to build a machine learning model, provide it with data and to ask to find these dependencies automatically. We put historical data into the tables of characteristics and the objective function. Then we divided it into 2 samples: learning and testing (usually 70% is for training, 30% for test). Training set was used for the model training, and we arranged a search of dependencies. So, we got a “model.” It is possible to provide it with signs, and get the result as “growth/drop”. The model has one key property — accuracy, it reflects the amount of test data she predicted right. 
This characteristic can be from 0 (the model did not predict correctly anything) to 1 (the model predicted all the test data correctly). At the beginning source data look like this. There are some notes, corresponding to each date — the bitcoin price on that date, the difference of the price with the previous day, gates and some more data about the lines, channels and magnets (we won’t go into the details of what that means, just take these as a sign). But the data in this form are not suitable for machine learning models, first they need to be prepared. Let’s spread out all the channels and gates to the columns. If the channel 14 was active on a particular day — in the column will have 0, otherwise there will be 1. As objective function we will take a very simple sign “whether there was a rate growth this month”. We can compare rates of the last and first dates of the month, and if it increased the target value will be 1 if not, it will be 0. It would be logical to evaluate a rate within days, but now we are checking the astrologer’s forecast, which is monthly, and we couldn’t build a sustainable model for each day. Prepared data look like this, you can download them here or see it here. This is a big table with >150 signs and 2600 lines (the number of days elapsed since the start of Bitcoin) Building the model Building the model, we need to throw away signs that we do not evaluate. Now we are removing the month and date. And train the model. But first, let’s understand, what will be the minimum accuracy for these data. There are 61% days when bitcoin’s price has grown to the end of the month. So, just saying each time “Bitcoin will grow in this month” we’d be correct in 61% of cases :) It’s a pretty good chance. Let’s throw a coin a little bit higher. This will be the minimal probability, which we should compare the model with. We’re checking the several classifying models. I am not going to go into the code and complicate everything — I’ll just show the numbers: LogisticRegression accuracy: 0.788262370541 GaussianNB accuracy: 0.683544303797 SVC accuracy: 0.901035673188 KNeighborsClassifier accuracy: 0.940161104718 Now we see, that the last model is the best. So, let’s check the classification quality — how well it predicted the rise and fall. Good enough. There’s no difference in class 1 (growth) or in the class 0 (drop) In the end, we see that using the K-nearest neighbors method we received a model, that predicts the data on the test sample with an accuracy of 94% :) But what does this mean for us? The model gets the result for each particular day, and the target function is defined for a month… Let’s try to do it this way — run the model for each day and get the result. We will have the forecast for the each “whether Bitcoin will rise this month”. Let’s sum them up and divide by the number of days in this month. So, we will get a probability of the model prediction results for this month. If there are 20 days within the month, predicting monthly growth, and 10 days foretelling the fall (at the end of the month, not the day) this month bitcoin will rise with 66% probability. Let’s test this algorithm on the months, we already have. The link to the GoogleDoc spreadsheet For each month there are the price at the beginning of the month, the price at the end of the month, the difference between them and the probability of the rate increase on the basis of the model calculated. There is no one failed month on the old data, where the model would be wrong. 
Conclusion So, we managed to build a Machine Learning model with a accuracy of 94% for predicting whether the Bitcoin rate will grow in a month on astrological indicators historical data. First, let’s see what it will give for the future months data. This model shows the fall of bitcoin in the future, until July 2019. The probability of growth in next year’s February-March, and July-October is much lower (especially in autumn, where it is close to 50%, and this, it seems like coins toss). I don’t recommend to look at the probabilities below 75%. Let’s compare this with the forecast of our expert. In March-August of the following year, she predicted a strong exchange rate decrease. Our model gives a much lower probability of growth in February and March indeed, but in April-June it shows no problems Difference in forecasts In general, predictions by astrologer and those by the machine learning model are not very different In February-March, where the astrologist predicts bitcoin price decline — the machine learning model gives a very fuzzy result, with this probability the model neither confirms nor refutes astrological forecast. But unlike the neighboring months in the model there is no clear indication of growth In May-June, the negative forecast of the astrologer contradicts the ML model. The only place in the whole forecast is where the model is strongly at variance with the forecast. In July-August, the negative forecast of the astrologer nor confirms, nor contradicts the model. Also in September October, the positive astrological projection does not contradict the model. In other periods, the astrologer’s forecast is either confirmed by the model, or does not say anything about the course (as in the period from March to October 2019) Does this mean that you have to buy bitcoins right now, and get rid of them in February — of course, no. The model is based on the not very big data set, only 2600 lines. Despite the fact that the model accuracy is quite high (94%), in translation month, and don’t mistakes, it does not mean that the model will be stable in the future. To further validate the model, we need more data, so we will take the same data for the top 10 coins with the highest market capitalization and test this model in their prices. The more data we have, the more accurate the model will be, and the more confidently we can say, if it works or not. Check out the experiment and see the probability of one or the other coin growth on our page: hdplatform.io/coin-trends Human Discovery Platform Team
Checking the Bitcoin price astrological forecast with Machine Learning
268
checking-the-bitcoin-price-astrological-forecast-with-machine-learning-1cf74ddf244f
2018-06-13
2018-06-13 18:17:16
https://medium.com/s/story/checking-the-bitcoin-price-astrological-forecast-with-machine-learning-1cf74ddf244f
false
1,640
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Human Discovery
null
8ab0209f689a
Human_Discovery
88
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-30
2018-03-30 11:21:30
2018-03-30
2018-03-30 11:21:30
1
false
en
2018-03-30
2018-03-30 11:21:30
1
1cf82f924000
1.622642
0
0
0
null
5
Three Ways Machine Learning Is Improving The Hiring Process Technology’s advance into all industries and jobs tends to send ripples of worry with each evolution. It started with computers and continues with artificial intelligence, machine learning, IoT, big data and automation. There are conflicting views on how new technology will impact the future of jobs. But it’s becoming clear that humans will need to work with technology to be successful — especially as it relates to the hiring process. There’s a great example of this explained by Luke Beseda and Cat Surane, talent partners for Lightspeed Ventures. On a recent Talk Talent To Me podcast episode, they spoke with the talent team at Hired, where I work, about why it’s critical to understand why a candidate is pursuing a given job. They concluded that machines can’t properly manage the qualitative aspect of hiring. For example, machines can’t tell if a candidate is seeking higher compensation or leveraging a job offer to negotiate new terms with their current employer. Humans can. However, machines are better at making processes more efficient. For example, machine learning brings value by processing job applications faster than humans — which can reduce the amount of time it takes to recruit and hire a new employee. With that in mind, here are three ways machine learning is improving the hiring process: Most HR professionals today use recruitment platforms to find potential employees through a search-based system where they can narrow down a list of candidates based on factors like skill, industry, experience and location. But with machine learning capabilities, hiring managers don’t have to manually dig through applications from hundreds of candidates to find the best fit. Instead, they can rely on networking and job sites to leverage machine learning and offer intelligent recommendations on the candidates who can fill a given role. This enables a more efficient hiring process for both job seekers and recruiters. Machine learning can help level the playing field in hiring. It can be employed to provide equal exposure to opportunities, regardless of a candidate’s pedigree or background. Algorithms should focus on skill-based data, not on the universities where a candidate has studied, the companies where they have worked, or their ethnicity or gender. Posted on 7wData.be.
Three Ways Machine Learning Is Improving The Hiring Process
0
three-ways-machine-learning-is-improving-the-hiring-process-1cf82f924000
2018-03-30
2018-03-30 11:21:31
https://medium.com/s/story/three-ways-machine-learning-is-improving-the-hiring-process-1cf82f924000
false
377
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Yves Mulkers
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
1335786e6357
YvesMulkers
17,594
8,294
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-21
2018-09-21 17:30:17
2018-09-23
2018-09-23 10:22:54
3
false
en
2018-09-24
2018-09-24 04:43:05
6
1cf981ff2108
2.957547
2
0
0
A multi-label classification for an image deals with a situation where an image can belong to more than one class. For example the below…
4
Approaches to Multi-label Classification A multi-label classification for an image deals with a situation where an image can belong to more than one class. For example the below image has a train, woman, girl and Jacuzzi all in the same photo. Photo Credit: Open Image Dataset V4 (License) There are multiple ways to solve this problem. The first approach is that of binary classification. In this approach we can use ‘k’ independent binary classifiers corresponding to k classes in our data. This approach the final layer is consists of k independent sigmoid (logistics) activation. The class prediction is based on a threshold value of the logistics layer. This approach is easy to understand and train. However, this approach has a deficiency. There are semantic relation between the labels which we are ignoring. For example ‘Human Face’ and Woman are related. And so are Human Face and Girl. Multi-Label Classification with label relation encoding There are multiple approaches to encode the semantic relation between labels. In this note I am describing 3 different ideas. The first approach is to use a Graph to create a label relationship. This label relationship can be prior knowledge (see references [1], [3]). For example, a prior knowledge based hierarchical graphs can encode the fact that ‘Human Face’ is the parent of both ‘woman’ and ‘girl’. This method can be extended to create complex structures. For example we can encode the idea of exclusion — a human face cannot be a monkey, or overlap — woman can be both mother and child. The main problem with this approach is to handle the complexity of the graph which over a period of time becomes specialized to a particular domain and does not scale to other domain. The second approach is to learn the label relationships from data independently. An early example is in reference [2]. The label relationship is captured either by a Bayesian Network or a Markov network. The Bayesian network is again a graph. It could be an acyclic graphical model, where each node represents one label and the directed edges represent the probabilistic dependency of one label on another. The parameters of such a graph can usually be learned by a Maximum Likelihood Estimation. A big challenge in using these graphical models is the problem of understanding the structure of the models. It is well known that both exact inference and estimating the structure of Bayesian Networks is a NP-Hard problem. See reference here. The last approach that we will consider is that of a unified CNN-RNN framework for multi-label image classification, which effectively learns the semantic relationship between labels. A good reference of this method is the paper [4] by Jiang Wang and team. In this approach a RNN model learns a joint low-dimensional image-label embedding to model the semantic relevance between images and labels. The image embedding vectors are generated by a deep CNN . In this joint embedding the probability of a multiple labels is computed sequentially as an ordered prediction path over all labels. One Image from reference [4] gives a good intuition about the architecture. RNN-CNN architecture from reference [4] Conclusion The final approach to the image classification problem depends of the nature of the problem and the nature of interdependence of the labels. For domains where interdependence of the labels are complex a joint RNN-CNN approach seems promising. 
As I start work my experiments on multi-label classification, I will report back the performance metric against the methods described here. [1] Wei Bi, James T. Kwok Mandatory: Leaf Node Prediction in Hierarchical Multilabel Classification (link) [2] Yuhong Guo and Suicheng Gu: Multi-Label Classification Using Conditional Dependency Networks (link) [3] Jia Deng et. al.: Large-Scale Object Classification using Label Relation Graphs (link) [4] Jiang Wang et. al.: CNN-RNN: A Unified Framework for Multi-label Image Classification (link)
Approaches to Multi-label Classification
6
approaches-to-multi-label-classification-1cf981ff2108
2018-09-24
2018-09-24 04:43:05
https://medium.com/s/story/approaches-to-multi-label-classification-1cf981ff2108
false
638
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Saurav Chakravorty
I am a data scientist solving some interesting problems in the industry. https://www.linkedin.com/in/sauravchakravorty/
a83474a25c2
csaurav
30
9
20,181,104
null
null
null
null
null
null
0
null
0
aff791a18caf
2018-05-13
2018-05-13 16:01:56
2018-05-13
2018-05-13 16:02:38
4
false
en
2018-05-13
2018-05-13 19:37:57
4
1cf9aaff7f93
3.60566
1
0
0
Yes, Data Science is exciting, Yes, Hacking is fun, but which Use Cases are valuable and have an impact on your business?
5
The Use Case Treasure-Trove (Part I): Reducing Administration Costs with AI-driven Record Linkage Yes, Data Science is exciting, Yes, Hacking is fun, but which Use Cases are valuable and have an impact on your business? As you certainly have experienced, finding the right use case for your company is a daunting task. Every day, you get hundreds of emails from internal managers claiming their data is highly valuable. They need your help. However, after careful analysis, your team of Data Scientists later reports to you that the data set is too small, lacks targets and even more annoying unhelpful details. In this non-technical series, I take you through common but highly valuable Data Science Use Case, answering the following questions for you: What are they about precisely? Are they applicable to your business? How does the end-product look like? What should you expect from your team of Data Scientists? In this first part, I tackle Record Linkage and how it will reduce your administration costs. Story Time It is another sunny Monday morning at the Head Office. Sophia arrives at the office. She glances over the unending list of new unread emails. She opens the first one. Her manager James, reports that last week was pretty successful. A large batch of new clients were registered recently, but although the number of reported registered forms seem to match, some clients data seem to be missing. Sophia scrolls through the different the databases. There she sees it again! The same problem, she keeps encountering over and over again, Transcription errors. Transcription errors. Misfiling. Common administrative challenges, which can easily be tackled by Machine Learning and reduce your costs. She will have to once again reallocate administrative staff to check the database record by record and manually correct the data. “These clients will have to wait”, she says. “What a waste of time and ressource”, she thinks. Yes, it is Sophia, it sure is! It does not matter which industry you belong to. Every day countless forms are registered. It could be containers from Hong Kong arriving by ship in Rotterdam, suffering from mistyped customs ID. What about misplaced client ID’s in the mortgage form? The list goes on… This problem is commonly referred to as Record Linkage. What is Record Linkage precisely? “Record linkage is about extracting information on a single entity, e.g. client, commodity, etc… , from different datasets which may or may not share common identifiers. These identifiers could be keys, identification numbers, etc…” Sounds like something that should never go wrong doesn’t it? Unfortunately not. With the Digitalization of forms, many historical analog data had to be transcripted. Even more so, people will always make mistakes. Even the best secretaries, mistype a name, a client id, etc.. . The larger the company, the greater the occurences of oversights. Automation of this Data sanitisation means less administrative cost, needed to supervise the processes. How does the end product look like? The simplied work-flow is illustrated in figure below. Automatisation of Record Linkage The client has been registered in different facilities, some of which are outside your company and often might be the source of faulty data. The data is then uploaded to your central database. Regularly, a sanitization of the data sets is scheduled. 
Each unchecked new entries in the databases are checked by your (developed) blackbox Machine Learning model, which links the entries, delivers the final form and updates the central Database with cleaned entries. What to expect from your Data Scientists? The most common approach is often designated as Fuzzy Matching. Your team analyses the common errors in the database and uses algorithm to estimate the similarity between records using distance measures. An basic example of such distance measures, is the Levenshtein distance. This measure estimates the similarity between Strings, i.e. words/texts/sequence of characters, based on the number of deletions, insertions, or substitutions required to match two Strings. Siamese Network is a Deep Learning method for Record Linkage. You will hear words such as cosine similarities, cosine similarity TF-IDF, Doc2Vec, variational auto-encoders, triplet networks, siamese networks, etc… Whether advanced techniques might be necessary, depends on the quality of data. There are no silver bullets after all… Conclusion Record Linkage is a common occurrence within the industry. Administrative costs can be reduced using a sanitisation work-flow. At the center of this flow, lies a black box machine learning model which compares newly added entries in your database and matches them based on similarity measures. References Christen P., Advanced record linkage methods: scalability,classification and privacy AT&T Bell Laboratories, Signature Verification using a “Siamese” Time Delay Neural Network Navarro, Gonzalo (2001). A guided tour to approximate string matching Bell RM, Keesey J, Richards T. The urge to merge: linking vital statistics records and Medicaid claims.
The Use Case Treasure-Trove (Part I): Reducing Administration Costs with AI-driven Record Linkage
1
the-use-case-treasure-trove-part-i-reducing-administration-costs-with-ai-diven-record-linkage-1cf9aaff7f93
2018-05-13
2018-05-13 19:37:58
https://medium.com/s/story/the-use-case-treasure-trove-part-i-reducing-administration-costs-with-ai-diven-record-linkage-1cf9aaff7f93
false
770
My personal take on various tools, techniques, use-cases and many other topics in Machine Learning and Data Science! Logo Created by Kjpargeter - Freepik.com
null
null
null
Machine Learning Rambling
benoitdescamps@hotmail.com
machine-learning-rambling
MACHINE LEARNING,DATA SCIENCE,MACHINE LEARNING AI,PYTHON,DEEP LEARNING
null
Data Science
data-science
Data Science
33,617
Benoit Descamps
Philomath and great lover … of Math and Science.
ba1d60a919f6
benoitdescamps
66
59
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-12
2017-09-12 04:00:14
2017-09-12
2017-09-12 06:42:00
1
false
en
2017-09-12
2017-09-12 06:42:00
0
1cfd0b6d0b3e
1.913208
9
0
0
Late last year Dallas Mavericks owner, American businessman and investor, Mark Cuban urged everyone to either study trends in Artificial…
4
The future of AI & our “jobs.” Late last year Dallas Mavericks owner, American businessman and investor, Mark Cuban urged everyone to either study trends in Artificial Intelligence or become extinct in a span of 5 years. Whether that is true or not it’s a story for another day perhaps in such an article. According to wikipedia; Artificial Intelligence (AI) is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. So will AI kill our jobs? Maybe, maybe Not. Let’s discuss the jobs threatened by AI. Any job which consists of doing the same thing again and again can be automated. And if it becomes economically viable, then it will be automated. That means drivers, customer care executives, masons, librarian, delivery boys, police forces, bank clerks, secretarial jobs…And the list goes on and on! Actually any boring job that a human being hates but has to do for a living will be taken over for robots + AI + automation pretty quickly. It’s more real than we think and already in progress. I think AI will create some jobs… On the other hand it will create an increased demand for some high skilled jobs. It will create an increased demand for software developers, security specialists, coders that maintain legacy code and technicians who service machines. These, however, are high skilled jobs. In the past, automation has replaced low skilled jobs with low skilled jobs. AI will replace faster than we might adjust. Change is inevitable and the future generations might just suffer the transition. AI will replace jobs faster than we can retrain and reorient our future generations. A generation will suffer the turmoil of change. Our present society;whether in govt , social or science realm, is ill prepared to deal with change. We need to transform education ASAP. As human beings we have to learn to be masters of the machines. The only humans who will have a job in this world will be those who can navigate this world and constantly find ways of developing themselves past the point where a machine can threaten their livelihood. Finally someone on the internet offered what he thinks might be a solution: 1.) Focus on an alternative hobby, something we love and not just depend on our 9–6 job which might not exist few years down the line. 2.) Teach our children to be explorers and learn science,arts or sports to become innovators and not engineer/doctors/ clerks/soldiers. Creativity becomes more central than discipline! Spare the rod… 3.) Love what we do and be best at it as this time the challenge is not big but different!
The future of AI & our “jobs.”
85
the-future-of-ai-our-jobs-1cfd0b6d0b3e
2017-10-02
2017-10-02 06:36:37
https://medium.com/s/story/the-future-of-ai-our-jobs-1cfd0b6d0b3e
false
454
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Dennis Lighare
Dennis is a trained multimedia journalist. Interested in Design Thinking for problem solving and passionate about the future of doing business in Africa.
60be5e96ffc0
dennislighare
115
145
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-03
2018-04-03 08:41:09
2018-04-03
2018-04-03 08:48:22
6
false
fr
2018-09-04
2018-09-04 08:31:54
0
1cfd651ece36
3.583962
0
0
0
Les logiciels musicaux proposent à leurs utilisateurs une playlist composée de musiques jamais entendues tous les lundis en se basant sur…
4
A Technical Introduction to Predictive Methods: The Case of Online Music. Every Monday, music applications offer their users a playlist of tracks they have never heard, based on their listening habits. To do this, music streaming platforms use several methods; combining them produces a system in which each model compensates for the weaknesses of the others. There are three main models: · The collaborative filtering model, which works through behavioral analysis. · The natural language processing (NLP) model, which works through text analysis. · The audio model, which works through direct analysis of the sound. 1/ The collaborative filtering model It works by collecting the various data points that can enrich the platform's understanding of users' tastes. This happens implicitly, through several indicators: the number of times a track is played, whether a track is saved, whether an artist's page is visited after a listen, and so on. From this myriad of data, music platforms can build a personal profile of each listener. This is where the collaborative filtering method comes into play. · User 1 likes tracks A, B, C · User 2 likes tracks A, B, C, D It is therefore likely that track D will also appeal to user 1. Moreover, to factorize over hundreds of data points and millions of users, a large Swedish music streaming company uses a matrix system in Python. Each row represents one of 150 million users, and each column represents one of 30 million tracks. Through matrix factorization in Python, we obtain user vectors paired with their corresponding track vectors. The collaborative filtering model can then find the users with the closest musical tastes in order to make relevant choices. 2/ The natural language processing model This method applies computational techniques to aspects of human language by reading articles, blogs, and other texts on the internet. The music application crawls the web and automatically gathers all information related to music, trying to understand what internet users write. Typically, a strategy called "top words" is used. Each artist and each track has thousands of associated words, and these words are updated daily. Based on occurrence counts, each word is given a weight that reflects how relevant it is when the track or artist is mentioned. As with the collaborative filtering model, a system then compares tracks using these word weights. 3/ The audio model This model has one advantage over the other two: it can categorize new tracks much faster. Suppose a friend, a singer-songwriter, uploads a new track to a platform. That track will get around 50 plays, far too few to carry any weight in the collaborative filtering model, and even less in the natural language processing model. The audio model performs much better here, because it uses convolutional neural networks, a technology originally developed for facial recognition.
It applies mathematical functions with many parameters, which pre-process large amounts of information from a prepared audio track, much as a brain would, recognizing recurring patterns. Through these transformations, the track is modeled from several angles. The neural network thus produces its understanding of the track and returns several analysis outputs suited to comparison, such as tempo, chords, loudness, timing, and more. Ultimately, understanding these key characteristics makes it possible to find fundamental similarities between tracks. These are the basics of the three main methodologies, which are combined to offer listeners choices suited to their musical preferences. Data and Big Data issues have been at the heart of daily work at Inhérence (a consulting firm specializing in information management and a member of the Alan Allman ecosystem) for more than 10 years. Online music fits squarely into this picture, since streaming platforms rely on predictive methods. Contact: contact@inherence.fr
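To make the collaborative filtering step described above concrete, here is a minimal matrix factorization sketch on a toy user-by-track matrix. It illustrates the general technique only; the tiny matrix, the rank, and the play counts are invented for the example, and this is not the streaming platform's actual system.

import numpy as np

# Toy play-count matrix: rows = users, columns = tracks (0 = never heard).
# In the article's setting this would be roughly 150M rows by 30M columns,
# requiring a sparse large-scale factorizer instead of dense SVD.
R = np.array([
    [5, 3, 4, 0],   # user 1 likes tracks A, B, C
    [5, 3, 4, 2],   # user 2 likes tracks A, B, C, D
    [0, 1, 0, 5],   # a user with very different taste
], dtype=float)

# Rank-2 factorization via truncated SVD: R ~ U @ V, one latent vector per
# user (rows of U) and per track (columns of V).
k = 2
u, s, vt = np.linalg.svd(R, full_matrices=False)
U = u[:, :k] * s[:k]   # user vectors
V = vt[:k, :]          # track vectors

R_hat = U @ V          # reconstructed scores, including unheard tracks
print(np.round(R_hat, 2))

# Users close in latent space share musical taste, so user 1's predicted
# score for track D borrows from user 2's history.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(round(cosine(U[0], U[1]), 3))   # users 1 and 2 are very similar
print(round(R_hat[0, 3], 2))          # predicted affinity of user 1 for track D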
A Technical Introduction to Predictive Methods: The Case of Online Music
0
introduction-technique-aux-méthodes-prédictives-le-cas-de-la-musique-en-ligne-1cfd651ece36
2018-09-04
2018-09-04 08:31:54
https://medium.com/s/story/introduction-technique-aux-méthodes-prédictives-le-cas-de-la-musique-en-ligne-1cfd651ece36
false
698
null
null
null
null
null
null
null
null
null
Business Intelligence
business-intelligence
Business Intelligence
4,052
Alan Allman Associates
null
5c694a101bd1
brandmarketinginternational
1
2
20,181,104
null
null
null
null
null
null
0
null
0
d9f11f11a015
2018-04-07
2018-04-07 22:53:37
2018-04-08
2018-04-08 17:47:05
3
false
en
2018-04-27
2018-04-27 17:07:54
10
1cff055d3a5f
6.987736
48
3
0
The Emergence of Decentralized Artificially Intelligent Networks
5
On Engineering Economic Systems The Emergence of Decentralized Artificially Intelligent Networks Decentralized Artificially Intelligent Networks exist; if somewhat by happenstance, we have already created them. In this century, it is self-evident that one cannot draw clear distinctions between technology, culture, and society; one might argue it was never possible, but the technologies of past centuries are taken for granted today. The changes to society and culture caused by shifts in technology occur frequently and are only remarked upon afterward. Blockchains and Smart Contracts, in concert with quite modest artificially intelligent bots, are active members of today's "crypto-economy." AI tools from rules engines to statistical and machine-learned models have long since infiltrated the financial industry, with many more trades executed algorithmically than by human traders.¹ The rise of crypto-currency investing has made algorithmic trading (as well as fundamental trading) more available to a large body of amateur quants; though it is hard to comment on their talent, as we've experienced periods of extreme growth in which all participants made huge gains, as well as correction cycles which were scarcely possible to avoid without simply exiting crypto altogether. This article steps beyond trading to arbitrary programmatic economic activity enabled via blockchain networks. For a detailed example of programmatic economic activity, please see the two-part series on CryptoKitties by Markus Buhatem Koch, which characterizes the economic activity and profitability of bots fulfilling a critical function in the game economy.² I consider the CryptoKitties game to be an Engineered Economic System in the sense that the designers of the game included a role for external agents and provided rewards to incentivize fulfillment of that role. The practice of designing these incentives is called Token Engineering and is practiced by applying tools from optimization theory and mechanism design. An interested reader is directed to the three-part series by Trent McConaghy.³ Meaningful token engineering is distinguished from simply arguing that one's ICO token will grow in value by formal attention to the token's role in driving the network towards a shared optimization objective associated with the network's function, rather than the token's value on a secondary market. This video sums up my feelings on driving up the value of an ICO token. Setting aside jokes about ShitCoin(TM), this article is about AI and Blockchain, and the perspective that systems engineering gives us regarding the implications of combining these technologies. Before going further, it is necessary to set the record straight regarding what constitutes Artificial Intelligence (AI), because the most common association is with machine learning (ML), which is itself only a branch of AI. Artificial intelligence includes: Optimization Control Engineering Signal Processing Machine Learning Any automated heuristics and other forms of mathematical engineering Similarly, a generalized definition of a blockchain network is required in order to consider the combination of AI and Blockchain properly. Generalized Definition of a Blockchain Network: A data structure: the "ledger" in crypto jargon. For the Ethereum network, the data structure is the state of the EVM. The network maintains the history of all changes to the data structure, and thus the current state of the data structure. A set of methods which operate on the data structure.
In the simplest case this is a transaction sending tokens from one address to another, but more generally it defines the set of legal changes to the data structure. A consensus protocol: a set of rules for agreeing on the true state of the data structure, based on verifying the validity of transactions or, more generally, that the methods above were applied appropriately. A community: the set of agents (human or machine) which are participating in the network. Lightweight clients may broadcast transactions, but full nodes are required to participate in the validation process (consensus protocol). Please note that this definition covers not just Bitcoin and Ethereum but a broad class of models for coordinating to agree on changes to the state of a data structure. This characterization still assumes prioritization of consistency over performance and availability in the sense of the CAP theorem.⁴ For more on the inherent trade-offs in decentralized systems, the reader should see Trent McConaghy's article regarding his work on BigChainDB.⁵ There are a great many intelligent people working hard to solve computation and transaction rate scalability challenges in blockchain networks. Some projects aim to create new networks while others build on the success of existing frameworks. For the purpose of this article, I will step past the technical challenges in order to examine the economic networks that are emerging within the infrastructure that already exists. Since blockchain nodes are effectively bots running autonomously, I prefer to characterize the blockchain infrastructure as a robotic network and to examine the system. To further delve into the relationship between blockchains and system models, the reader is directed to material on state machines.⁶ If one considers the blockchain network to be the plant of a robotic network, then an important property emerges immediately: Agents within the system make local decisions with global information. To put this in context, I did my PhD work on relatively general decentralized optimization and control problems, and the fundamental barrier to finding globally optimal solutions with local actions is not that actions are local, but rather the missing information.⁷ To be fair, more information isn't always better when those signals are deceptive or if the decision maker is irrational. However, in our case the signals are coming from high-fidelity sensors, the cryptographically secured blockchain data structure, and the decision making will be an optimization algorithm of some kind, engineered to make use of this sensor data. So, in this case I will assert that global information is strictly better than local information.
With this in mind, it is possible to define a blockchain-enabled artificially intelligent economic network: Sensors — the blockchain data structure, APIs to market data feeds and other sources Decoders — ETL software that accesses, sub-selects and restructures blockchain data for use Filters/Estimators — processes that remove noise and fuse signals to create useful signals Controllers — software that computes decision variables from signals available, including state feedback Actuators — smart contract code which defines the decision space and carries out decisions made by agents in the network Actions — the actions that are taken by agents; the loop is closed as these actions appear in the Blockchain data structure making them observable Historically, economic systems are open loop systems, meaning that they do not have the capacity for state feedback, fundamentally limiting the levels of complexity that can be engineered effectively. The system defined above is closed loop. This means that it can be defined dynamically using state feedback to achieve stability around desirable patterns of behavior. Example of a Feedback Controlled Physical System⁸ In physical systems, actuators have limitations defined by physics, often power limits or energy consumption requirements. This is also true in our blockchain networks: using smart contracts requires financial power and has computational limits. Using computation in a blockchain network costs gas fees (in the vernacular of the Ethereum network) even when the actions themselves have no financial costs associated with them. So much like in physical robotic networks there is an implicit energy optimization problem, but in the case of blockchains this is simply measured in the native cryptocurrency used for driving smart contracts. Local bots running in tandem with blockchain nodes implement closed loop systems In the figure above, the off-chain components labeled in blue are local to a node or agent in the network and kept private from the rest of the network. Some simpler closed loop systems could be entirely implemented on chain, but computation limits make this infeasible as the default configuration. Since these bots are acting with financial power they have the potential to waste money by making bad decisions. Furthermore, security becomes an immediate concern as control hacking any part of this closed loop process could result in the theft of cryptocurrency by a bad actor tricking the controller into automatically taking actions that are against the interests of the human account holder. Strictly speaking, any data feed used to make a decision to be executed in a blockchain network, as well as any decision system, is an attack vector. The cryptographically secure moniker does not extend to software systems operating outside of a network’s validation scheme. It is the responsibility of network participants to secure their private infrastructure in order to enhance their nodes with artificially intelligent closed loop systems. For cases where these closed loop systems are part of public services, scalability projects like the Truebit Protocol⁹ are paving the way for extending security guarantees to complex computations using verification games. Conclusion Closed loop systems are in the mathematical DNA of the natural world and are engineered properties of our most fantastical machinations. The necessity of human decision making (in the loop) has long been a limiting factor on economic networks. 
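To make the sensor-to-actuator pipeline defined above concrete, here is a minimal closed-loop sketch in Python. It is an illustration of state feedback only: read_onchain_state, submit_transaction, the setpoint, the gain, and the gas budget are all hypothetical stand-ins, not any production bot or the tooling discussed in this article.

# Minimal closed-loop agent: sensor -> filter/estimator -> controller -> actuator.

def read_onchain_state():
    # Sensor: in a real system this would query a node for ledger state.
    return {"price": 101.3}

def submit_transaction(amount):
    # Actuator: in a real system this would sign and broadcast a transaction,
    # paying gas; here we just print the decision.
    print("action: trade {:+.2f}".format(amount))

class Agent:
    def __init__(self, target, gain, gas_budget):
        self.target = target          # desired state (setpoint)
        self.gain = gain              # proportional feedback gain
        self.gas_budget = gas_budget  # implicit energy constraint
        self.estimate = None

    def step(self):
        obs = read_onchain_state()["price"]
        # Filter/estimator: exponential smoothing damps sensor noise.
        self.estimate = obs if self.estimate is None else 0.8 * self.estimate + 0.2 * obs
        # Controller: proportional state feedback on the tracking error.
        error = self.target - self.estimate
        action = self.gain * error
        # Actuation is bounded by the gas budget, closing the loop on-chain.
        if abs(action) > 1e-3 and self.gas_budget > 0:
            submit_transaction(action)
            self.gas_budget -= 1

agent = Agent(target=100.0, gain=0.5, gas_budget=10)
for _ in range(3):
    agent.step()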
Algorithmic trading marks the entry of AI into financial systems, but it is a rather limited case, constrained by the centrality of the exchanges that facilitate those trades. Our future will include intelligent agents that act on our behalf in economic networks. Some may lease our excess computational and storage resources; others may serve us targeted advertisements without disclosing our private data. I believe that we have done the economic equivalent of quantifying electricity and that we are as ill-equipped to imagine the future as Ampère was equipped to imagine an integrated-circuit motor-controller. Feedback appreciated (pun intended); this is a working draft in an ongoing series exploring my thoughts and findings from applying systems engineering to the design, analysis and tuning of Blockchain-Enabled economic networks. Forthcoming Articles + Opportunities and Perils in Emerging CryptoEconomic Networks + Case Studies on Economic Systems and Token Engineering Acknowledgements Special thanks to the Block Science team for research, insights and editing, Trent McConaghy for early feedback and to Aleksandr Bulkin who has been pushing me to write for years. References https://www.cnbc.com/2017/06/13/death-of-the-human-investor-just-10-percent-of-trading-is-regular-stock-picking-jpmorgan-estimates.html https://medium.com/block-science/exploring-cryptokitties-part-1-data-extraction-1b1e35921f85 / https://medium.com/block-science/exploring-cryptokitties-part-2-the-cryptomidwives-a0df37eb35a6 https://blog.oceanprotocol.com/can-blockchains-go-rogue-5134300ce790 https://en.wikipedia.org/wiki/CAP_theorem https://blog.bigchaindb.com/the-dcs-triangle-5ce0e9e0f1dc https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/unit-1-software-engineering/state-machines/MIT6_01SCS11_chap04.pdf https://repository.upenn.edu/edissertations/1515/ https://www.electronics-tutorials.ws/systems/closed-loop-system.html https://truebit.io/
On Engineering Economic Systems
250
on-engineering-economic-systems-1cff055d3a5f
2018-06-19
2018-06-19 16:22:51
https://medium.com/s/story/on-engineering-economic-systems-1cff055d3a5f
false
1,706
Science and Engineering Principles applied to Economic Systems
null
null
null
Block Science
media@block.science
block-science
BLOCKCHAIN TECHNOLOGY,DATA SCIENCE,SYSTEMS ENGINEERING,ECONOMICS,TOKENIZATION
block_science
Blockchain
blockchain
Blockchain
265,164
Michael Zargham
Founder, Researcher, Decision Engineer, Data Scientist; PhD in systems engineering, control of networks.
bdd1335dfbd
michaelzargham
289
153
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-13
2018-02-13 17:44:51
2018-02-13
2018-02-13 18:53:28
2
false
zh-Hant
2018-03-03
2018-03-03 17:02:49
2
1cff0580a232
1.300314
1
0
0
“……The enormous minority of unhappy people in our society, are dramatically louder and are impacting us more than the far majority of us…
5
In the 21st Century, #SocialMediaVegan Gave Humanity a Helping Hand “……The enormous minority of unhappy people in our society, are dramatically louder and are impacting us more than the far majority of us that are happy and good.…….We are hard wired in a way that allows us to protect each other in a net game. Yet, the mainstream media and the way we hear. All you hear is negativity. All that mainstream media does is tell us about the .0001% worst of us. I think we as humans are going to have some level of a sense of responsibility that if you’re in a good place. It might be good to tweet or instagram or snapchat about that versus what we default in to which is there are people’s twitter feeds that are basically every 7 or 10 days when they’re complaining about an airline or venting about work like the volume of negativity is just way too high in reality of what’s going on. And I think the silent majority of us. And this is what happens every empire falls when the silent majority of happy isn’t loud enough. Share your good.” — Gary Vaynerchuk. These are all plant-based meats, their aroma slowly rising from the griddle, with first-class flavor to match. Who says vegans lack options? (Photo by me) By 2038, with the development of social media, algorithms had become a basic technology. Every action we take online becomes data, and every company believes it can find insights and business opportunities in that data. At the same time, Cathy O'Neil has argued that Big Data is in fact full of discrimination and creates more problems. AI, that passé product, develops on the basis of large amounts of flawed behavioral data. “Where is the different between being human and human being?” — An Anonymous Artist The question above simply captures Big Data's pain point. In the film "Blade Runner 2049", replicants are almost indistinguishable from real humans; they too need clothing, food, shelter, and transport. If you place the data on their daily habits next to that of real humans, where is the difference? Social Media Vegan proves through action that we are human beings, not merely beings acting human. Stupidity is what actually drives our evolution. "We are what we eat." Research suggests it is really "We are what we digest." The bacteria and microbes in our bodies shape our food preferences, nudging us toward certain foods they can digest into the nutrients needed for survival. It is the same with the information we encounter and digest every day. Social networks keep changing their algorithms, recreating the trap of the filter bubble. Everyone sees or receives only information that has already been filtered, while contrary messages are almost entirely hidden. Over time, you come to believe society has only one face, and that all opposing opinions are therefore unreasonable. White can always emerge from black. (Photo by me) Artificial Stupidity (A.S.) studies (not "learns"; the two are different behaviors) both positive and negative data, then analyzes which things truly have longevity; this is also the spirit of Super Normal. Social Media Vegan breaks through the algorithmic trap by connecting the positive and the negative. Its supporters see that the world operates in dualities, that light and shadow together are what is normal, and punch by punch they shatter the overgeneralization caused by the filter bubble. Moreover, Social Media Vegan makes up for the shortage of high-quality, positive data on social networks, so that computers can analyze human behavior and its underlying meaning more completely, offer suggestions that foster intellectual, emotional, artistic, and spiritual growth, and correct the cognitive errors created by the filter bubble. The data is mixed with records of humanity's more foolish and ignorant moments; reason cannot fully explain them, yet humans keep repeating them. Historians, philosophers, psychologists, and sociologists know these behaviors well. Starting with Social Media Vegan on social networks, let the seeds of Artificial Stupidity (A.S.) slowly sprout. Excessive positivity and negativity are both harmful. Why not choose to be a tree, absorbing carbon dioxide and releasing oxygen? (An English version will be uploaded soon. Cheers.)
In the 21st Century, #SocialMediaVegan Gave Humanity a Helping Hand
5
大家在21世紀實行-socialmediavegan-1cff0580a232
2018-06-14
2018-06-14 01:35:27
https://medium.com/s/story/大家在21世紀實行-socialmediavegan-1cff0580a232
false
243
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Super Normal Life Lab
超日常生活Lab - Study and predict the future on the basis of history and current trends with Future Studies and Super Normal design concept.
de1ed5df11d2
snllab
41
34
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-21
2018-05-21 11:14:19
2018-05-21
2018-05-21 12:12:57
5
false
ja
2018-05-21
2018-05-21 12:12:57
2
1cff6652941c
3.851333
0
0
0
Introduction
2
Colored image → line art with U-Net Introduction Colorizing line art with pix2pix requires a large amount of paired line-art and colored-image data, but for the reverse task, generating line art from a colored image, a simple method using OpenCV is available. Concretely, you thin the contour lines and take the difference with the original image, which extracts only the contours. However, with this method the lines end up with uneven shading, as shown below, and the result looks unnatural as line art. So this time I tried using a U-Net to generate natural line art. I thought it would go smoothly, but I got stuck more than expected, so I am summarizing it here. Dataset I collected about 120 pairs of colored images and line art from pixiv images tagged as colorization works. At this point I checked two things: that nothing extraneous was colored (for example, that the colored image does not have a multicolored background where the line art has a white background, or elements drawn in that are absent from the line art), and that there is no positional misalignment between the colored image and the line art. Next, from each of these 120 pairs I randomly cropped three 128x128 patches, and for data augmentation prepared versions of each rotated by 90, 180, and 270 degrees. In total, 120*3*4 = 1440 pairs were generated. Result On the dataset created as above, I trained a U-Net with the colored image as input and the line art (grayscale) as the target, using mean absolute error as the loss function. Below are test images converted to line art with the trained network. Note that the network's output values fall within [0,1]; I output 0 for values below 0.9 and 1 otherwise. This binarization removes spurious noise. size : 256 * 256 size : 512 * 512 At every size, the network produces line art with even shading. Although there are a few places where lines are not formed, on the whole it works well. Trouble There were two points where I got stuck, which I record here as a warning. The first concerns dataset preparation. At first I used downscaled images as training data, so the output retained jagged artifacts, as shown below. As described in the Dataset section, this was solved by cropping from the original images without downscaling. The second concerns the network. In this example I used a U-Net, but at first I used a plain autoencoder. That did not produce good output, so I switched to the U-Net. Perhaps for pairs like a colored image and its line art, where the colors differ but the structure is the same, a U-Net really is the better choice…. Summary This time I converted colored images to line art using a U-Net, and on the whole it works well. However, considering the practical case of converting a large number of images, even a single image takes several seconds with a GPU, so the OpenCV method mentioned at the beginning, while rougher, is certainly faster. I would also like to test whether line art created with this method and line art created with OpenCV lead to different results when colorized with pix2pix. Environment OS : Ubuntu 16.04 LTS (64-bit) CPU : Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz GPU : NVIDIA GTX970 RAM : 8GB
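As a rough sketch of the two post-processing ideas above: the OpenCV contour-extraction baseline (in a common dilate-and-diff variant) and the 0.9 binarization applied to the U-Net output. File names and the kernel size are placeholders.

import cv2
import numpy as np

# OpenCV baseline: dilating the grayscale image makes the thin dark lines
# fade, so the absolute difference with the original leaves only contours.
img = cv2.imread("colored.png")                 # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(gray, kernel, iterations=1)
diff = cv2.absdiff(dilated, gray)
lines = 255 - diff                              # white background, dark lines
cv2.imwrite("lines_opencv.png", lines)

# Post-processing used for the U-Net output in the article: values lie in
# [0, 1]; everything below 0.9 becomes 0 and the rest becomes 1, which
# binarizes the image and removes spurious noise.
pred = np.random.rand(128, 128)                 # stand-in for network output
binary = np.where(pred < 0.9, 0.0, 1.0)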
Colored image → line art with U-Net
0
unetを用いた着色画像-線画-1cff6652941c
2018-05-21
2018-05-21 12:12:58
https://medium.com/s/story/unetを用いた着色画像-線画-1cff6652941c
false
46
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Lento
Kawaii makes world accelerate
32ad3dda5e90
crosssceneofwindff
25
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 15:41:50
2018-01-29
2018-01-29 15:41:51
0
false
en
2018-01-29
2018-01-29 15:41:51
1
1cffb285875
0.215094
0
0
0
null
3
Time series databases have long been the cornerstone of a robust metrics system, but the existing options are often difficult to manage in production. In this episode Jeroen van der Heijden explains his motivation for writing a new database, SiriDB, the challenges that he faced in doing so, and how it works under the hood. Listen Now!
SiriDB: Scalable Timeseries Database For Your System Metrics (Interview)
0
siridb-scalable-timeseries-database-for-your-system-metrics-interview-1cffb285875
2018-01-29
2018-01-29 15:41:52
https://medium.com/s/story/siridb-scalable-timeseries-database-for-your-system-metrics-interview-1cffb285875
false
57
null
null
null
null
null
null
null
null
null
Data Engineering
data-engineering
Data Engineering
690
Boundless Notions
Content and Consulting to make sense of your data
4db925846be1
TobiasMacey
7
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-07
2017-09-07 19:06:35
2017-09-07
2017-09-07 19:11:33
3
false
en
2017-09-07
2017-09-07 19:11:33
11
1d04d0fd9759
3.576415
0
0
0
If you knew the state of your machines at each moment, and could take action based on that knowledge, how much value could you create for…
5
Industrial IoT’s “German Gladiator” Problem If you knew the state of your machines at each moment, and could take action based on that knowledge, how much value could you create for your customers? Digital transformation, successfully embedding products and services inside the value chain, creates incredible opportunities for new revenue streams and customer relationships. While embracing new business models created by connected systems can be challenging for enterprises, problems with data management and system integration are more often the cause of industrial IoT project failure. Let’s look at an example ‘ripped from the headlines’ of how decisions today could cause your organization to be left behind. The harsh reality of industrial IoT is your connected system will be inundated with traffic from many sources and hostile environments. Without centralized data management based on 3 core principles, your IoT data lake will degrade over time from a fountain of knowledge to an expanding tar pit. Just recently a classic example of the importance of the first principle of identity came to light at Audi, where at least one factory in Germany appears to have produced thousands of vehicles with the same Vehicle Identification Number (VIN) and shipped them to countries across Asia. The VIN is a 17-digit identifier stamped visibly into each car during production, allowing tracking of things like ownership history, maintenance, and accident records throughout the life of a particular vehicle. EU law states a VIN may be used only once every 30 years to prevent overlap. Government investigators discovered the VIN duplication (though not the cause) during their investigation into “Dieselgate,” a scandal prompting regulators in the United States to order Volkswagen (owner of the Audi brand) to recall almost 80,000 vehicles for refunds and repairs. To comply, Volkswagen used the (fortunately) unique VIN of each car to identify and contact their 80,000 unique owners. Legal problem, solved. In IoT terms, they used the unique device identifier (VIN) to query for the value (contact info) of a property (current owner) and took action based on the information received (compliance with court order). System value, delivered. Now imagine if some of the German Audis shipped to Asia with the duplicate VINs were also subject to the recall. For simplicity, let’s say just one of the cars must be repaired. You enter your query using the VIN of the affected vehicle, and a name appears on your screen… with thousands of other names right below it. Each owner of the thousands of cars with the duplicate VIN is reported to be the owner of the one car you need to find. Now what? That’s a great question that is probably keeping at least one Volkswagen executive up at night. In their nightmares, they see a Roman general running a simple query to locate the leader of a rebel army in a crowd of defeated soldiers, using his name as the unique identifier. Rather than the system returning a single gladiator for punishment, the executive/general’s screen comes alive with thousands of results all shouting through the speakers, “I Am Spartacus!” German intrigue and Roman frustration aside, successful business transformation depends on enforcing the unique identification of each connected data source, and maintaining that identity over the life of your system. Even so, data will occasionally enter the system with ambiguous provenance. How will your system adapt? 
This is not just another ‘feature’ of an IoT application — identity is a core data management challenge to be solved through proper modeling at the system level, inside the data layer itself. With a trusted relationship between your IoT system of record and your ERP, CRM, and other enterprise systems, data integrity is maintained for enabling intelligent action and creating business value. Yours and every connected industrial system faces a complex world where sources of data — component parts and sensors — are replaced and swapped between machines. These machines have identities of their own, and are moved between locations and even sold to different organizations. In the midst of this ongoing shell game, manufacturing defects, network errors, and distracted service technicians can cause duplicate or incorrect identity values to enter the system. All of this may have little impact on queries for general trends and distribution insights, where data aggregation can be sufficient. But when you learn that 15% of your pumps and motors installed in customer facilities contain a defective part, proper data management matters. Will you be able to find and replace them all before they go down, taking your customers’ production lines and your business along with them? If you can’t trust the reported state of your machines at each moment, you can’t take action based on that knowledge. How much value will you create for your customers? Finally a question with an easy answer… Industrial IoT and Data Management Solutions
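A toy illustration of how a duplicated identifier breaks point queries; the VINs and owners below are invented for the example.

import pandas as pd

# Registry where the device identifier (VIN) is supposed to be unique.
registry = pd.DataFrame({
    "vin":   ["WVW111", "WVW222", "WVW222", "WVW222"],  # WVW222 duplicated
    "owner": ["Alice",  "Bob",    "Carol",  "Dave"],
})

# A recall query by unique ID should return exactly one owner...
print(registry.loc[registry["vin"] == "WVW111", "owner"].tolist())  # ['Alice']

# ...but with a duplicated identifier, every row answers "I am Spartacus":
print(registry.loc[registry["vin"] == "WVW222", "owner"].tolist())  # ['Bob', 'Carol', 'Dave']

# A guard worth enforcing at ingestion time, before bad identities spread:
if not registry["vin"].is_unique:
    print("duplicate device identifiers detected; quarantine before ingestion")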
Industrial IoT’s “German Gladiator” Problem
0
industrial-iots-german-gladiator-problem-1d04d0fd9759
2018-01-24
2018-01-24 11:08:20
https://medium.com/s/story/industrial-iots-german-gladiator-problem-1d04d0fd9759
false
802
null
null
null
null
null
null
null
null
null
Industry 4 0
industry-4-0
Industry 4 0
1,042
Bright Wolf
Data-centric technology provider and system integration partner for industrial IoT and connected product solutions. http://brightwolf.com
2f63ba6a9939
bright_wolf
146
31
20,181,104
null
null
null
null
null
null
0
null
0
c0020e66c820
2018-06-25
2018-06-25 23:17:43
2018-06-27
2018-06-27 20:34:33
17
false
en
2018-06-28
2018-06-28 13:38:19
1
1d059427c96c
6.758491
62
0
0
We are at the beginning of the Fourth Industrial Revolution, and in the next decade, we’ll see everything changing more quickly than ever…
5
The Risepic Manifesto | A new equality-based future designed for Passionate People, Lifelong Learners and Culture-Driven Companies We are at the beginning of the Fourth Industrial Revolution, and in the next decade we'll see everything changing more quickly than ever before. GLOBALIZED COMPETITION The exponential growth of disruption by companies reinventing entire markets will change the standards of every industry more quickly than ever before. In the next decade, companies and employees will need to be competitive in a globalized scenario, no longer only by their own country's standards. EVER-CHANGING STANDARDS Standards and skills are changing more quickly than ever before. People will need to reinvent themselves repeatedly, with a "learn, unlearn, and relearn" approach, to stay competitive. BIO-TECH AND AUGMENTED TECHNOLOGY Biotechnology is changing our techniques and our awareness of our own bodies. Meanwhile, wearable technology already sounds old, and augmented-body technology is rising faster than we can imagine. So after this decade, our bodies will probably be quite different from today's… BLOCKCHAIN AND TRUSTLESSNESS We're starting to trust algorithms to validate value, such as cryptocurrencies, and logic, with decentralized applications and decentralized data. In the next decade this technology will become scalable and will reinvent our concepts of data, fake news, and reality: pieces of information validated not by trusting any entity, but simply because things happened. DECENTRALIZED AI Artificial intelligence: our best companion from some points of view, and at the same time a possible villain from others. This technology will achieve its best results, especially in prediction, thanks to validated and decentralized information. Imagine what this powerful technology could achieve in the next years with clean data and a massive amount of distributed computational power, or even a quantum architecture. AUTOMATION Automation and machine learning algorithms are already an imperative for every company, and decentralized AI is the next step. This scenario will change our concept of work more quickly than ever before, replacing every non-creative job. EXPONENTIAL GAP The real problem to mitigate in this decade is the cultural gap between countries: the faster the standard of innovation rises, the more this distance increases. Innovation and technology are saving us from the problems of globalization, but people in countries without a tech culture are starting to fear it. Could we reach a workless dystopian scenario? FROM A WORK-BASED ECONOMY TO A CREATIVITY-BASED ECONOMY This period is a transition from a society based on repetitive jobs and hourly pay to a new work order based on what we cannot automate: uniqueness and creativity. If you don't fear it, this transition could lead to an ideal scenario: being freed from repetitive jobs and replacing them with new ones where you can express your creativity and begin to love what you do. AN ENTIRE WORLD OF THINGS TO BE REINVENTED There are many new opportunities in innovation; we live in a world where everything needs to be disrupted, not only in high tech but in every market. The challenge is to build tools that help people understand that they can make great things if they change the way they work, without fear of new approaches. Learn, unlearn, and relearn is the key!
We want to build a new, faster, cheaper, and even better opportunity for people to constantly reinvent themselves and become competitive in this new innovative era, regardless of their country, cultural gap, family wealth, or personal connections. BUILDING A PERSONALIZED, LIFELONG LEARNING ECOSYSTEM FOR EVERYONE We believe the future of learning will be personalized and lifelong, removing every friction, cultural wall, and piece of fake news between the best teachers, professionals, and content and lifelong learners, and achieving our goal of promoting equality in learning rights and work opportunities based on the attitudes, knowledge, and expertise that make each person unique. HELPING COMPANIES MATCH PEOPLE BASED ON NON-SELF-REPORTED INFORMATION The problem to solve is helping personalized, lifelong learning become a new standard with broad mass adoption. To achieve this goal, we need to remove every friction from the data we can capture from each learning experience and provide it to companies, replacing complicated and expensive recruiting and outsourcing procedures with free and straightforward matchmaking. HELPING EDUCATIONAL PLAYERS BUILD AN AWESOME EXPERIENCE Gameful dynamics and social interaction are an imperative in the online industry. The challenge is to develop technology that learns from the entertainment industry, building a culture and an experience that empower ed-tech players to focus only on what they love to do: products and mentorship. TRANSFORMING A STANDARDIZED APPROACH INTO A NEW, DATA-DRIVEN, DYNAMIC ONE Certifications and standardized tests no longer work; they don't produce any interesting information relevant to modern companies' needs. In today's socio-economic period it is critical to build a frictionless connection between education and learning providers and companies' HR, capturing new smart data from learning products based on modern HR needs (soft skills, mindset, creativity, uniqueness). ACHIEVING THE THIRD GENERATION OF HUMAN POTENTIAL Once upon a time, there was a society where the world of work, the methodologies for getting things done, and the tools were static and unchanging. Standardized testing and generalist degree plans were designed in that period. Today, modern companies need specific information on soft and hard skills and on the uniqueness of people, in a new scenario where methodologies and tools are changing more quickly than ever before. In this scenario, companies have to spend thousands of dollars and endure slow HR procedures to find a single employee or freelancer, and still face a 30% gap between their expectations and the candidates the recruiter is able to propose. We need to embrace a new generation of methodology to close this vast gap, removing every friction between learning experiences, tasks, and job positions, thanks to AI and blockchain technology and the idea of building a matchmaking platform that connects lifelong learners with learning content and company teams. We call it the Third Generation of Human Potential. BUILDING DECENTRALIZED, TRUSTLESS AI ALGORITHMS TO TRACK THE QUALITY OF CONTENT WITHOUT TRUSTING ENTITIES We have designed the Proof of Potential to address this critical threat: decentralized machine learning algorithms designed to work with action-based, validated data about the human potential (knowledge and expertise) of users. With the P.O.P. we can finally track the real quality of content, and how deep it runs in every context, without trusting entities.
Basically, for every action that happens within a piece of content, we track the level of knowledge and expertise, in that context, of the user who interacts with it. Imagine how this idea could help the internet defeat fake news and other ethical problems without creating a lobby of trusted "validators of reality". THE "LIGHT": A CRYPTO-TOKEN DESIGNED TO DEVELOP WORLDWIDE EQUALITY IN LEARNING RIGHTS We're developing a trustless system for building an abundance of value to help people without economic means study without debt or other old-school mechanisms. We've called it Community Driven Investments: a wallet powered by fees from every transaction in LIGHT. The Community Driven Wallet will allow users to earn for their participation and, at the same time, help LIGHT holders get more LIGHT simply by choosing where to invest the community wallet's funds, without spending their own LIGHT but holding it for a specific amount of time and earning from the participation of the funded user. A NEW SCENARIO WHERE EVERYONE CAN FOLLOW THEIR PASSIONS AND HAVE AN AWESOME IMPACT ON THE WORLD Risepic will provide lifelong learners with the big picture of trends and standards and let them grow their reputation across different kinds of learning content, independently of their country's culture, limits, or wealth. Companies will be matched with the best people in a new, more accurate, and faster way. Teachers and educational organizations will use the data to build better products, build amazing communities, and match their products with the right customers. AT RISEPIC WE WANT TO EMPOWER THE LEARNING ECOSYSTEM TO MITIGATE THE CULTURAL GAP AND ACCELERATE THE TRANSITION TO A NEW INNOVATIVE, MERITOCRATIC, AND CONNECTED SOCIETY. More at Risepic.com
The Risepic Manifesto | A new equality-based future designed for Passionate People, Lifelong…
2,621
the-risepic-manifesto-a-new-equality-based-future-designed-for-passionate-people-lifelong-1d059427c96c
2018-06-28
2018-06-28 20:12:53
https://medium.com/s/story/the-risepic-manifesto-a-new-equality-based-future-designed-for-passionate-people-lifelong-1d059427c96c
false
1,367
The Third Generation of Human Potential
null
risepicOfficial
null
Risepic
info@risepic.com
risepic
LIFELONG LEARNING,PERSONALIZED LEARNING,BLOCKCHAIN TECHNOLOGY,ARTIFICIAL INTELLIGENCE,STARTUP
null
Personalized Learning
personalized-learning
Personalized Learning
437
Alessandro Mario Lagana Toschi
Co-Founder at Risepic | Co-Founder at Decentralized AI Community | Blockchain and AI Enthusiast
b5e20e64ca68
alet89
545
103
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-01
2018-07-01 08:17:16
2018-07-01
2018-07-01 08:19:13
1
false
ru
2018-07-01
2018-07-01 08:19:13
3
1d06706aef37
1.554717
0
0
0
According to the McKinsey Global Institute, the new stage of the digital era has brought an informal hierarchy: companies that before the rest…
3
Companies that adopted AI technologies before others have an advantage over their competitors According to the McKinsey Global Institute, the new stage of the digital era has brought an informal hierarchy: companies that began applying artificial intelligence before everyone else report higher profits. They may also gain an insurmountable advantage over their competitors. Why it matters: Enterprises that cannot quickly adopt artificial intelligence to accelerate their growth fall further and further behind those already making active use of these technologies. Their opportunities and chances of attracting talented employees shrink, and the workforce gathers around a few "star" companies. What's happening: The report does not name the companies already applying the new technologies, or their industries; however, in terms of AI research, nine companies, all located in the US and China, are far ahead of their competitors. These organizations employ the best people, and other enterprises have practically no chance of competing with them. The list includes Google, Microsoft, Baidu, and Tencent. "Progressive companies that use AI have higher profits. They see artificial intelligence as an opportunity for growth, since the technologies improve quality and create entirely new products. The rest see it merely as an opportunity to cut costs." — Susan Lund, co-author of the report. What the report says: The McKinsey Global Institute surveyed 3,031 executives in seven Western countries across North America and Europe. 71% of participants who count themselves among the "progressive companies" say they expect profit growth of more than 10%. According to the McKinsey Global Institute, such answers point to the imminent arrival of giant companies built on AI technologies. An excerpt from the report: "The growth and profit trends of companies using AI will continue. If they then reinvest these earnings in developing the technology, they will gain an insurmountable advantage over competitors. In this way, the remaining companies will come to understand the role of AI in business." The downside: The researchers believe that the growing dominance of large technology companies is linked to wage stagnation in the US and worldwide, since other companies cannot pay the same salaries as their dominant competitors. The McKinsey Global Institute fears the same will happen with AI: "The wages offered by lagging firms are lower because they do not exploit the economic advantages of the technologies the way their successful competitors do," the report states. Source
Companies that adopted AI technologies before others have an advantage over…
0
companies-that-used-ai-technologies-before-others-1d06706aef37
2018-07-01
2018-07-01 08:19:13
https://medium.com/s/story/companies-that-used-ai-technologies-before-others-1d06706aef37
false
359
null
null
null
null
null
null
null
null
null
Business
business
Business
153,000
Ruslan Gafarov
Founder Malikspace.com
7ddafc1a553d
Gafarov
39
3
20,181,104
null
null
null
null
null
null
0
null
0
7f60cf5620c9
2018-01-15
2018-01-15 15:43:17
2018-01-15
2018-01-15 15:44:09
9
false
en
2018-01-20
2018-01-20 11:07:44
8
1d0699de1a9d
4.320755
17
0
1
Building a Convolutional Neural Network (CNN) is not a big challenge; it's something any data scientist or machine learning engineer can do. Once…
5
Convolutional Networks for everyone source: MathWorks (https://goo.gl/zondfq) Building a Convolutional Neural Network (CNN) is not a big challenge; it is something any Data Scientist or Machine Learning engineer can do. Once someone understands its architecture, it is simple to implement for solving an Artificial Intelligence (AI) or Machine Learning (ML) problem. This post aims to make the CNN architecture easily understandable without going much into the math. Artificial Neural Networks to Convolutional Networks Source: https://goo.gl/aX44Z1 In an ANN there is an input layer whose size is the length of the input vector (e.g. 28 x 28 = 784 neurons). Let's see how convolutional networks differ from ANNs. How are CNNs different from ANNs? 1. ConvNet architectures make the explicit assumption that the inputs are images. 2. Their architecture is different from feedforward neural networks, making them more efficient by reducing the number of parameters to be learnt. 3. In an ANN, if you have a 150x150x3 image, each neuron in the first hidden layer will have 67,500 weights to learn. 4. ConvNets have a 3D volume of neurons, and the neurons in a layer are only connected to a small region of the layer before it. ConvNets The neurons in the layers of a ConvNet are arranged in 3 dimensions: height, width, depth. Depth here is not the depth of the entire network; it refers to the third dimension of the layers and hence the third dimension of the activation volumes. In essence, a ConvNet is made of layers which have a simple API — transform a 3-D input volume to a 3-D output volume with some differentiable function which may or may not have parameters. Source: Stanford University (https://goo.gl/rHmTSP) Layers in a CNN Input layer- consists of Height x Width x Depth (R,G,B) Convolutional layer- connected to a small region of the input Activation- ReLU activation used in the CNN layer Pooling layer- used for downsampling along the width and height Fully connected layer Architecture of CNN source: MathWorks (https://goo.gl/zondfq) Kernels or Filters source: Stanford University (https://goo.gl/g8FV4M) A filter is represented by a vector of weights with which we convolve the input. You can increase the number of filters on the input volume to increase the number of activation maps you get. Each filter gives you an activation map, and each activation map tries to learn a different aspect of the image, such as an edge, a blotch of colour, etc. If, on a 32x32x3 image volume, you apply 12 filters of size 5x5x3, then the first convolutional layer will have dimension 28x28x12 under certain conditions (stride 1, no padding). Several filters are used to extract several features in a convolution layer of a NNet. The step size with which a filter (e.g. a 3x3 matrix) slides over the input is called the "stride". Activation Functions The activation function is usually an abstraction representing the rate of action-potential firing in the cell. There are mainly linear and non-linear activations. Without non-linearity, the neural network would be no more powerful than a single linear layer, so an activation function that introduces non-linearity is needed. Rectified Linear Units Layer (ReLU) As in feedforward neural networks, the purpose of an activation layer in a ConvNet is to introduce non-linearity. R(z) = max(0,z) is the equation of the ReLU activation. Consider two integers, one positive and one negative: Positive: R(1) = max(0,1) = 1 → positive output. Negative: R(-1) = max(0,-1) = 0 → zero output. ReLU is cheap to compute, enabling the network to train faster.
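A quick check of the shape arithmetic and the ReLU examples above; the helper conv_output_size is my own, using the standard formula (W - F + 2P) / S + 1.

def conv_output_size(w, f, p=0, s=1):
    # Spatial output size of a convolution: (W - F + 2P) / S + 1.
    return (w - f + 2 * p) // s + 1

# 32x32x3 input, 12 filters of size 5x5x3, stride 1, no padding:
side = conv_output_size(32, 5)   # -> 28
print(side, side, 12)            # first conv layer volume: 28x28x12

def relu(z):
    # R(z) = max(0, z)
    return max(0, z)

print(relu(1))    # 1: a positive input passes through unchanged
print(relu(-1))   # 0: a negative input is clipped to zero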
Softmax Activation Softmax is a logistic-family activation function used for multiclass classification: softmax(z)_i = exp(z_i) / Σ_j exp(z_j). The softmax function is applied in the last layer of the network to turn class scores into probabilities, and the class with the maximum probability is taken as the prediction. Pooling Layer source: Wiki (https://goo.gl/snMC4o) source: Towards Data Science (https://goo.gl/xohkdV) Pooling is used for downsampling along the width and height of the image, but the depth remains the same. There are mainly three types of pooling: min, max, and average pooling. The pooling layer works on each depth slice independently and resizes it using the specified mathematical operation, such as max or average. Fully Connected Layer Finally, after several convolutional and max pooling layers, the high-level reasoning in the neural network is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular neural networks. Their activations can hence be computed with a matrix multiplication followed by a bias offset. A softmax activation is used in the final fully connected layer to take the maximum probability and make a prediction. Overfitting Problem Overfitting shows up in the classification accuracy on the training data: if the training accuracy outperforms the test accuracy, it means the model is learning the details and noise of the training data and fitting the training data specifically. source (Rutger Ruizendaal, Towards Data Science, https://goo.gl/87as34) To reduce overfitting → add more data; use data augmentation; use architectures that generalize well; add regularization (mostly dropout; L1/L2 regularization is also possible); reduce architecture complexity. Implementation of CNN Let's implement a CNN on the MNIST dataset, using TensorFlow in Python. To download the data you can go to http://yann.lecun.com/exdb/mnist/. More links: www.github.com/rohanthomas www.rohanthomas.me http://www.deeplearningbook.org/ (book by Ian Goodfellow, Yoshua Bengio, Aaron Courville)
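For completeness, a small NumPy sketch of the softmax described above, with the usual max-subtraction for numerical stability; the example logits are invented.

import numpy as np

def softmax(z):
    # softmax(z)_i = exp(z_i) / sum_j exp(z_j), shifted by max(z) for stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # raw class scores from the last layer
probs = softmax(logits)
print(probs, probs.sum())            # probabilities that sum to 1
print(int(np.argmax(probs)))         # predicted class: index of max probability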
Convolutional Networks for everyone
56
convolutional-networks-for-everyone-1d0699de1a9d
2018-07-13
2018-07-13 15:00:18
https://medium.com/s/story/convolutional-networks-for-everyone-1d0699de1a9d
false
827
Sharing concepts, ideas, and codes.
towardsdatascience.com
towardsdatascience
null
Towards Data Science
null
towards-data-science
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
TDataScience
Deep Learning
deep-learning
Deep Learning
12,189
Rohan Thomas
A young data scientist who trys to make this world a better place to live
f714c979b15b
rohanthomas.me
92
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-30
2017-11-30 01:11:30
2017-11-30
2017-11-30 01:14:32
2
false
en
2017-11-30
2017-11-30 01:14:32
12
1d0781dea832
2.655031
0
0
0
Here’s everything that’s new in artificial intelligence and computer vision, with a little tech pop culture to make the medicine go down…
4
AI Inspiration #13: YouTube’s Best AI Channels; Inside Apple’s Face ID; Truck Drivers of the Future Here’s everything that’s new in artificial intelligence and computer vision, with a little tech pop culture to make the medicine go down. Our logic is undeniable. “Yoo-hoo…I’ll make you famous.” Thanks to Facial Recognition, This Old Billy the Kid Picture Is Now Worth Millions The next time you pick up a photo from your local flea market, you may want to have it analyzed by computer vision. That’s how one North Carolina man discovered that a tintype picture he bought for $10 might actually be worth millions. Forensics experts ran the ferrotype through facial and image recognition software and discovered that two of the people in the picture were legendary outlaw Billy the Kid and his eventual killer, sheriff Pat Garrett, the only image ever featuring the two together. Talk about monetization. BBC News A Pet Monitor That Sees, Feeds and Trains Your Pooch Remote monitoring cameras that let you see your dog via live footage, or send you alerts via motion sensors, are not new, but Furbo has just taken it one step further. Featuring a snack dispenser that holds up to 100 treats you can shoot out remotely via a smartphone app, the dog-monitoring camera now uses computer vision to ID when a dog paces, eats, plays with another dog, and comes up to the camera. In other words, you can train your pooch to essentially smile for the camera to get a treat. TechCrunch About Face ID Running a 3D depth sensing camera to map faces for facial recognition is no easy task, and it’s made even more challenging when it runs locally (to protect privacy) and has to share resources with all the other operations running on your smartphone. Here, Apple explains how it pulled this off, which should ease privacy concerns, as long as the iPhone X maker doesn’t ever actually upload that training on your face to the cloud. Machine Learning Journal 5 AI YouTube Channels You Need to Follow Right Now Lost in AI? No problem. Just as it’s helped you learn how to repair your car, cook dinner and dress better, YouTube can also teach you about artificial intelligence. From programming tutorials to explainers on how neural networks work to segments on the latest image recognition milestones, the videos you’ll find on these YouTube channels will help you get up to speed. And they’re not all lectures, as evidenced by the punchy and hyperkinetic explainers and rapid-fire interviews on Artificial Intelligence Education, the channel run by YouTube creator Siraj Raval, the self-styled “Kanye of Code.” Check out the list, add them to your queue and happy viewing. The Visionary How to Keep on (Autonomous) Truckin’ Tesla’s recently announced electric truck can haul 80,000 pounds up to 500 miles on a single charge, which is cool and all, but what does that mean for the 600,000 human drivers in the Teamsters union when they’re out of work? This is a future social and political juggernaut that nobody is thinking about seriously, per various technology and labor experts. For those drivers who are open to it, however, the future may well be in front of a computer screen monitoring their autonomous rig from home, which could well beat a strung out all-nighter on the highway. Los Angeles Times Anyone you know interested in computer vision? Forward this to them so they can subscribe, too. And please submit any computer vision stories you think we’d be interested in posting. The Visionary newsletter is produced by GumGum.
AI Inspiration #13: YouTube’s Best AI Channels; Inside Apple’s Face ID; Truck Drivers of the Future
0
ai-inspiration-13-youtubes-best-ai-channels-inside-apple-s-face-id-truck-drivers-of-the-future-1d0781dea832
2018-04-06
2018-04-06 01:46:24
https://medium.com/s/story/ai-inspiration-13-youtubes-best-ai-channels-inside-apple-s-face-id-truck-drivers-of-the-future-1d0781dea832
false
602
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The Visionary
Weekly computer vision news, exclusive visual content and original feature-length articles on how AI intersects with your daily life, business and marketing.
5ef0074c3a03
thevisionary_73083
16
2
20,181,104
null
null
null
null
null
null
0
# Imports needed for the snippets below to run (not shown in the original post):
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def eraStandardize(self, dataset, model):
    # Apply a preprocessing function era by era and stack the results.
    placeholder = pd.DataFrame()
    # Extract the numeric part of each era label, e.g. 'era12' -> 12.
    era = set([''.join(x for x in element if x.isdigit()) for element in dataset['era']])
    era.discard('')
    era = [int(i) for i in era]
    maxera, minera = max(era) + 1, min(era)
    for i in range(minera, maxera):
        # Select only the feature columns for the current era.
        data = dataset[dataset['era'] == 'era{}'.format(i)][[f for f in list(dataset) if "feature" in f]]
        placeholder = placeholder.append(pd.DataFrame(model(data)))
    return placeholder

self.X_train = Preprocess().eraStandardize(self.training_data, Preprocess().StandardScaler)
self.X_test = Preprocess().eraStandardize(self.test_data, Preprocess().StandardScaler)
# The prediction data has no 'era' label, so it is scaled directly.
self.X_prediction = Preprocess().StandardScaler(self.X_prediction)

def advisory_screen(self, portion, train_x):
    # Era-by-era adversarial validation: keep the training rows that most
    # resemble the test set.
    model = RandomForestClassifier(n_estimators=50)
    X_test = self.X_prediction  # fixed: was self.x_prediction (wrong casing)
    sample_size_test = X_test.shape[0]
    idholder = pd.DataFrame()
    for i in range(1, 97):
        X_train = train_x[train_x['era'] == 'era{}'.format(i)][[f for f in list(train_x) if "feature" in f]]
        X_train_id = train_x[train_x['era'] == 'era{}'.format(i)].id.reset_index()
        sample_size_train = X_train.shape[0]
        # Label training rows 0 and test rows 1, then train a classifier
        # to tell them apart.
        X_data = pd.concat([X_train, X_test])
        Y_data = np.array(sample_size_train * [0] + sample_size_test * [1])
        model.fit(X_data, Y_data)
        pre_train = pd.DataFrame(data={'wrong-score': model.predict_proba(X_train)[:, 1]})
        pre_test = pd.DataFrame(data={'right-score': model.predict_proba(X_test)[:, 1]})  # unused, kept from original
        # Keep the top `portion` of training rows that look most like test data.
        num_data = round(portion * X_train.shape[0])
        test_alike_data = pd.concat([X_train_id, pre_train], axis=1)
        test_alike_data = test_alike_data.sort_values(by='wrong-score', ascending=False)[:num_data]
        # ---------- for control only ----------
        print('out of {0} training sample and {1} testing sample'.format(sample_size_train, sample_size_test))
        print('correct for training: {}'.format(sum([1 for i in model.predict_proba(X_train)[:, 1] if i < 0.5])))
        # print('correct for validation: {}'.format(sum([1 for i in model.predict_proba(X_test)[:, 1] if i > 0.5])))
        # print(pd.concat([test_alike_data.head(n=5), test_alike_data.tail(n=5)]))
        # print(pd.concat([test_class.head(n=5), test_class.tail(n=5)]))
        # --------------------------------------
        idholder = idholder.append(pd.DataFrame(test_alike_data.id), ignore_index=True)
    return train_x[train_x.id.isin(idholder.id)]

# Recursively screen until fewer than 75,000 rows remain.
self.X = self.training_data
while self.X.shape[0] >= 75000:
    self.X = Preprocess().advisory_screen(0.9, self.X)
    print(self.X.shape[0])
4
null
2017-09-01
2017-09-01 10:17:28
2017-09-01
2017-09-01 10:19:08
1
false
en
2017-09-01
2017-09-01 10:26:32
0
1d078cf5189c
3.264151
1
1
0
In the previous article, I demonstrated how to iteratively read in the data for the Numerai tournament, implement data…
3
Numerai Tutorial — II — Label Specific Preprocessing and Iterative Screening In the previous article, I have demonstrated the method to iteratively read in the data for Numerai tournament, implement data preprocessing, and high-level algorithms from scikit learn by creating a class variable. I have also included an implementation of adversarial validation, which is to intentionally select that most resemble the test data. The assumption is that train and test sets may come from different distributions, and we are given a big set of training data relative to the test data that we can possibly waste some without losing too much information. Machine learning is an exercise of garbage in and Garbage out(“GIGO”). If you feed too complex data into a algorithm which does a lot of logics and maths, chances are you would get some meaningless output, as algorithms are at the end of days, merely a qunatiative representation of information. Therefore, I am going to dig into the preprocessing part in this article. For the tournament, the training and test sets come with a label call “era”, which would determine our consistency score. If our prediction is consistent across all era, it would have a high consistency score, and vice versa. Valid submission should have at least 75% consistency. So the logic is we can take advantage of this label to do some customizing for prediction, as long as we preserve a 75% consistency. In the last article, I have included a function that preprocesses the data era by era, however that funciton does not fit for the test data so I updated it with the following one: Since we need to feed in both training and test data for the era-specific preprocessing, this function would work with both data for convenience. we can specify the datasets by the variable “dataset”, and the function to be applied on the data by “model”. This is an example implementation. both train and test data are fed into the custom eraStandardize function, using the scikitlearn StandardScalar function, while the prediction data is fed into the StandardScalar function directly, since we dont have the “era” label for the prediction data. Since the data is capable of adversarial validation, why dont we implement era-specific adversarial validation? So we screen out data for training era by era, in order to preserve the same percentage of data from each era. The code basically specifies the test data with a label of 1, and specify the training data with a label of 0. Then we apply the classifier to train the data era by era, select only the certain top range of training data. RandomForest with n_estimator 30 above performs quite well in the classification, usually losing only a few datapoint, but takes almost half an hour to complete one loop(all era for once) in my Mac when I use one core only. If you want an iterative way to screen out data, while each time only screen out a small portion, you can easily do it by recursively feeding the data back into the function, until the number of data meets your target. In fact that is what I would recommend because it includes more stability and avoids screening out a lot of information in one go. However it would take quite a few hours to complete then. Let me know if you have any question/comment on the above code. I am also looking for teammates for Kaggle/Numerai or in general buddies to learn machine learning together. Contact me if you are interested in coding and machine learning. I am also open to do financial trading analysis on interesting topics. 
If you have a trade idea but having difficulty to implement the back-test/finding the relevant datasets, feel free to contact me and we can chat about it.
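The code blocks themselves did not survive this export, so here is a minimal sketch of the two steps described above, assuming Numerai-style pandas DataFrames with an “era” column; the helper names eraStandardize and adversarialScreen and the keep_frac parameter are illustrative, not necessarily the original code:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

def eraStandardize(df, feature_cols):
    # Standardize the features within each era separately.
    out = df.copy()
    for era, idx in df.groupby("era").groups.items():
        out.loc[idx, feature_cols] = StandardScaler().fit_transform(
            df.loc[idx, feature_cols])
    return out

def adversarialScreen(train, test, feature_cols, keep_frac=0.9):
    # Era by era, keep the training rows that most resemble the test data.
    kept = []
    for era, grp in train.groupby("era"):
        X = pd.concat([grp[feature_cols], test[feature_cols]])
        y = [0] * len(grp) + [1] * len(test)  # 0 = train, 1 = test
        clf = RandomForestClassifier(n_estimators=30)
        clf.fit(X, y)
        p_test = clf.predict_proba(grp[feature_cols])[:, 1]  # "looks like test"
        n_keep = int(len(grp) * keep_frac)
        order = p_test.argsort()[::-1][:n_keep]  # highest scores first
        kept.append(grp.iloc[order])
    return pd.concat(kept)

The recursive screening recommended above would simply call adversarialScreen repeatedly, with keep_frac close to 1, until the training set shrinks to the target size.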
Numerai Tutorial — II — Label Specific Preprocessing and Iterative Screening
1
numerai-tutorial-ii-label-specific-preprocessing-and-iterative-screening-1d078cf5189c
2018-04-15
2018-04-15 17:58:35
https://medium.com/s/story/numerai-tutorial-ii-label-specific-preprocessing-and-iterative-screening-1d078cf5189c
false
812
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Chris Wong
calisthenics lover. Python developer. financial trading and meditation
cc85c70ce5f
chris_whirlwind
52
67
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-22
2018-09-22 09:41:27
2018-09-22
2018-09-22 11:41:38
3
false
en
2018-09-22
2018-09-22 11:41:38
0
1d07ebd908bb
1.300943
0
0
0
FlashText is a Python library for searching and replacing words in a document. You can use FlashText instead of the regex library for…
5
FlashText- For Natural Language Processing Related Tasks FlashText is a Python library for searching and replacing words in a document. You can use FlashText instead of the regex library to search for and replace words in a document or data frame. Why should you use the FlashText library? In any kind of text analysis, the first job is to clean the data, which mainly means replacing, removing, or counting the frequency of particular keywords. For all these tasks we usually use the regex library, but regex takes a lot of processing time, and as the size of the document and the number of keywords (terms) grow, regex processing time keeps increasing, whereas FlashText takes almost the same time. FlashText vs Regex FlashText is much faster than regex: almost 28x faster than a compiled regex for 1k keywords. The graph of the time taken by FlashText to replace terms, compared to regex, shows that as the number of terms increases, the time taken by regex keeps growing, whereas for FlashText it remains constant. Installing FlashText pip install flashtext Usage of FlashText with code Note, however, that FlashText cannot be used for every task we are used to doing with regex.
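The usage snippet did not survive this export; below is a short example of the documented FlashText API (KeywordProcessor), with made-up sample keywords:

from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keyword('Big Apple', 'New York')  # map a keyword to a clean name
kp.add_keyword('machine learning')       # or add it as-is

# Searching: returns the clean names of keywords found in the text.
print(kp.extract_keywords('I study machine learning in the Big Apple.'))
# ['machine learning', 'New York']

# Replacing: swaps each keyword for its clean name.
print(kp.replace_keywords('I love the Big Apple.'))
# 'I love the New York.'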
FlashText- For Natural Language Processing Related Tasks
0
flashtext-for-natural-language-processing-related-tasks-1d07ebd908bb
2018-09-22
2018-09-22 11:41:38
https://medium.com/s/story/flashtext-for-natural-language-processing-related-tasks-1d07ebd908bb
false
199
null
null
null
null
null
null
null
null
null
Regex
regex
Regex
547
Shubham Sinha
Data Scientist at Tata consultancy Services Innovation Lab
44f43df615bd
sinha7387
0
1
20,181,104
null
null
null
null
null
null
0
null
0
31f4f88d6548
2018-02-09
2018-02-09 15:12:28
2018-02-09
2018-02-09 15:16:04
1
false
en
2018-02-09
2018-02-09 15:18:44
0
1d08a903ce4e
2.139623
1
0
0
CDS’s Andreu Casas and colleagues use NLP to analyze the evolution of 104,005 non-enrolled bills in the US Congress
5
I got 99 problems — and the law is one: how can we measure legislative efficacy? CDS’s Andreu Casas and colleagues use NLP to analyze the evolution of 104,005 non-enrolled bills in the US Congress In 2014, The Washington Post published an article summarizing the career of retired US legislator Robert E. Andrews under a damning headline: “Andrews proposed 646 bills, passed 0: worst record of past 20 years.” Ouch. The statistic appears to suggest that Andrews’s career was a flop — but as CDS’s Data Science Fellow Andreu Casas explained at a recent Moore Sloan Research Lunch Seminar, new conclusions arise once we consider how the legislative system functions as a whole, and use a more nuanced approach to analyzing the data about successful and unsuccessful legal bills. Presently, researchers use a metric named the Legal Effectiveness Score (LES) to analyze the efficacy of bills and legislators. The LES scoring system measures how far a particular bill advances within the complex multi-step legislative process. “But,” as Casas reminded us, “a bill is a vehicle for policy ideas and not necessarily a policy idea itself.” What LES does not account for is that the main ideas of several bills are usually extracted and then inserted into other larger bills that do eventually become law. Moreover, the text, meaning, and intention of the ideas often remain intact when incorporated into larger bills. With this in mind, analyzing the evolution of these ‘hitchhiker’ bills, as Casas and co-authors Matthew Denny and John Wilkerson called them, instead of simply counting how many bills passed and failed, would be a more accurate way of measuring legislative efficiency. The question is, how can this be done? After compiling a dataset of 104,005 versions of non-enrolled bills and 4,073 enrolled bills from the 103rd to 113th Congress, between 1993 and 2014, Casas and colleagues tracked the insertion of non-enrolled bills into laws using an ensemble of NLP algorithms (that boast a 95% accuracy rate!). Essentially, these algorithms first pre-process the text of bills, reducing them to their core expression, and then evaluate the extent to which the full meaning of each non-enrolled bill has been inserted into a bill that became law in that same Congress. Their investigation yielded some revealing conclusions. For example, not only do more senate bills become law as hitchhikers on house laws (1,118) than when enacted on their own (1,037), but they often become law when included in a bill that concerns a different topic. The key role that hitchhiker bills play in forming larger bills suggests, Casas concluded, that the legislative system is more decentralized and less partisan than we think. When taking these hitchhikers into account, we see that legislation is shaped by more viewpoints, interests, and people. An open question, however, is whether we have the right people in Congress. Will data science one day have the power to identify the ill-intentioned from the heroes? Well, it’s not a reality yet — but one can certainly dream. by Cherrie Kwok
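The post does not reproduce the authors' ensemble, but as a purely illustrative stand-in, one simple way to quantify whether one bill's text has been absorbed into another is word n-gram containment after light preprocessing; this sketch is an assumption for illustration, not the method from the paper:

import re

def ngrams(text, n=5):
    # Lowercase, strip punctuation, and collect the set of word n-grams.
    words = re.sub(r'[^a-z0-9 ]', ' ', text.lower()).split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(small_bill, big_bill, n=5):
    # Fraction of the smaller bill's n-grams that also appear in the larger bill.
    small = ngrams(small_bill, n)
    return len(small & ngrams(big_bill, n)) / len(small) if small else 0.0

# A score near 1.0 would suggest the first bill "hitchhiked" into the second.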
I got 99 problems — and the law is one: how can we measure legislative efficacy?
5
i-got-99-problems-and-the-law-is-one-how-can-we-measure-legislative-efficacy-1d08a903ce4e
2018-02-09
2018-02-09 15:28:17
https://medium.com/s/story/i-got-99-problems-and-the-law-is-one-how-can-we-measure-legislative-efficacy-1d08a903ce4e
false
514
This is the official research blog of the NYU Center for Data Science (CDS). Established in 2013, we are a leading data science training and research facility, offering a MS in Data Science and, as of 2017, one of the nation’s first universities to offer a Ph.D. in Data Science.
null
nyudatascience
null
Center for Data Science
ab4829@nyu.edu
center-for-data-science
DATA SCIENCE,DATA MINING,TECHNOLOGY,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
NYUDatascience
Politics
politics
Politics
260,013
NYU Center for Data Science
Official account of the Center for Data Science at NYU, home of the Master’s and Ph.D. in Data Science.
880781a85c2
NYUDataScience
3,530
9
20,181,104
null
null
null
null
null
null
0
Mean temperature (.1 Fahrenheit)
Mean dew point (.1 Fahrenheit)
Mean sea level pressure (.1 mb)
Mean station pressure (.1 mb)
Mean visibility (.1 miles)
Mean wind speed (.1 knots)
Maximum sustained wind speed (.1 knots)
Maximum wind gust (.1 knots)
Maximum temperature (.1 Fahrenheit)
Minimum temperature (.1 Fahrenheit)
Precipitation amount (.01 inches)
Snow depth (.1 inches)
Indicator for occurrence of: Fog, Rain or Drizzle, Snow or Ice Pellets, Hail, Thunder, Tornado/Funnel Cloud

# The public FTP address provided by the NOAA: ftp.ncdc.noaa.gov
# Using python 2.7
import ftplib
import os

startYear = loopingYear = 1903
endYear = 2018

print 'Starting connection to NOAA database'

# Try connecting to the server
try:
    ftp = ftplib.FTP('ftp.ncdc.noaa.gov')
    ftp.login()
    print 'Connect successful'
except ftplib.all_errors, e:
    errorcode_string = str(e).split(None, 1)[0]

# Find out where you are and the files in the directory
print 'Current working directory is %s' % ftp.pwd()
print 'Listing files in current directory'
ftp.retrlines('LIST')

# Change to the GSOD directory to get your data
print 'Changing directory to "/pub/data/gsod/"'
ftp.cwd('/pub/data/gsod/')
print 'Current working directory is %s' % ftp.pwd()
all_files = ftp.nlst()

# Check if a directory exists to put all the files in;
# if none exists, make it, otherwise ignore
directoryName = 'GSOD Data'
if not os.path.exists(directoryName):
    os.makedirs(directoryName)

# Move into the folder
directoryPath = '%s/%s' % (os.getcwd(), directoryName)
os.chdir(directoryPath)

print 'Searching directories for an individual year\'s weather data'
while loopingYear <= endYear:
    tempDirectory = '/pub/data/gsod/%s' % loopingYear
    tempFileName = 'gsod_%s.tar' % loopingYear
    outFile = open(tempFileName, 'wb')
    ftp.cwd(tempDirectory)
    print 'Downloading %s' % loopingYear
    try:
        ftp.retrbinary('RETR %s' % tempFileName, outFile.write)
        print 'Successfully downloaded %s' % loopingYear
    except ftplib.all_errors, e:
        print 'Error downloading %s' % loopingYear
        errorcode_string = str(e).split(None, 1)[0]
    outFile.close()
    loopingYear += 1

print 'All files downloaded'
print 'Closing connection'
# Close the connection ... it's just good practice
ftp.close()
17
null
2018-04-09
2018-04-09 14:37:12
2018-04-09
2018-04-09 15:30:35
3
false
en
2018-04-09
2018-04-09 15:30:35
1
1d096b061ef3
3.704717
1
0
1
Connecting to an FTP is easy with Python, especially if you’re using the service to gather data for a project.
5
FTP Access with Python Connecting to an FTP is easy with Python, especially if you’re using the service to gather data for a project. In this tutorial we’re going to go over: Connecting to an FTP; Finding the files you need on the server; And, downloading the files. The data we need comes from the US Department of Commerce’s National Oceanic and Atmospheric Administration. It is the Global Summary of the Day (GSOD), based on over 9,000 stations from around the world, dating back to 1903. Current Homepage for the NOAA Each GSOD file holds data from a corresponding station’s location and time of year, on the following variables: Dope. We can get the data by downloading individual files, but with over 100 years of data and thousands of locations that is going to take a long time. Let’s connect to the NOAA’s FTP and download everything at once. We’re going to use the library ftplib, which makes it almost too easy to connect to FTPs. The local variables below (startYear, loopingYear, and endYear) are all set after reviewing the GSOD read me and finding out what years and files we want to grab. The specific files we want are the annual summary files, which hold all the year’s files, compressed by the different monitoring stations. They all have the same naming structure for both files and individual years, so it will be easy to find and save them. E.g. 2006’s file is called gsod_2006.tar and lives in the 2006 folder. We also find out from the read me that we don’t need a login or password. Let’s start the project: If no errors are caught you should be on the server, but where are you exactly, and what is in the directory? Similar to your terminal, pwd() and nlst() are going to help you understand the directory you connected to. Once you know those you can cwd() around into different directories. I’m not going to make you look around, as the data we need is in /pub/data/gsod. There are many other datasets in /pub/data/; however, we’re going to stick with the GSOD for now. Keeping things visual will help you understand the file structure Before we start pulling down all the data, let’s make sure we have a place to put all our files. Now we can start downloading all the data into a single folder. Luckily for us everything has an easy naming convention as shown above. We’re using the built-in open(filePath, mode = '') to set the path to write the file to. We have also set the mode to wb so the files are saved with no modification; it is a safer operation this way. Then, the ftp.retrbinary() will retrieve the file in binary transfer mode and write it to disk. The code will loop until we’re at 2018 — our current endYear — and then close the connection to the server. Looping through each year And … there you go! Not only was this easy, you did not have to use any code not included in the standard Python library. The steps here can be easily transferred to other FTP servers you need to scrape data from. And, right now you should have a folder full of GSOD data for you to use for your next data science project. Cheers.
FTP Access with Python
1
ftp-access-with-python-1d096b061ef3
2018-04-09
2018-04-09 15:30:36
https://medium.com/s/story/ftp-access-with-python-1d096b061ef3
false
836
null
null
null
null
null
null
null
null
null
Ftp
ftp
Ftp
183
Robert R.F. DeFilippi
Sometimes Chef ◦ Sometimes Data Scientist ◦ Sometimes Developer
8e46cdd91cd4
rrfd
211
131
20,181,104
null
null
null
null
null
null
0
null
0
6b7116b1d1
2018-01-10
2018-01-10 14:26:01
2018-01-11
2018-01-11 11:43:42
2
false
en
2018-01-11
2018-01-11 11:43:42
0
1d09917d232d
1.545597
14
0
0
At SearchInk we are starting to see big growth in the number of machine learning and deep learning models being generated. The reason for…
4
Managing AI Models At SearchInk we are starting to see big growth in the number of machine learning and deep learning models being generated. The reason is that we are running numerous experiments while simultaneously deploying certain models into production. At some point, we asked ourselves the following questions: Which model should we deploy in production? Where is the model stored? Where are the parameters and evaluations of the model stored? To answer these questions, we ideated on a solution we considered to be simple and scalable. Our thought process was as follows: Fig 1: Thought process We cannot store models in our git repositories as they are too large. The better alternative is to store them in cloud storage buckets; we are currently using Google Storage Buckets for this, and we select models from here for our deployments. We also need to store the metadata associated with the model, such as training/validation/test accuracy and loss, the dataset it was trained on, the hyper-parameters, and other metrics like precision, recall, and AUC ROC, to name but a few. We decided to store this in MongoDB. The final part was having a simple dashboard that provides an overview of all the models. Fig 2: Our first version of the dashboard It is common to see models in production that do not perform as well as previously deployed models. With this framework it is a breeze for us to switch back to previous models. In forthcoming versions, we will be adding filters and various evaluation metrics to this dashboard, which will help us compare models and allow us to see the results they generate. This, along with an admin console where non-tech folks can manage models based on various criteria, will be an even greater win. It will help in checking how a model performs on sample datasets, which they can upload autonomously.
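As a rough sketch of the flow described above (model binary in a cloud bucket, metadata in MongoDB), assuming the google-cloud-storage and pymongo clients are available; the bucket name, collection name, model names, and metric values are placeholders, not SearchInk's actual code:

from google.cloud import storage
from pymongo import MongoClient

def register_model(local_path, model_name, version, metrics):
    # 1) Upload the model binary to a Cloud Storage bucket.
    bucket = storage.Client().bucket('my-model-bucket')  # placeholder bucket
    blob_path = 'models/%s/%s' % (model_name, version)
    bucket.blob(blob_path).upload_from_filename(local_path)
    # 2) Store the model's metadata in MongoDB for the dashboard to query.
    db = MongoClient('mongodb://localhost:27017')['model_registry']
    db.models.insert_one({
        'name': model_name,
        'version': version,
        'gcs_path': 'gs://my-model-bucket/' + blob_path,
        'metrics': metrics,  # accuracy, loss, precision, recall, AUC ROC, ...
    })

# Example: register a trained model together with its evaluation metrics.
register_model('model.h5', 'document-classifier', 'v3',
               {'val_accuracy': 0.91, 'auc_roc': 0.95})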
Managing AI Models
80
managing-ai-models-1d09917d232d
2018-04-15
2018-04-15 10:00:09
https://medium.com/s/story/managing-ai-models-1d09917d232d
false
308
Engineering Blog for omnius
null
null
null
omni:us
nischal@omnius.com
omnius
DEEP LEARNING,MACHINE LEARNING,COMPUTER VISION,NLP,COMPUTER ENGINEERING
omniusHQ
Machine Learning
machine-learning
Machine Learning
51,320
Nischal HP
VP, Engineering at Omnius | Bangalorean living in Berlin | Data and Music inclined
ce15d187c7ec
nischalhp
124
34
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-06
2018-07-06 00:39:21
2018-07-06
2018-07-06 02:19:47
3
false
zh-Hant
2018-07-06
2018-07-06 02:19:47
6
1d0cb1e928d7
0.787736
0
0
0
Today's topic: the conditional GAN and the GAN zoo
2
Day 46 — Conditional GAN and GAN Zoo Today's topic: the conditional GAN and the GAN zoo References hindupuravinash/the-gan-zoo Self-study GAN in one day (一日自學 GAN) Keras implementations of 17 GAN variants (17種GAN變體的Keras實現) Mirza, Mehdi, and Simon Osindero. “Conditional generative adversarial nets.” arXiv preprint arXiv:1411.1784 (2014). Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in neural information processing systems. 2014. Notes After finishing the neural network zoo, today I found a new zoo to wander through! The last zoo took me about a month to get through, and this new one is no less impressive, so it looks like I can happily keep "studying" (read: goofing off) for another few weeks. The list is so long that the zookeeper has to sort it alphabetically, and copying it over would blow up the length of this post, so please see the Readme on the front page of [1]. With this many GAN variants, introducing them one by one would easily take 100 days, so I plan to pick out a few highlights to write about. There are many variants, but according to [2], no GAN truly surpasses the original in every sense; different GANs win in different fields and use cases. That alone shows how remarkable Goodfellow is: four years after the algorithm was proposed, no one has genuinely surpassed it. Like this post, [2] is a round-up that collects quite a few good resources; [1] is in fact on its list too. Just working through that list would supply more topics than I could ever finish. Today's variant is the Conditional Generative Adversarial Network (CGAN). Conceptually it makes only a tiny change to the original, yet it opens up far broader applications. CGAN rewrites the probabilities in the objective function as conditional probabilities, which is a bit like adding a constraint to the original objective so that it converges faster. Here are the CGAN and original GAN objectives side by side: GAN: [5] CGAN: [4] CGAN architecture diagram: [4] The condition y in the conditional probability can be anything given alongside the data; for example (perhaps a Flickr use case), a photo uploaded by a user together with the caption or hashtags the user wrote is one kind of condition. Given the condition, the generator/discriminator networks can learn from it and converge to stability faster and more effectively. Honestly, adding a conditional probability to the math was enough to publish a paper, and what came out of it is genuinely usable and very useful. That is impressive indeed.
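The objective functions referenced as [5] and [4] above were images in the original post; the standard forms from those two papers are:

GAN [5]:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

CGAN [4] (the same game, conditioned on y):
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x \mid y)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z \mid y)))]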
Day 46 — Conditional GAN and GAN Zoo
0
day-46-conditional-gan-and-gan-zoo-1d0cb1e928d7
2018-07-06
2018-07-06 02:19:48
https://medium.com/s/story/day-46-conditional-gan-and-gan-zoo-1d0cb1e928d7
false
63
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Falconives
null
250d8013fad2
falconives
11
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-28
2018-02-28 11:37:52
2018-02-28
2018-02-28 11:42:45
2
false
en
2018-02-28
2018-02-28 11:43:41
11
1d0cd95edf1d
0.621069
7
0
0
Watch interview here:
5
Invacio CEO W. James LIVE @ Blockchain Forum India Watch interview here: https://www.youtube.com/watch?v=0UHY-tyTbXU&feature=youtu.be Great Interview! Enormous Project Potential! Visit Invacio website: https://invest.invacio.com/ Join telegram Group: https://t.me/InvacioICO Follow Invacio on Facebook: https://www.facebook.com/InvacioNetwork/ Stay up to date with the most important news, analysis, reviews and ICO projects! Follow us: Website: http://www.cryptoorders.com/ Facebook: https://www.facebook.com/pg/cryptoorders/about/?ref=page_internal Twitter: https://twitter.com/CryptoOrders Telegram: https://t.me/joinchat/GqIhGEsTFXqFnc9xH55kxA Medium: https://medium.com/@cryptoorders Youtube: https://www.youtube.com/channel/UCwHmDgHvXcLR8MzvD8cE62w Email: support@cryptoorders.com
Invacio CEO W. James LIVE @ Blockchain Forum India
222
invacio-ceo-w-james-live-blockchain-forum-india-1d0cd95edf1d
2018-05-19
2018-05-19 10:44:53
https://medium.com/s/story/invacio-ceo-w-james-live-blockchain-forum-india-1d0cd95edf1d
false
63
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Crypto Orders
ICO Projects, Reviews, Articles & Interviews, Trading and Market Analyse, so you can be updated with the best investment opportunities.
4bbd88303bd6
cryptoorders
68
53
20,181,104
null
null
null
null
null
null
0
null
0
58bcbdecc57a
2018-04-04
2018-04-04 22:19:32
2018-04-04
2018-04-04 22:19:29
1
false
en
2018-04-04
2018-04-04 22:19:33
1
1d0d05e73987
0.29434
0
0
0
null
4
A.(llucinazion)I. (parte seconda) A CHAT WITH CASSANDRA A digital column by Marco Calamari cassandra@cassandracrossing.org Originally published at Aneddotica Magazine — Collaborative Blog since 2012.
A.(llucinazion)I. (parte seconda)
0
a-llucinazion-i-parte-seconda-1d0d05e73987
2018-04-04
2018-04-04 22:19:35
https://medium.com/s/story/a-llucinazion-i-parte-seconda-1d0d05e73987
false
25
Magazine on line from 2012 — History, education, business, activism, tech & freedom enabling technologies.
null
aneddoticamagzine
null
Aneddotica Magazine
null
aneddotica-magazine
HISTORY,ACTIVISM,TECHNOLOGY,SCIENCE,POLITICS
aneddotica
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Aneddotica Magazine
Blog collaborativo dal 2012 Storia, economia, attivismo ed informatica - Economy, history, technology and human rights.
c97529d052de
aneddoticamagaz
71
688
20,181,104
null
null
null
null
null
null
0
null
0
eaea894300
2018-08-28
2018-08-28 14:35:29
2018-08-28
2018-08-28 14:44:00
3
false
en
2018-08-29
2018-08-29 21:36:04
5
1d109abb4818
3.738679
1
0
0
The current promise of AI is to augment sales. It’s not a distraction to the process if you know how to use it. And it’s certainly not…
2
How AI can augment B2B Sales The current promise of AI is to augment sales. It’s not a distraction to the process if you know how to use it. And it’s certainly not eliminating human ingenuity. Photo by Alex Knight on Unsplash Every time you pair AI with something else in a headline, the topic becomes harder to explain, mainly because everybody’s understanding of the term AI, and of the field in general, is vastly different. We have seen so many ups and downs in AI that it’s hard to tell whether what we have right now is actually the AI we imagined. As John McCarthy said, “As soon as it works, no one calls it AI anymore”. Once something starts to work, we start thinking about something that’s not working yet. While this dissatisfaction is good at an industry level, as it pushes the boundaries, it also promotes an absolutist way of thinking. As an example, if you are reading this article, it means you are interested in B2B sales. It’s also likely that when you read the title you thought this article would be about a dystopian future where machines do everything related to sales. Or you thought this was another clickbait title with no real implications for your job. The article, and the answer for that matter, is actually in the middle. Machines are not taking over the sales job, and AI is not empty talk with no practical implications. AI in B2C An average American household does most of its shopping on Amazon, and what you buy next is decided by a bunch of algorithms at the backend. No one from Amazon gets on a call with you to help you pick towels for your home. It’s the AI. This is unheard of from a human-history perspective. Imagine Amazon as a giant bazaar and individual merchants as stall holders. Your success as a merchant in this bazaar depends on computer algorithms, not on physical location. There is an entire industry that works on making your product more visible on Amazon, and it’s predicated on understanding how those computer algorithms work. Photo by Markus Spiske on Unsplash AI in B2B B2B is different because what you are buying 1) costs a lot of money and 2) has an impact on your company and not just you. This makes B2B sales a lot trickier for computer algorithms. Computer algorithms work well when they have historical data on your likes and dislikes. Because B2B buying decisions are comparatively rare, algorithms struggle to predict your next purchase. And even if they become good at it, as a buyer you will tend to trust them less: not because of anything wrong with the prediction itself, but because of the nature of the decision itself. Big money and high stakes make it a hard decision to take without actual human assistance. But just because you have to be there to make a sale happen does not mean you have to do it in isolation. To begin with, it’s highly unlikely that any of your prospects are not on the Internet or social media, so you can use data on platforms like LinkedIn and Facebook to your advantage; even finding new prospects can be leveraged this way. Second, AI gets better as it sees more sample data, so record everything. Better yet, let AI handle the entire process of inserting data into your CRM. The best use case for any AI system you build is a task that a human can do in less than 5 seconds. Computers will not only do it better, they will do it a whole lot faster. Photo by Helloquence on Unsplash Also, when it comes to data, there is no such thing as bad data. 
Even the phone calls that went unanswered and the leads that resulted in nothing can be useful, so put them all in. The best way we have found to handle this rather mundane and hectic task is to let software record and track everything you do. Machine learning works on correlation, so even a recorded email that the intended prospect ignored can serve a real purpose for an algorithm. For example, the algorithm can use that data to predict the chances of the same prospect buying a similar or related product. You might have written the prospect off in view of your previous conversation, but AI might be able to change your mind, or it can validate your assumption by providing more insight into why it won’t work. Similarly, AI can help you identify bottlenecks in your sales process. By looking at historical data, AI can identify areas for improvement and whether your sales reps need additional training. We might be years away from Artificial General Intelligence (AGI), but Artificial Narrow Intelligence (ANI) is already here. The former’s promise is to replace humans with machines — if possible. The latter, however, is here to help you do your job better. And B2B sales is no different.
How AI can augment B2B Sales
1
how-ai-can-augment-b2b-sales-1d109abb4818
2018-08-29
2018-08-29 21:36:04
https://medium.com/s/story/how-ai-can-augment-b2b-sales-1d109abb4818
false
845
Coglide is an AI-driven Customer Engagement Platform that helps marketers discover, engage, and connect with their ideal customers.
null
null
null
Coglide
null
coglide
null
coglide
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Barg Upender
null
28b59393f382
coglide
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-17
2018-01-17 08:24:35
2018-01-17
2018-01-17 08:34:56
1
false
en
2018-01-17
2018-01-17 08:37:56
7
1d11ad851454
2.230189
3
0
0
Data science has recently created a huge impact in almost all industries. As a result, big data analytics has become a top priority…
5
Why You Should Choose Data Science as your Career? Data Science Course in Bangalore Data science has recently created a huge impact in almost all industries. As a result, big data analytics has become a top priority in all organizations. Here are the top six reasons why you should consider a data science course as your career. Read the Full Article Here.. 1. Demand And Requirement For Data Scientists In the next few years, the data analytics market is expected to grow to at least one-third of the global IT market, from the current one-tenth. All organizations, whether large or small, are clamoring to find employees who can understand and synthesize data, then communicate these findings in a way that proves beneficial to the company and helps management make decisions. 2. Career Growth, Job Prospects And Salaries There is a shortage of data scientists at all levels, from beginners and freshers to managers. Since the IT industry is on the verge of change, many middle-level managers and professionals across domains are finding their career growth stagnant. Data science is the best option for overcoming career stagnation. Annual pay hikes for analytics professionals in India are on average 50% higher than for other IT professionals. Salary trends for data science professionals across the globe indicate positive and exponential growth. 3. Work Options When you become a data scientist, you can work practically anywhere you wish, in any domain, in any part of the world. Apart from the technology industry, which of course employs the most data scientists, data science professionals can work in industries and domains ranging from healthcare/pharma to marketing/sales, and from financial services to consulting firms, retail, and CPG. Data scientists can also work for governments and NGOs. Check out These Videos on How to Become a Data Scientist Data Science Career Options And Job Prospects Data Science Course in Bangalore-learnbay.in 4. Experience Factor Data science is such a relatively new field that organizations are not able to find experienced profiles, which is a great opportunity for IT professionals from different domains and streams to up-skill and learn data science. According to an industry report, 40% of data scientists have less than 5 years of experience, and 69% have less than 10 years of experience. 5. Lack of Competition And Ease Of Job Hunting Not only is there a shortage of data scientists, but there is also a lack of competition, since data science is a relatively new field. An entry-level data scientist and an expert will be separated by only a few years of experience. Herein lies the great opportunity for career growth: data scientists are in high demand, there is a shortage of skilled professionals in the market, and so it is relatively easy to find a job in the data science domain. 6. Variety of Training and Skill Upgrade Options Available There are many training options available for data science, in different modes such as online, classroom, self-paced video-based training, and MOOCs. If you are looking for a full-fledged data science course in Bangalore, you can go for a data science post-graduate or master's program. Looking for the data science Course Curriculum: Download Data Science Course pdf Read here on How to become a data scientist?
Why You Should Choose Data Science as your Career?
45
why-you-should-choose-data-science-as-your-career-1d11ad851454
2018-05-03
2018-05-03 06:51:21
https://medium.com/s/story/why-you-should-choose-data-science-as-your-career-1d11ad851454
false
538
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Learnbay
Learnbay.in is the Best Training Institute in Bangalore
df228312c9ee
learnbay
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-07
2017-11-07 14:24:59
2017-11-08
2017-11-08 07:01:01
2
false
en
2017-11-08
2017-11-08 07:01:01
8
1d155caecbd0
4.760692
0
0
0
With the surge in voice-controlled devices, conversational interfaces for virtual assistants will go mainstream in 2017. Ask these 5…
4
Designing for virtual assistants: 5 decisions to make in 2017 With the surge in voice-controlled devices, conversational interfaces for virtual assistants will go mainstream in 2017. Ask these 5 questions before you start. 2017 looks set to be the Year of Voice, and we couldn’t be more excited. With 4 million Amazon Echo speakers sold over the 2016 holiday season, and Google reporting a fourfold increase in Google Home users, millions of homes now have a voice interface to the internet. In the smartphone world, Siri, Alexa, Cortana and Google Voice are enjoying a surge in popularity, with 65% of smartphone users saying they use a voice interface to carry out tasks. Source: VoiceLabs Then there’s Ford, which has just announced plans to embed an Alexa interface into some models of vehicle, starting later this year. And who could resist Mayfield Robotics’ super-cute, voice-controlled “home robot” Kuri, who launched at CES this month? So if there’s one big prediction to make about 2017, it’s that people will finally get over their squeamishness about talking to machines. By the end of the year, consumers ought to be as comfortable asking Alexa for their bank balance as they are with accessing their bank account online. 5 essential questions to ask before getting started with designing for virtual assistants Voice-based interfaces are taking off so fast that no brand will want to ignore them. But before you get started, there are five critical questions you’ll need to answer. Most brands are well aware of the surging popularity of virtual assistants, and many are already experimenting with their APIs. Already, developers have created 7,000 third-party “skills” for Alexa, for example, ranging from EDF’s bill payment service to SkyScanner’s flight search engine. Those brands that haven’t already dipped a toe in the water will probably want to start experimenting in 2017, either in excitement at the opportunity to engage with customers via a new and evidently very popular channel, or in fear of potentially missing out. The key thing will be to identify the right context for any new voice-driven service. That means answering questions like: who will be the users of the service, why is this the best interface for it, which task(s) will it enable, and which platforms and devices will be most appropriate. UX research will be key to getting the information to drive the right decisions. (Read more about why UX research is so important.) Companies have done a heap of work on honing their brand personality and ensuring a consistent experience between different channels. But most of that work has been applied to visual and text-based interfaces. Comparatively little effort has been spent on defining what the brand sounds like when it speaks to customers. Yet there’s a huge amount of evidence that people automatically attribute characteristics to a speaking voice, even if it’s not a human one. Put simply, if your brand speaks in a human-sounding voice, your customers will start to assume things about your brand based on that voice persona. Things like the persona’s apparent gender, its accent, the words it uses and the rhythm of its speech will all leave a lasting impression — which may be positive or negative. So in 2017, brand guardians must start to think deeply about the brand’s voice persona, and what style of speech best embodies the brand values expressed elsewhere in its visual and text-based interfaces. 
Consistency between written, visual and spoken elements of the brand will be essential to maintaining authenticity and generating customer trust. From our conversations with Fortune 800 brands, combined with what we’re seeing on forums, it looks as though brands are setting up dedicated teams to focus on new types of conversational interface. Brands should be careful, however, not to repeat mistakes made at the dawn of web, social and mobile, when dedicated teams working in silos ended up designing experiences that were disconnected — both technologically and in look and feel — from other channels. We know customers want a seamless experience across channels, so these teams should be fully integrated with wider CX activities. Another important consideration will be the UX skills required by the team. While a lot of UX theory applies across all interfaces, designing for a voice interface requires additional specialist skills that are very different from designing for visual interfaces. If you don’t have these skills in-house, you may well need external help with designing a smart, connected and conversational experience for your virtual assistant channel. We’d love to help. As more brands develop services for Alexa and other virtual assistants, it will prompt a critical question: Is Alexa the voice of our brand? And if not, what exactly is her relationship to our brand? It’s critical because the research we conduct for VoxGen clients continually shows that people expect brands to feel consistent and behave in a consistent way. Alexa users expect Alexa to interact with them in a certain way, but they may expect something different of an insurance company or pharmacy. If Alexa also becomes the voice of an insurance company (or many insurance companies), our research suggests it could sow confusion and mistrust. Importantly, CX teams may also feel a loss of control over the brand experience if Alexa (an external platform) becomes the “voice” of the brand. For this reason, CX professionals will need to think seriously about what role the virtual assistant plays in the customer experience, and at what point — and how — it should “hand over” to a voice experience that’s created and managed by the brand. This will be a critical design decision, and one we’ll come back to in a future post. With all the current hype around virtual assistants, you could be forgiven for thinking that IVR may be on its way out. But it’s important to remember that not everyone wants to engage with a brand through a voice assistant — at least, not yet. Many customers still pick up the phone: in 2016, for example, the UK’s Institute of Customer Service found that 43% of consumers use the phone to make inquiries. While virtual assistants will quickly gain ground, IVR will remain an essential customer engagement channel for some time. For brands, that will mean creating a coherent voice UX strategy to ensure that IVR and virtual assistants all deliver a consistent and high-quality customer experience. That will mean understanding and applying principles of great conversational design across all channels where audiences interact with the brand using voice. For many brands, overhauling the existing IVR could be a great starting point. Voice-based interactions will take off in 2017, and the key to success will be designing interfaces and interactions that feel natural and give people the help they need. If you’re looking for an expert partner to help you apply the principles of good conversational design, let’s talk. 
Originally published at www.voxgen.com.
Designing for virtual assistants: 5 decisions to make in 2017
0
designing-for-virtual-assistants-5-decisions-to-make-in-2017-1d155caecbd0
2018-06-06
2018-06-06 16:27:08
https://medium.com/s/story/designing-for-virtual-assistants-5-decisions-to-make-in-2017-1d155caecbd0
false
1,160
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
VoxGen
At VoxGen, we help you create IVR experiences that support, surprise and satisfy your customers, making them enthusiastic advocates of your brand.
1e06303451fe
voxgen
7
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-31
2018-08-31 14:42:42
2018-09-04
2018-09-04 20:02:13
3
true
en
2018-09-04
2018-09-04 20:02:13
1
1d17b65c639f
1.689623
0
0
0
For this week’s Makeover Monday, we were given a spreadsheet containing data as to where NIKE factories are located globally. We were also…
5
Where are NIKE Shoes Made? Just do it. — Nike For this week’s Makeover Monday, we were given a spreadsheet containing data on where NIKE factories are located globally. We were also given the original visualization, which can be viewed here: Original Visualization from Nike Objectives What works? The interactive geographical tool is very cool. You can click on a country and the map zooms in to show a more detailed view of that country, with the exact locations of the factories. What doesn’t work with this chart? What if I wanted to see where most factories are located at a worldwide glance? This interactive map cannot provide that information immediately. It would also be more useful if the numbers were larger. It almost seems as if this tool was made for internal review. My Data Viz View the interactive visualization here Results As you can see, I created my visualization in a heavily infographic-inspired style. I provided brief high-level summaries at the top, nice and big. Next, for the bottom portion of the infographic, I added a vertical bar chart of the top 10 countries with the most Nike factories. Vertical bar charts are very easy to read and comprehend compared to other types of visuals I could have chosen. Because I liked the idea of a visual map from the original visualization, I added a heat map of Asia and Australia. Now, you can easily determine which countries are filled with Nike factories. Evidently, Southeast Asia contains 85% of Nike factories’ workforce. Perhaps this is one reason why Nike is under constant scrutiny for poor labor ethics: they base most of their factories in countries with weak labor laws and regulations, and are thus able to keep labor costs extremely low while marking their products up several hundred percent at retail.
Where are NIKE Shoes Made?
0
where-are-nike-shoes-made-1d17b65c639f
2018-09-04
2018-09-04 22:35:40
https://medium.com/s/story/where-are-nike-shoes-made-1d17b65c639f
false
302
null
null
null
null
null
null
null
null
null
Data Visualization
data-visualization
Data Visualization
11,755
Tiffany Duong
My name is Tiffany and I am a data analyst based in Boston. I am so very passionate about global affairs and blueberry ice coffee. (Tiffany-Duong.com)
e28e09b562b9
duongtbg
3
1
20,181,104
null
null
null
null
null
null
0
null
0
3cff50f50c18
2018-03-23
2018-03-23 00:43:37
2018-04-18
2018-04-18 13:01:04
3
false
en
2018-04-18
2018-04-18 17:30:21
7
1d18bb78f35b
2.493396
2
0
0
ObEN is excited to announce today the premiere of the first-ever Personal AI (PAI) art docent experience at the museum-retail space…
5
ObEN Debuts World’s First Personal AI Art Concierge at Shanghai K11 ObEN creates world’s first Personal AI art concierge at K11 Shanghai ObEN is excited to announce today the premiere of the first-ever Personal AI (PAI) art docent experience at the museum-retail space Shanghai K11 Art Mall. Bearing the likeness and voice of K11 Founder Adrian Cheng, the AI concierge will guide visitors through K11 Art Museum’s new duo exhibit featuring works from American ceramic artist Betty Woodman and acclaimed Chinese painter Zhao Yang. “This exhibit introduces visitors to an experience that brings together art and technology in an entirely new and unique dimension,” said Adrian Cheng, founder of K11. “Together with ObEN, we are disrupting the way people navigate the exhibit experience, connecting with them on a deeper, more intimate level and fostering a closer connection between audience, curatorial expression, and artist through technology. We look forward to expanding this immersive experience to other areas within the K11 ecosystem.” Exhibit at Shanghai K11 Art Mall The brainchild of entrepreneur and business innovator Adrian Cheng, K11 is a pioneering multi-faceted brand rooted in culture and interconnected by three core values: art, nature and people. The K11 Art Mall, featuring flagship locations in Hong Kong, Shanghai, Guangzhou and Wuhan, is the world’s first museum-retail experience. Consistent with its spirit of innovation, K11 continually seeks and attracts technology partners that help elevate the immersive experience for visitors to its Art Malls, office towers K11 Atelier, and other K11 entities. The new exhibit is a unique convergence of art and technology, cultures and mediums. From now until June 17, 2018, visitors will be individually guided through the exhibit by Cheng’s PAI, from a smart device bearing an artificial-intelligence-based avatar. The first experience of its kind, the PAI concierge will provide visitors with a customized experience, with Cheng’s PAI providing details about select works on display, artist history, and exhibit development. Earlier this year, K11 invested $10M in ObEN in recognition of the many possible applications of the company’s PAI technology in retail, hospitality, and entertainment. ObEN Cofounders with K11 Founder Adrian Cheng “This exhibit at K11 showcases the transformative impact PAI can have on industries including retail and hospitality — providing visitors with more engaging and personalized experiences” said Nikhil Jain, CEO and co-founder of ObEN. “We are thrilled to partner with K11 to create experiences that complement their unique conglomeration of art, entertainment, and retail.” ObEN CEO Nikhil Jain (left) and his multilingual PAI (right) The PAI art concierge is powered by ObEN’s Personal AI technology. On the consumer front, the technology allows users to easily create a 3D avatar that looks, sounds and behaves like them with just one selfie and a brief voice recording from their smartphone. Secured and authenticated on the Project PAI blockchain, ObEN’s PAI provides unprecedented levels of security, data control, and utility. In addition to K11, ObEN also works with leading organizations like S.M. Entertainment, with whom they have a joint venture — AI Stars, the first agency for celebrity Personal AIs. ObEN’s Personal AI technology will be available in mid-2018. Learn more at paiyo.com. Join our Community Our newsletter subscribers get exclusive access to beta applications and news updates. Subscribe here. 
Follow our journey on Twitter.
ObEN Debuts World’s First Personal AI Art Concierge at Shanghai K11
2
oben-debuts-worlds-first-personal-ai-art-concierge-at-shanghai-k11-1d18bb78f35b
2018-05-15
2018-05-15 14:00:38
https://medium.com/s/story/oben-debuts-worlds-first-personal-ai-art-concierge-at-shanghai-k11-1d18bb78f35b
false
515
Enabling every person in the world to create, own and manage their Personal AI. Tencent, Softbank Ventures Korea & HTC Vive X portfolio co.
null
obenai
null
ObEN
contact@oben.com
oben
null
obenme
Oben
oben
Oben
11
ObEN
Enabling every person in the world to create, own and manage their Personal AI. Tencent, Softbank Ventures Korea & HTC Vive X portfolio co.
921634f7855c
ObenAI
173
23
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-12
2018-06-12 11:28:35
2018-06-17
2018-06-17 11:42:57
15
true
zh-Hant
2018-09-18
2018-09-18 16:09:45
7
1d1983e40119
2.254717
27
0
0
A data-analysis look at what Taiwan's well-known English-teaching YouTubers 阿滴英文 (Ray Du English), C’s English Corner, and JR Lee Radio are sharing.
5
[網紅大數據]台灣英語教學Youtuber都在分享什麼<Part1> A data-analysis look at what Taiwan's well-known English-teaching YouTubers 阿滴英文 (Ray Du English), C’s English Corner, and JR Lee Radio are sharing. Photo by pan xiaozhen on Unsplash “Don't let language become a stumbling block to seeing the world” is how Catherine of C’s English Corner often closes her videos, and it has become our motto for everyday English learning. Charlemagne of the Holy Roman Empire also said: To have another language is to possess a second soul. In this social-media age, learning foreign languages through social media has become a popular path, and Taiwan has produced many English-teaching YouTubers who break away from classroom-style teaching and share English knowledge in lively ways that stay close to daily life; we are loyal listeners of theirs too! Videos on YouTube come in endless varieties, and as data analysts we wanted to use the analytical lens we know best to see what these YouTuber English teachers are sharing, and which types of videos people pay the most attention to. Below are the YouTuber English teachers we regularly follow, and also the data source for this analysis. Where does our data source come from? We collected all videos published up to 2018/6/12 by the three English-teaching YouTubers we follow, including each video's publish time, view count, likes, dislikes, title, and description. The three YouTubers are JR Lee Radio, C’s English Corner, and 阿滴英文. 1. JR Lee Radio Channel segments: 【Positive Tunes 好歌學英文】introduces good English songs with positive energy, no profanity, and no explicit content, and even teaches the lyrics! 【Positive Thoughts 正能量好想法】brings you one positive new idea every week… www.youtube.com 2. C's English Corner 英文角落 “I grew up in Taiwan and have self-studied English for more than twenty years. I understand that learning English is not something that only happens in class. After nearly ten years of teaching English, I understand that students generally do not know how to move from passive learning to active thinking. After a year of running this channel…” www.youtube.com 3. 阿滴英文 “Hi, I'm Ray Du (阿滴), a creator and educator. On this channel I share all kinds of fun and effective ways to learn English!” www.youtube.com Overall comparison of the three YouTubers' videos Measured by average views per video, 阿滴英文 reaches as high as 360k, with the largest number of videos and subscribers. Among English-education YouTubers, 阿滴英文 has been around the longest and holds an established place in Taiwan. C’s English Corner and JR Lee are rising stars of the past year or so, accumulating up to 220k subscribers in that time, and they take completely different teaching directions: JR Lee focuses on “sharing positive energy,” delivering positivity through English teaching and guiding viewers to face life's ups and downs with a different attitude, while Catherine of C’s English Corner works on “igniting the language-learning soul,” helping people overcome their barriers to learning English through lively and practical methods. 阿滴英文's hottest videos Since 阿滴英文 entered the YouTuber world, which month's videos have been the most popular? The highest average view count belongs to videos published in March 2015, which average as many as 900k views. Why were that month's videos so popular? They helped viewers “complete an English self-introduction” and “memorize vocabulary efficiently.” As the chart shows, the need to memorize vocabulary is widespread in Taiwan, and people want highly efficient memorization methods. In addition, many Taiwanese high-school students about to enter university need English self-introductions for admission interviews, so demand is extremely high. Where does the TOP 2 month fall? The second-highest average belongs to videos published in January 2017, averaging as many as 610k views. Why? They “combine English with fun topics and collaborations with other influencers.” Beyond practical exam-oriented material, viewers pay special attention to videos where 阿滴 collaborates with other influencers; the list shows 阿滴 is good at collaborating in all sorts of fun ways, even mixing in games and songs, which is very appealing. JR Lee Radio's hottest videos Since JR entered the YouTuber world, which month's videos have been the most popular? The highest average view count belongs to videos published in March 2017, averaging as many as 360k views. Why? He “shares his own English-learning journey,” focusing on methods for becoming fluent. Viewers care deeply about “English learning methods,” which echoes 阿滴英文's most-viewed videos above. The first thing people look for from any YouTuber is a correct, effective way to learn, hoping to break through the old school bottleneck of rote memorization, where you can read an article but cannot speak. Where does the TOP 2 month fall? The second-highest average also belongs to videos published in March 2017, averaging as many as 130k views. Why? “Learning English through songs” is relaxing and makes the English more memorable, and “how to overcome life's setbacks and make life better” are life questions viewers take seriously. The “learning English through good songs” theme also draws strong attention, especially the lesson built on the classic “Beauty And The Beast.” Besides song-based learning, viewers especially care about life topics such as overcoming setbacks and building good habits, which really are the questions people care about these days. C’s English Corner's hottest videos Since Catherine entered the YouTuber world, which months' videos have been the most popular? The highest averages belong to videos published in April 2017 and January 2017, averaging as many as 710k and 460k views respectively. Why were these two months so popular? Viewers hope to “speak smooth English while traveling,” especially at the “airport” and while “shopping.” C’s English Corner focuses on practical English teaching of all kinds; Catherine further divides her videos into travel English, practical English skills, speaking practice, grammar practice, pronunciation practice, everyday and leisure English, and so on, so viewers can target exactly what they want to strengthen. The chart above shows that people most look forward to using English flexibly while traveling, so two of the most-watched videos target “travel,” with [Airport] Eight practical sentences for getting through customs drawing the most attention. Next, viewers also care about improving their practical English skills and pay particular attention to methods for self-studying English in Taiwan. From the statistics below, we can also see that “travel English” is indeed the video type with the highest average views, followed by practical English skills and speaking practice. This broadly reflects that people feel they have the most room for improvement in real-world English use and speaking, so they actively watch these types of videos to level up. That concludes our view-count analysis of the three English-teaching YouTubers we love. Next, in [網紅大數據]台灣英語教學Youtuber都在分享什麼<Part2>, we will analyze likes together with view counts; you can click the link below to continue reading Part 2. [網紅大數據]台灣英語教學Youtuber都在分享什麼<Part2> A data-analysis look at what Taiwan's well-known English-teaching YouTubers 阿滴英文, C’s English Corner, and JR Lee Radio are sharing. medium.com Thank you for reading this article. “We look forward to combining our data-analysis work with the fun things of the moment to bring everyone something fresh,” and there will be more experiments to come. If you have any questions, leave a comment below or write to us: roboii0612@gmail.com
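The post does not show how the channel statistics were collected; one plausible way, sketched under the assumption that the YouTube Data API v3 was used (the API key and channel ID below are placeholders), is:

from googleapiclient.discovery import build

API_KEY = 'YOUR_API_KEY'        # placeholder
CHANNEL_ID = 'CHANNEL_ID_HERE'  # placeholder, one per channel analyzed

youtube = build('youtube', 'v3', developerKey=API_KEY)

# 1) Find the channel's "uploads" playlist.
ch = youtube.channels().list(part='contentDetails', id=CHANNEL_ID).execute()
uploads = ch['items'][0]['contentDetails']['relatedPlaylists']['uploads']

# 2) Page through the playlist to collect every video ID.
video_ids, token = [], None
while True:
    page = youtube.playlistItems().list(
        part='contentDetails', playlistId=uploads,
        maxResults=50, pageToken=token).execute()
    video_ids += [it['contentDetails']['videoId'] for it in page['items']]
    token = page.get('nextPageToken')
    if not token:
        break

# 3) Fetch publish time, title, views, likes, and dislikes in batches of 50.
rows = []
for i in range(0, len(video_ids), 50):
    stats = youtube.videos().list(
        part='snippet,statistics',
        id=','.join(video_ids[i:i + 50])).execute()
    for v in stats['items']:
        rows.append({
            'publishedAt': v['snippet']['publishedAt'],
            'title': v['snippet']['title'],
            'views': int(v['statistics'].get('viewCount', 0)),
            'likes': int(v['statistics'].get('likeCount', 0)),
            'dislikes': int(v['statistics'].get('dislikeCount', 0)),
        })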
[網紅大數據]台灣英語教學Youtuber都在分享什麼<Part1>
356
english-teaching-youtube-data-analysis-part1-1d1983e40119
2018-09-18
2018-09-18 16:09:45
https://medium.com/s/story/english-teaching-youtube-data-analysis-part1-1d1983e40119
false
200
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
資料分析大小事
『為流行而分析流行』。我們是美商資料分析公司的小小資料科學家,日常喜歡關心流行議題。
c865f2f92953
roboii0612
66
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-17
2018-05-17 14:51:42
2018-05-17
2018-05-17 17:34:49
6
false
en
2018-05-17
2018-05-17 19:25:45
1
1d1984bdf023
9.678302
2
0
0
This article was co-authored by Adit Gupta and Eric Zhang. They are engineers and Senior Microsoft Student Partners at Drexel University…
5
Microsoft Build 2018: The Student Experience This article was co-authored by Adit Gupta and Eric Zhang. They are engineers and Senior Microsoft Student Partners at Drexel University and the University of Pittsburgh respectively. Here, we will discuss our experience at the 2018 Microsoft Build Conference as a whole, and share our key takeaways and thoughts on the future of core technology fields including ML/AI, IoT, and more. What is Microsoft Build Build is Microsoft’s annual developer conference, hosted this year in Seattle, WA from May 7–9, 2018. This year, we were invited to attend and help represent the US among approximately 50 other Microsoft Student Partners (MSPs), from a pool of over 2,500 MSPs around the world. Build is an opportunity for developers to learn about new technologies and provide constructive feedback on existing ones, and a platform for professionals to mingle ideas and interact with each other. We had the opportunity to speak with developers about some of the most used Microsoft products, such as Azure, Office, and Machine Learning Solutions, and how they could be enhanced to ease the development experience. Hence, while the goal of Build is to showcase some of Microsoft’s latest and greatest technologies, we encountered many user experience researchers at the event, gathering feedback and working hard to improve existing products. Event Format For the ultimate pre-planned experience, we were given a schedule for the entire week, well before the start of the conference. Every detail and workshop was planned brilliantly, and we were given the opportunity to choose from some 500-odd workshops, sessions, and talks based on our interests and curiosities. Below, we break down our experiences and encounters, from the inaugural kick-off dinner to the Build Closing Ceremony at the Chihuly Gardens. The Greetings As we gathered in a well-decorated ballroom of the Sheraton Hotel, we were greeted with custom Build apparel, VIP passes for the week, maps, and more items to get us ready for the Build conference. The room was wonderfully diverse, with students from 30+ countries across 5 continents, from universities all around the world. From Australia to Canada, and Thailand to Korea — each student flown into Seattle with the opportunity to learn from some of the brightest and most engaging minds in technology. This entire experience, so delicately planned by Teresa Greiner, Susan Ibach, Justin Garrett, Tracey Salem and their team, was finally in action as we heard the team speak about their backgrounds and how we, the students, could make the most of this event. From workshop scheduling tips to finding the best nap times — the team had covered it all. From left to right: Justin Garrett, Susan Ibach, Teresa Greiner, and Tracy Salem The Keynotes: Imagine a huge convention center, almost the size of a football field, with technologists from all over the world: about 15,000 attendees all together. The keynotes were perhaps the most exciting part of Build, where Satya Nadella (CEO), Scott Guthrie (Executive VP of Cloud and Enterprise Growth) and Joe Belfiore (Corporate Vice President) enthusiastically released some of the latest products to the public. While there were many technology demos and exhibits, here are some of our top picks of the new items and most important conversations, chronologically, over the two days of keynotes at Build: Day 1 Keynote Announcements: Ethical AI: An emphasized concentration on privacy-preserving AI products. 
Microsoft plans on creating a special board to discuss the ethical dilemmas and issues arising from the collection and manipulation of big data. Azure IoT Edge: Deliver cloud intelligence locally by deploying and running AI services right on your gadgets. This means virtually no latency from information traveling to servers around the world; your IoT device will be able to perform computations locally. Azure Conversational AI: There were many new updates to the conversational AI products — BotBuilder v4, BotFramework v4, and the all new “BotBuilder Tools”, which help enable an end-to-end conversational app-centric development workflow. Cosmos DB: While Cosmos DB was first introduced in 2014, this year Microsoft announced some very ambitious new capabilities, such as unlimited elastic write scalability and guaranteed lower write latency from anywhere in the world. All this means developers will be able to build products that write data to the database much faster than ever before. Project Brainwave: This was perhaps the most exciting new feature from the first day of keynotes. Project Brainwave takes a hardware-centric approach to solving AI calculation latency issues: it is a hardware architecture designed to accelerate real-time AI calculations. You will be able to make use of artificial intelligence on the fly (literally and figuratively), with all features available in the cloud and on the edge. Live Share in Visual Studio: Perhaps the feature to induce the most productivity amongst developers — Live Share. Microsoft announced a very practical add-on to Visual Studio that lets developers share their code and collaborate on projects. In short — imagine Google Docs, but in your favorite IDE. Day 2 Keynote Announcements: Timeline: Who doesn’t like technology with good flow? Imagine being able to deep link all of your applications to create a seamless and uninterrupted experience across all your devices. This feature can be very useful, as deep linking and machine learning coupled together in the operating system can make you a lot more productive on your work day! Sets: Sets are a Windows 10 feature that will allow you to organize all of your applications on one page. Let’s say you need to do a certain task that requires a few specific websites, Word, Excel and maybe another app. With Sets, you will be able to bundle that workflow into a “set” and restart the workflow with just a click. While this feature is not available just yet, Microsoft will release it “when they think it’s great”. Phone Notifications: This is a feature that puts Apple’s iMessage directly in competition. I [Adit] personally still use my Mac because of the flawless integration with and accessibility to my iPhone. Windows 10 will be rolling out a new application to connect any cellular device to your PC, letting you view and interact with its notifications. .NET Core 3.0: The favorite desktop application framework just got an update! .NET Core 3 will mainly focus on Windows desktop applications, specifically Windows Forms, Windows Presentation Foundation (WPF), and UWP XAML. Adaptive Cards: Now your user interface can get much smarter! Imagine being able to interact with social media updates, write GitHub comments, and much more through applications like email! Adaptive Cards lets developers think outside the box about how to create a seamless coupling between applications for a much faster workflow. 
Eric’s Hardware Driven Perspective “As a systems engineer in the Internet of Things (IoT) space, I need to keep abreast of the latest developments in tech if I want my work to be impactful, relevant and cutting edge. At Build last week, I was intrigued to learn that Microsoft was both exploring the possibility of integrating machine learning and artificial intelligence with IoT, and paving the way for future developers. The idea of having a fleet of devices in the field harvesting data for the cloud, to provide processed and actionable data for customers, is something that is becoming increasingly common in today’s age of smart devices and a connected world; however, solutions to manage this data struggle to keep pace.” “The world is becoming a computer. Computing is becoming embedded in every person, place, and thing.” ~Satya Nadella, Keynote Day 1 With unthinkably vast amounts of data at our fingertips, we’re led to the question of “What do we do with this?”. Data science, through machine learning and AI, is one good answer. At Build 2018, Microsoft announced a partnership with Qualcomm to create a vision AI developer kit, which will allow real-time AI on local devices without constant cloud connectivity or expensive machinery. In a similar vein of thought, Microsoft also debuted Project Brainwave, enabling real-time AI in the cloud or on the edge through Intel FPGAs (field programmable gate arrays). This is exciting news, because it expands the range of projects and applications that can now be built with both high performance and low cost. Adit’s Software Driven Perspective Microsoft’s mission is to “Empower every person and every organization on the planet to achieve more”. In 2017, Microsoft pledged a little more than $50 Million towards developing AI for Earth — a program to help solve some of the most pressing issues that our planet faces. This year, Microsoft pledged $25 Million towards “AI for Accessibility” — a five year program aimed at helping more than a billion people around the world with disabilities. Opportunity & Responsibility — Satya Nadella on Ethical AI at Microsoft Build Many tech leaders, such as Elon Musk and Mark Zuckerberg, often quarrel about the ethical concerns behind AI. Some have even gone as far as to say that “AI might wipe out humanity”. This year at Build, I was pleased to see that Microsoft has started an initiative to bring some of the brightest minds, from AI Research and Philosophy to Art and Public Policy, together to create an “AI Ethics Board of Directors”. This is a very necessary arrangement that I believe governments around the world should emulate — to start thinking about the laws and policies behind artificial intelligence. What if I told you that 90% of the world’s data was created in the last 2 years? An autonomous car alone produces upwards of 5,000 gigabytes every day. With so much data, things start to slow down exponentially. Something I found very intriguing is that today we are able to use servers around the entire planet, working together, to process this data quicker than ever! Breakout Sessions & Expo Attendees had the opportunity to roam through the Expo hall and demo new technologies by Microsoft teams and its affiliated partner companies, or attend talks, labs, and smaller break-out sessions on specific topics led by domain experts. 
Microsoft Build 2018: Expo hall during the afternoon of Day 1 During this time, we were able to test proof-of-concept demos, from a Mixed Reality patient ultrasound simulation with HoloLens to an IoT drone challenge with DJI. As we made our way through the booths, we stopped by both the Azure User Experience and Power BI stands to have 1-on-1 conversations with Microsoft UX researchers and give feedback and suggestions for improvements and future features. Multiple talks on a variety of topics, ranging from .NET to Kubernetes to IoT and quantum computing, ran simultaneously as well. Since these talks are recorded and posted on Microsoft’s Channel 9, we prioritized meeting and interacting with other developers, designers and engineers at the conference. That said, the talks were also an opportunity to learn face to face and ask direct questions of some of the core engineers working on Microsoft tech. Networking Opportunities As MSPs, we had the rare opportunity to network with some amazing people, from company veterans like Larry Osterman (Principal Software Engineering Lead) to Clint Rutkas (Sr. Technical Product Manager), who helped create 700 lb Kinect-controlled boxing robots for a conference demo. It was a fantastic experience speaking to this select set of people, some of whom had been at Microsoft from its humble beginnings and who together had over 100 cumulative years of experience. Jennifer Marsman, Principal Software Engineer Clint Rutkas, Sr. Technical Product Manager Anthony Chu, Cloud Developer Advocate Richard Campbell, Entrepreneur, Architect, Podcaster Raymond Chen, Programmer & 24-year veteran, Windows Team Larry Osterman, 32-year veteran, Principal Software Engineering Lead Clint Rutkas recounting past successes and failures to MSPs from Australia, Brazil and the US Final Thoughts and Lessons Learned In summary, Build was a once-in-a-lifetime opportunity to learn, grow, and develop as both young engineers and as people living and working in a global environment. We have compiled some of the lessons learned from the numerous conversations we had, and hopefully some of these will help you too! Don’t be afraid to say “no” to something or someone you don’t agree with. We all have unique minds and perspectives, and generally the epitome of productivity occurs when everyone is on the same page, sailing along. Always “trust your gut”. Whether it’s making claims or giving demos — be prepared, be covered — and do what your gut tells you to. Go for your passions, not the money. While money is awesome, after a while, once you have enough, money alone won’t make you want to wake up bright and early with a smile on your face to get to work — whatever that may be. Act on your ideas, and treat them as calculated risks. It’s better to beg for forgiveness than to ask for permission. Learn from failure, and iterate quickly to correct it. Feedback is only a valuable tool if acted upon. If you are still reading this — thank you so much for reading! It was an immense pleasure to experience Microsoft Build 2018 and to share our highlights with you. A huge shout-out to the entire Microsoft Student Partner team across the world for their amazing work and dedication towards empowering young developers and everyone around them to achieve more! In the coming weeks, we will continue these blogs with more specialized content to help you learn about the most interesting and advanced technologies in the world.
Stay tuned for special interview posts — with some of the managers and team leads at Microsoft who have agreed to share what they are working on and how they hope to change the world, one product at a time. Click here to learn more about the Microsoft Student Partner program, and how you can be a part of it! “Be passionate and bold. Always keep learning. You stop doing useful things if you don’t learn.” ~ Satya Nadella
Microsoft Build 2018: The Student Experience
4
microsoft-build-2018-the-student-experience-1d1984bdf023
2018-05-17
2018-05-17 19:49:42
https://medium.com/s/story/microsoft-build-2018-the-student-experience-1d1984bdf023
false
2,313
null
null
null
null
null
null
null
null
null
Microsoft
microsoft
Microsoft
19,490
Adit Gupta
Senior Microsoft Student Partner (MSP), Researcher, CTO (MyVyB.io) https://www.linkedin.com/in/guptaadit https://aditgupta.azurewebsites.net
fa1f85e00212
guptaadit25
51
35
20,181,104
null
null
null
null
null
null
0
null
0
ae27dbcad639
2018-09-20
2018-09-20 16:31:19
2018-09-20
2018-09-20 16:34:18
2
false
en
2018-09-20
2018-09-20 16:34:18
1
1d1a335b53e5
1.088994
0
1
0
Download the full report here
4
New Media Trial Puts Machine Learning to the Test Download the full report here The process by which marketers identify and reach the ideal demographic for their product has remained fairly static over time. Historically, primary research is conducted, followed by a media buy deployed against that demographic. However, the latest advancements in technology, specifically machine learning, are opening up possibilities for optimization beyond what humans can achieve on their own. In our latest media trial, we put machine learning to the test to uncover whether human-driven media buys would benefit from this additional optimization. Key findings include: Machines working with humans beat humans alone in virtually every category measured, including brand interest (machines +8.6%, humans +1.9%) and purchase consideration (machines +5.7%, humans +0.8%). Machine learning was also more efficient in doing so, with only 3.08 ad exposures per consumer vs. 4.13 for humans. Humans couldn’t simply mimic the machine’s learning. Even when controlling for key audiences (e.g. those in-market for the product), the machine performed better, seemingly by optimizing towards those most receptive to the ad. Download the full report here.
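A quick back-of-the-envelope reading of the efficiency claim, using only the figures reported above:

```python
# Brand-interest lift per ad exposure, machines vs. humans,
# computed from the trial's reported numbers.
machine_lift, human_lift = 8.6, 1.9   # brand interest lift, percentage points
machine_exp, human_exp = 3.08, 4.13   # average ad exposures per consumer

print(machine_lift / machine_exp)  # ~2.79 points of lift per exposure
print(human_lift / human_exp)      # ~0.46 points of lift per exposure
```

On these numbers, the machine-assisted buy produced roughly six times more brand-interest lift per exposure than the human-only buy.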
New Media Trial Puts Machine Learning to the Test
0
new-media-trial-puts-machine-learning-to-the-test-1d1a335b53e5
2018-09-20
2018-09-20 16:34:19
https://medium.com/s/story/new-media-trial-puts-machine-learning-to-the-test-1d1a335b53e5
false
187
The media futures agency of IPG Mediabrands
null
IPGMediaLab
null
IPG Media Lab
richard@ipglab.com
ipg-media-lab
TECHNOLOGY,TECHNOLOGY STRATEGY,DISRUPTION,ADVERTISING,MARKETING
ipglab
Machine Learning
machine-learning
Machine Learning
51,320
IPG Media Lab
Keeping brands ahead of the digital curve. An @IPGMediabrands company.
95932246fcba
IPGLAB.com
1,710
616
20,181,104
null
null
null
null
null
null
0
null
0
ce53b6db279f
2017-10-25
2017-10-25 07:21:13
2017-10-24
2017-10-24 00:00:00
0
false
en
2017-10-25
2017-10-25 07:23:38
4
1d1b80eec2c
1
3
0
0
Since I heard it for the first time, I was struggling to understand what “Functions As A Service” like AWS Lambda really is. I heard people…
5
I Finally Understood Functions As A Service Since I heard it for the first time, I was struggling to understand what “Functions as a Service” like AWS Lambda really is. I heard people explaining it on podcasts and read what it said on the AWS Lambda landing page, but it just didn’t click. Last week Henning and I recorded the latest episode of our podcast REACTIVE. On that episode Henning talks about how he uses AWS Lambda and an AWS database to build an API for their app at work. This made me finally understand what this is all about. They built the API by writing some code that parses request parameters, retrieves some data from the database and then sends that data back as JSON in the JSON API format. That code is the function that is being provided “as a service”. That is it. The HTTP layer, security and scalability are all provided by AWS services. Functions as a Service also means that you only pay for computing time when the function is used. When there are no requests to the API, you don’t pay. This is an incredibly fast and efficient way to build an API that is production-ready in no time. On the podcast we also talked about how more and more of these “solved problems”, like security and scalability, will be packaged up into services, and how their usage will certainly be very widespread in the not-so-distant future. @codepo8 said it best on Twitter yesterday: Originally published at www.kahlillechelt.com on October 24, 2017.
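As an illustration of the pattern described above (parse request parameters, fetch from a database, return JSON), here is a minimal sketch of an AWS Lambda handler in Python behind API Gateway. The table name and field names are hypothetical, made up for this example:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("episodes")  # hypothetical DynamoDB table

def handler(event, context):
    # Parse a query-string parameter passed through by API Gateway.
    params = event.get("queryStringParameters") or {}
    item = table.get_item(Key={"id": params.get("id")}).get("Item", {})
    # Return the data as JSON; HTTP, security and scaling are AWS's problem.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```

The function itself is the only code you own; everything around it (routing, TLS, scaling to zero) belongs to the platform, which is also why you pay nothing when no requests arrive.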
I Finally Understood Functions As A Service
7
i-finally-understood-functions-as-a-service-1d1b80eec2c
2018-06-12
2018-06-12 15:13:52
https://medium.com/s/story/i-finally-understood-functions-as-a-service-1d1b80eec2c
false
265
Kahlil Lechelt’s Blog
null
kahlil.lechelt
null
Kahlil Lechelt’s Blog
hello@kahlil.info
kahlillechelt
JAVASCRIPT
kahliltweets
Reactive Podcast
reactive-podcast
Reactive Podcast
13
Kahlil Lechelt
JavaScript developer. http://kahlillechelt.com
bc86b0d1db2
kahlil
859
812
20,181,104
null
null
null
null
null
null
0
null
0
276cba94e3e1
2018-09-27
2018-09-27 02:45:16
2018-09-27
2018-09-27 02:49:05
3
false
en
2018-09-27
2018-09-27 02:49:05
9
1d1c43644288
3.100943
0
0
0
By Charlotte Kng
5
Ingenious Blockchain-based Data Protocol, Symphony, Concludes Europe Tour with Hallmark Connections By Charlotte Kng 21 August 2018, Singapore — Ingenious blockchain-based data protocol Symphony wrapped up the final leg of its Europe tour in the heart of Berlin yesterday evening. The token project has been making waves across Europe over the past week as it moved through London, Paris, Zurich and Berlin, leaving the crowd intrigued with a rare glimpse into the world of ICOs through the eyes of an insider. The tour was a celebrated collaboration between tokenized platform Himalaya Capital Exchange and game-changing data protocol ecosystem Symphony, with a vision to empower businesses around the world with smarter data intelligence. Here’s a quick roundup of the key takeaways of the tour: First stop: London. Creation of new asset classes with blockchain. Symphony’s stop in London was a cozy one hosted at Rise London on Luke Street, with a colorful profile of institutional funds, private investors, project owners and media. London stood out as a district with a very high “Blockchain IQ”; many were well informed about the inner workings of protocols like Symphony, allowing for fruitful conversations and rewarding experiences. Key topics that stirred the night included thresholds on decentralization as well as how to balance the tradeoffs between scalability, security and decentralization, with a good handful showing interest in Symphony’s marketing plan and user acquisition strategy with the initial resources. Second stop: Paris. What makes an ICO worth investing in, and an outlook on the ICO market and future directions. Paris included two meetups: a casual meetup at the Hippopotamus pub at 6 Avenue Franklin Delano Roosevelt, and a formal presentation the following day at the Hotel Sofitel Arc de Triomphe. The Paris meetups were teeming with dialogue about the evolving state of the ICO market and the corresponding evolution of marketing strategies. Many also radiated interest in the data-driven protocol Symphony is based on, and offered to contribute both connections to resources and help managing Symphony’s community in France. Overall, the affair gained an encouraging amount of traction — a promising sign for a project still in its infancy. Third stop: Zurich. East to West: observations on the crypto market. Zurich was a gift in itself; it held one of the most complete crypto ecosystems to date: project owners, lawyers, investors, crypto-specific auxiliary service providers and academic researchers. With an intellectual crowd came a myriad of substantive technical parleys, setting a friendly stage for Symphony to share the cutting-edge technical builds and mechanisms engineered to safeguard the biggest worry in data: privacy. Last stop: Berlin. The view of an ICO insider, and what makes an ICO worth investing in. Symphony’s final stop in Berlin saw vivacity that was far from lacking. Attendees were intrigued by the opportunity to sneak a peek through the eyes of an insider in the ICO space and were pleasantly surprised by the solutions Symphony has to offer as a protocol engineered to safeguard data privacy. The overall reception was visibly positive, closing the tour on a promising note. “Broadly speaking, we look at events as a chance to build support for the project from investors, partners and community members, and to collect feedback to fine-tune our ideas.
With respect to Europe, Feida and I have had a productive week on our European roadshow, covering 5 events in 4 cities: London, Paris, Zurich & Berlin. We have come away with a strong appreciation for the strength and diversity of the blockchain community within Europe. For Symphony, we got a lot of useful feedback that we can incorporate into the project. We also came away with some meaningful contacts, and look forward to developing those further in the future.” - Eleanor Jones, Co-Founder of Symphony Symphony would like to thank all who attended the events and showed interest in our project. To find out more about Symphony Protocol, visit the following social media links: Website: https://symphonyprotocol.com/ Medium: https://medium.com/symphonyprotocol Twitter: https://twitter.com/SymphProtocol Reddit: https://www.reddit.com/r/SymphonyProtocol/ YouTube: https://www.youtube.com/channel/UCWBUmCG3MJ9iAaBTVKD5tTA/featured Telegram: https://t.me/symphonyprotocol Weibo: https://www.weibo.com/p/1006066584695966?is_hot=1 GitHub: https://github.com/symphonyprotocol/
Ingenious Blockchain-based Data Protocol, Symphony, Concludes Europe Tour with Hallmark Connections
0
ingenious-blockchain-based-data-protocol-symphony-concludes-europe-tour-with-hallmark-connections-1d1c43644288
2018-09-27
2018-09-27 02:49:05
https://medium.com/s/story/ingenious-blockchain-based-data-protocol-symphony-concludes-europe-tour-with-hallmark-connections-1d1c43644288
false
676
A Next-Generation, Blockchain-based Protocol to Empower A Data-Driven Economy For more information, please visit our official website: symphonyprotocol.com
null
null
null
SymphonyProtocol
contact@symphonyprotocol.com
symphonyprotocol
null
SymphProtocol
Blockchain
blockchain
Blockchain
265,164
Symphony Protocol
null
bfec5852caa8
contact_44902
1
1
20,181,104
null
null
null
null
null
null
0
null
0
813ae44f2c57
2018-08-22
2018-08-22 09:38:36
2018-08-22
2018-08-22 10:10:33
5
false
en
2018-08-22
2018-08-22 10:19:39
0
1d1d85f3e4f3
3.667296
1
0
0
It’s that time of the year again when the new version of Android starts to roll out. While Android updates have been rather slow (you may…
5
Android 9 Pie: Check Out What’s New And Exciting! It’s that time of the year again when the new version of Android starts to roll out. While Android updates have been rather slow (you may say lethargic) to roll out to the majority of devices, there is hope this time round that the updates will roll out a bit quicker. That’s thanks to Google’s Project Treble, which is aimed squarely at speeding up system updates. If you have a phone that launched with Android Oreo, you should be getting the update to Android 9 Pie pretty soon. Keeping that in mind, here are the top features to look out for. 1. Navigation Gestures Google has taken a leaf out of Apple’s book by incorporating navigation gestures into the UI. This is the biggest change to the OS and a big move away from the three-button navigation system Android users have been so familiar with. In our experience this makes getting around much faster and more intuitive. 2. Recent Apps The view for recent apps is now much more functional. Apps now show their full windows; text can be selected, copied, and pasted from one app to another; and there’s the handy Google search bar at the bottom. You can now do much more than just switch between apps. 3. AI Suggestions Google has generally been ahead of the industry in incorporating AI into its products. Its AI push takes another step forward with Android 9 Pie. Call someone often at a particular time of day? Google will pop that contact to the top of your dialer at that time. Plug in your headphones? Google will ask if you want to continue listening from where you last left off. Soon, you may see suggestions from other apps as well. This use of AI makes your life simpler in small ways, but all of it adds up to an OS that is more intelligent and constantly evolving. 4. Digital Wellbeing While this is not out yet, it will be rolling out soon. If you are concerned about how much time you spend looking at your phone screen all day, this nifty feature can help you set time limits for apps, show you how much time you spend in each app, and help you do more with your life beyond your screen. It remains to be seen how many of us will successfully curb our addiction to the screen, but we’re eager to try it and find out. Maybe it’ll mean we can finally catch up with friends in person. 5. Battery Life Improvements As we saw earlier, AI has a big role to play in proactively suggesting what to do next on your phone. Here, Google is using AI behind the scenes to assess which apps draw the most power, correlate this with your app usage patterns, and then proactively cut apps off when it predicts you won’t really be using them. The result is that background tasks consume far less battery, saving you from battery-life anxiety at the end of your day. 6. Screenshot Improvements If you’re the kind of person who likes to take screenshots, crop them, annotate them, and then send them out, your juggling act across multiple apps just got a bit less tedious. Once you take a screenshot in Android 9 Pie, you will be able to crop and edit it right then and there. Supported Devices While the track record of device manufacturers isn’t great, you can expect quicker update roll-outs this time round. In fact, the update is already live on Pixel and Essential smartphones and will be rolling out to devices from Sony Mobile, Xiaomi, HMD Global, Oppo, Vivo and OnePlus.
These manufacturers were part of a beta program that gave them early access to pre-release versions of the OS, so they could start their implementation work early. This is a step in the right direction, with Google giving importance to faster roll-outs of new OS versions, especially since Android Oreo currently runs on barely 12% of all Android devices. Conclusion Android 9 Pie looks like it will be the most advanced and intelligent version of the world’s most popular OS yet. It is a forward-looking vision of what personal computing could be. Expect the themes touched upon in this version to become foundational for the next few iterations. Just remember, always practice safe updating: back up first, update later.
Android 9 Pie: Check Out What’s New And Exciting!
1
android-9-pie-check-out-whats-new-and-exciting-1d1d85f3e4f3
2018-08-22
2018-08-22 10:19:39
https://medium.com/s/story/android-9-pie-check-out-whats-new-and-exciting-1d1d85f3e4f3
false
751
The intersection where technology meets consumers
null
null
null
DeCodeIN
null
decodein
TECHNOLOGY,TELECOM,INDIAN STARTUP ECOSYSTEM,ARTIFICIAL INTELLIGENCE,INTERNET OF THINGS
IndiaDecode
Android
android
Android
56,800
DeCode Staff
null
cd21a288af07
decodejournal
29
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-04
2018-02-04 15:01:01
2018-02-05
2018-02-05 06:23:38
2
false
en
2018-02-05
2018-02-05 06:23:38
1
1d1e2afefad3
3.477673
25
1
0
We’re excited to announce our newest team member, Lorna Aine, who will lead Pollicy’s data programs. Much of Pollicy’s work is based on…
5
Lorna Maria Aine We have a new Data Lead: Meet Lorna! We’re excited to announce our newest team member, Lorna Aine, who will lead Pollicy’s data programs. Much of Pollicy’s work is based on building the community of data enthusiasts here in Uganda as well as improving the data skills of government and civil society organizations. We are all very excited to have Lorna on our team and would love for you to get to know her as well! Who is Lorna? Lorna Maria Aine is a 22-year-old passion-driven techie, data geek and a major contributor to building developer communities across Africa. She has a Bachelor’s degree in Computer Engineering from Busitema University. “I am committed to contributing data science content for the next generation.” -Lorna What attracted you to this position? It was the fact that I finally felt that it was one place I would learn and work at the same time, since it is really hard to have that balance, yet our work lives are our learning lives, too. When I read through the job opportunity, the job description clearly let me exercise my creativity and community building. It was not something that I had to carry alone; the job is really engaging! What are you looking forward to now that you have joined Pollicy? There are very many things I am looking forward to, but I shall mention a few. The data curriculum development is something I am really passionate about, and I just feel that it’s something that I really want to fulfill now that I have a platform. Also, the community engagement! The Data Club, the data group for ladies and open data discussions; it’s something that I love doing and I am so grateful that I am finally doing it as part of my everyday work. How do you intend to impact the Data Trek in Uganda now that you are Data Lead at Pollicy? Well, I’d say first of all I intend to do a lot of documentation on what existing datasets, tools and projects are out there. I feel like things written down can actually come to life, rather than things talked about and pushed under the carpet. I intend to do a lot of meet-ups and open conversations about the inclusion of data-driven decision making. People who know me will attest to the fact that I am all about meet-ups! Meet new people, talk and collaborate. If we can achieve this, then we are putting our hands together to build a tall tower that can last for a long time. Sometimes, I feel like we don’t need to dismiss what exists but simply have to understand it better and shape it. What’s your take on the civic technology society in Uganda? I totally believe that we have a future. Where there is a will, there’s a way. I must admit it’s not an easy one, but we have to put in lots of work, we have to put in our all, we have to raise much more awareness, but yes, we totally have a future. Who is your role model? *Sighs! Laughs for a few seconds* There are very many, and there are really a couple of people doing amazing things, but of the many I would still pick Sheryl Sandberg, the COO of Facebook. She is totally amazing. I have read her books, I follow her, I have literally dug into her life. I just love the way she balances life and work. Her way of life is something I definitely look up to. My mother too! She’s tough, iron hard and yet a sweet, loving person at the end of the day. I wish I was all that in one. Before Pollicy, what were you up to? I was a system developer with Planet Systems, where we worked closely on developing an e-procurement system with the Uganda Revenue Authority.
My work was not really data-related, but in one or two scenarios I would have to do tasks that were data-related, because there always came a time when they needed a data service, and I always volunteered, though I was initially hired to do system development. What are some of the challenges that you have experienced working in data science? It’s something that is so broad and has no definite definition. Even when you go to Wikipedia and find a definition, with time you realize that data science is many things, and you are looking at learning all those things in a short period of time, and it doesn’t work that way. My biggest challenge was starting and finally giving myself the confidence to say, “Okay, I have learnt this and I can now go on and focus on this”. Lorna on Medium To know more about Lorna, follow her on Twitter @lornamariak or follow her Medium posts for all things R. Lorna Maria A
We have a new Data Lead: Meet Lorna!
221
we-have-a-new-data-lead-meet-lorna-1d1e2afefad3
2018-06-01
2018-06-01 07:36:54
https://medium.com/s/story/we-have-a-new-data-lead-meet-lorna-1d1e2afefad3
false
820
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Pollicy
Re-designing Government for Citizens
13d8e0dcaaa5
pollicy
293
124
20,181,104
null
null
null
null
null
null
0
null
0
e690cc199aa4
2017-09-14
2017-09-14 17:52:25
2017-09-14
2017-09-14 18:10:42
2
false
en
2017-09-22
2017-09-22 17:19:42
3
1d1e37a05c80
2.760692
4
0
0
What will happen when all data becomes intelligent and computers have cognition?
4
Gearing Up for the ‘Intelligent Enterprise’ What will happen when all data becomes intelligent and computers have cognition? By Paula Klein The intelligent enterprise may finally be in sight. Fueled by a confluence of machine learning, big data analytics, and APIs, advances previously available only to consumers are now making their way into business environments. This “new era of intelligent computing”, and how enterprises can benefit from it, was the subject of Mark Gorenberg’s recent MIT IDE seminar. By combining huge volumes of data with learning algorithms, he said, “the software used in every existing enterprise line of business is being disrupted, and new industries that were previously data-starved are now open to optimization.” Mark Gorenberg Gorenberg, who has 26 years of venture capital experience and has funded and served on the boards of numerous successful start-ups, is now betting that the timing and technologies are ripe for AI to move into enterprise environments. He is currently a founder and Managing Director of Zetta Venture Partners, an early-stage fund focused on the intelligent enterprise. Zetta, founded last year, has invested in 21 companies so far, including Kaggle, which was acquired by Google. It currently has $185 million under management. The company believes that in the current, fourth era of computation, software can “learn” from huge volumes, or zettabytes, of data. In other words, all data will become intelligent and computers will have cognition. The year 2005 was a “tipping point” in terms of recommendation engines, which allowed Amazon to take advantage of network effects to move beyond consumers and into the enterprise market. While cloud services marked the most recent generation of computing, “now, it’s about the data,” he said. Businesses must have an AI playbook, and hire data scientists and machine learning experts to get in the game. “For enterprises to compete, they will need to re-architect to include new cloud platforms, micro and data services, collaborative hubs, real-time business optimization dashboards, and new intelligent applications,” he said. And the best new applications, primarily being developed by startups, he said, will include a ‘virtuous loop’ — software that continuously transforms anonymous customer data and public data “into machine learning algorithms to generate both cleaner data and insights.” Gorenberg sees many new opportunities arising for vendors and enterprises alike. Enterprises will have to open their software services to allow for non-proprietary apps and new development tools. The reward? “Everyone in the organization is a data analyst; even the CEO can check KPIs in real time on their phone.” For investors, he said, “the playbook for startups is changing. It’s not just about apps anymore.” Startups are developing products that engage data collection and crowdsourcing of public and private data, applicable in all industry sectors. For example, Marketing Evolution recommends to clients the specific mix of media spending that works best, based on AI analysis of real-time customer patterns and behaviors. Another startup augments insurance claims assessors with deep learning on images of auto parts damaged in accidents. Zetta’s presentation points are detailed in a series of posts at https://medium.com/@Zetta.
Highlighting a recent thesis being codified by Zetta associate Ivy Nguyen, Gorenberg summed up some key insights for both enterprises and entrepreneurs seeking gains from machine learning and data analytics, as follows: KEY INSIGHTS: 1. Define the minimal algorithm performance that makes your product viable. 2. Race to get critical mass. Make sure you have the right data and analyze it quickly. 3. Choose the right business entry point. 4. Monitor for diminishing returns and keep updates flowing. Kill the old models of software development bottlenecks. 5. Maintain data rights to secure leadership. Data rights are the new intellectual property; contracts will be written around data ownership moving forward. Watch a video of the MIT IDE presentation here.
Gearing Up for the ‘Intelligent Enterprise’
13
boning-up-on-the-intelligent-enterprise-1d1e37a05c80
2017-12-29
2017-12-29 01:40:16
https://medium.com/s/story/boning-up-on-the-intelligent-enterprise-1d1e37a05c80
false
630
The IDE explores how people and businesses work, interact, and prosper in an era of profound digital transformation. We are leading the discussion on the digital economy.
null
null
null
MIT Initiative on the Digital Economy
ide_social@mit.edu
mit-initiative-on-the-digital-economy
MIT,INNOVATION,DIGITAL,INCLUSIVE INNOVATION,AI
MIT_IDE
Machine Learning
machine-learning
Machine Learning
51,320
MIT IDE
Addressing one of the most critical issues of our time: the impact of digital technology on businesses, the economy, and society.
dd9c51c40b05
mit_ide
2,792
28
20,181,104
null
null
null
null
null
null
0
The two gradient steps are made simultaneously: 1. Update θ(D) to reduce J(D). 2. Update θ(G) to reduce J(G). Pass a: Train the discriminator and freeze the generator (no backpropagation for the generator). Pass b: Train the generator and freeze the discriminator (no backpropagation for the discriminator). Repeat passes a and b for the number of iterations defined.
6
null
2018-03-19
2018-03-19 01:38:28
2018-03-31
2018-03-31 19:37:32
19
true
en
2018-04-09
2018-04-09 02:46:42
16
1d1f099dc4a7
6.983019
12
0
0
“(GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” Yann Lecun
5
Quick Introduction to GANs “(GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” Yann LeCun GANs have been getting a lot of attention since their introduction by Ian Goodfellow; the results are impressive and promising. From generating images to increasing resolution to image-to-image translation, GANs have not failed to surprise everyone. So why are GANs so helpful? Often the probability distribution of the data is very complicated and difficult to infer, but GANs can learn to generate samples from that nasty distribution without us ever having to model it explicitly. Isn’t that nice! Don’t know much about GANs? I know, that’s why you are here. In this series we will go through the basic concepts of GANs, how they work, their drawbacks, and also implement some. Toy-GAN DCGANs Generative Adversarial Networks GANs are modeled after a minimax game from game theory, played out through an adversarial process. The usual structure consists of two nets, where the first net (the Generator) generates data and the second net (the Discriminator) tries to tell the difference between the real data and the fake data generated by the first net. The generator creates samples that are intended to come from the same distribution as the training data, in order to fool the discriminator. At equilibrium, the Generator will model the real data while the Discriminator will output a probability of 0.5, as it can no longer discriminate between real and generated data. How Do GANs Work? The trick is that the neural network we use as a generative model has a number of parameters significantly smaller than the amount of data we train it on. This forces the generator to discover and efficiently internalize the essence of the data in order to generate it. The two players are represented by two functions, each of which is differentiable both with respect to its input and with respect to its parameters. The Discriminator is a function D that takes input x and uses θd as parameters, while the Generator is a function G that takes z as input and uses θg as parameters. Each tries to minimize its cost while controlling only its own parameters. GANs are framed as a game rather than an optimization problem: each player (net) needs to reduce a cost that depends on the other player’s parameters, yet it cannot control those parameters. The solution to this problem is a Nash equilibrium, which here is a tuple (θd, θg) that is a local minimum of J(D) with respect to θd and a local minimum of J(G) with respect to θg. This is the only way both players (nets) will be happy. The Generator is a simple differentiable function G, where G(z) yields a sample x drawn from the model distribution when z is sampled from some prior distribution. Think of the true data distribution p(x): in the image below, the green region shows real data and the black dots indicate the data points. The generator’s output distribution, shown in red, is defined by taking points from a Gaussian distribution and mapping them through the generator. Tweaking the network parameters θg tweaks the generated distribution. The goal is to find the parameters of the network that produce a distribution closely matching the true data distribution: the generated distribution (red) starts out random, and the training process iteratively changes the generator parameters to better match it to the real distribution (green).
(How well the distributions match will depend on the discriminator’s capability here.) Cost Function Several different costs can be used in the GAN framework. Discriminator’s cost The cost function is the standard cross-entropy cost that is minimized when training a standard binary classifier with a sigmoid output. However, the classifier is trained on two minibatches instead of one: one coming from the dataset and the other from the generator. GANs make an approximation based on using supervised learning to estimate a ratio of two densities. Estimating this ratio enables us to compute a wide variety of divergences and their gradients. The nets are adversarial but also cooperative, as the discriminator estimates the ratio of densities and freely shares this information with the generator. The discriminator is more like a teacher instructing the generator on how to improve. Minimax The simple version of this game is the zero-sum / minimax game (the sum of all players’ costs is always zero). Here, as the generator’s cost is tied to the discriminator’s loss, we can summarize the entire game with a value function specifying the discriminator’s payoff. It is called minimax because its solution involves minimization in an outer loop and maximization in an inner loop. The learning in this game resembles minimizing the Jensen-Shannon divergence between the data and the model distribution, and the game converges to its equilibrium if both players’ policies can be updated directly in function space. In the value function V(G, D), the discriminator tries to maximize the first term, driving its prediction D(x) towards 1 for data from the real distribution, and the second term, driving its prediction D(G(z)) towards 0 for generated samples. So the discriminator tries to maximize V(G, D). The generator’s task is exactly the opposite: it tries to minimize V(G, D) so that the difference between real and generated data is minimal. (A worked form of this objective, with a minimal training-loop sketch, appears at the end of this post.) Training GANs Made it so far? Yaayyy, now let’s see how we can train a GAN. For the training, samples x are randomly drawn from the training set and used as input for the discriminator, whose goal is to output the probability that its input is real rather than fake, making D(x) near 1 in this first scenario. In the second scenario, the discriminator receives fake samples from the generator; here both players participate, with the discriminator striving to make D(G(z)) approach 0 while the generator strives to make the same quantity approach 1. On each step, two mini-batches are sampled: x values from the dataset and z values drawn from the model’s prior over latent variables. Visualization of the flow for training If you want to see the code, please check out the next post. Challenges with GANs Although GANs are powerful, there are still many challenges in training them. You can learn more about these problems and their solutions in Improved Techniques for Training GANs. Mode collapse: the generator learns one sample that fools the discriminator and produces several copies of exactly the same data. Bad initialization: the networks take successive steps to minimize a non-convex objective and end up in an oscillating process rather than decreasing the underlying true objective.
Problem with counting: sometimes GANs fail to get right the number of particular objects that should occur at a location (the number of eyes in a head). Problem with perspective: GANs are sometimes not capable of differentiating between different views (e.g. front and back view). Problem with global structure: GANs don’t understand holistic structure. Applications Represent and manipulate high-dimensional probability distributions Reinforcement Learning Semi-Supervised Learning Inverse Reinforcement Learning Multi-modal output optimization Generative image manipulation Super-resolution from a single image Image-to-image translation (sketches to images, aerial photos to maps) Still not satisfied? Here are some more awesome resources: Really awesome GANs GAN Zoo GANs awesome applications GAN Implementations Conclusion GANs are generative models that use supervised learning to approximate an intractable cost function. They can simulate many cost functions, including the one used for maximum likelihood. The GAN framework pits two adversaries against each other in a game, each controlled by a set of parameters; typically these functions are implemented as neural networks. The goal of the discriminator is to output the probability that its input is real or fake, while the generator tries to produce samples that match the true data distribution. References Ian Goodfellow NIPS 2016 GANs Tutorial (slides) OpenAI Generative Modeling Stanford Generative Modelling GANs, Some Open Questions (Sanjeev Arora) AV introduction to GANs
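To make the minimax objective referenced above concrete, the standard value function from Goodfellow et al. (2014) can be written as:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

And here is a minimal, illustrative PyTorch sketch of the alternating training passes on a toy 1-D Gaussian. The network sizes, learning rates and target distribution are arbitrary assumptions for the example; the generator update uses the common non-saturating heuristic (maximize log D(G(z)) instead of minimizing log(1 - D(G(z)))), which also appears in the original paper.

```python
# Minimal GAN training loop (sketch): learn to sample from N(4, 1.25).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(5000):
    real = 4 + 1.25 * torch.randn(64, 1)   # mini-batch x from the "true" distribution
    fake = G(torch.randn(64, 8))           # mini-batch G(z), z from the prior

    # Pass a: update the discriminator; detach() freezes the generator.
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Pass b: update the generator; only opt_g steps, so D stays fixed.
    loss_g = bce(D(G(torch.randn(64, 8))), ones)  # non-saturating generator loss
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

At convergence the discriminator should hover near D(x) = 0.5, matching the equilibrium described earlier in the post.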
Quick Introduction to GANs
43
quick-introduction-to-gans-1d1f099dc4a7
2018-05-21
2018-05-21 16:39:13
https://medium.com/s/story/quick-introduction-to-gans-1d1f099dc4a7
false
1,400
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Sanket Gujar
Computer Science Graduate Student at WPI, Former Perception Intern at Luminar tech, PA. sanketgujar.github.io
ae0b7e498efe
sanketgujar95
28
22
20,181,104
null
null
null
null
null
null
0
null
0
73c37236e0e3
2018-05-17
2018-05-17 10:07:39
2018-05-17
2018-05-17 10:13:56
1
false
en
2018-05-17
2018-05-17 10:13:56
6
1d1f22fdf337
0.833962
4
0
0
We told you it was for today, DAN token is now tradable on the Hong Kong based exchange FUBT.top. They are organizing a contest to win DAN…
5
DAN is now tradable on FUBT.top We told you it was for today: the DAN token is now tradable on the Hong Kong-based exchange FUBT.top. They are organizing a contest to win DAN tokens for the top traders; see the rules below: 1. Registration reward: users who register and authenticate during the activity period get 5 DAN (60,000 DAN available, first come, first served). 2. Recharge: DAN users will be rewarded according to the top 80 volume rankings. 1–20: 20,000 DAN 20–50: 18,000 DAN 50–80: 15,000 DAN The multiplier award is issued on the basis of each trader’s percentage of the turnover volume. 3. Candy rain: users with net purchases of at least 3,000 DAN during the event can divide up 50,000 DAN of candy (the quantity is limited, first come, first served). We have also negotiated free transactions during the contest if you trade DAN. Spread the word and stay tuned: Twitter: https://twitter.com/daneelproject Telegram: t.me/DaneelCommunity Facebook: https://www.facebook.com/daneelproject LinkedIn: www.linkedin.com/company/11348931/ Reddit: https://www.reddit.com/r/Daneel_Project/
DAN is now tradable on FUBT.top
151
dan-is-now-tradable-on-fubt-top-1d1f22fdf337
2018-08-02
2018-08-02 07:22:35
https://medium.com/s/story/dan-is-now-tradable-on-fubt-top-1d1f22fdf337
false
168
In this publication you will find all the official announcements and communication related to the Daneel Company.
null
daneelproject
null
Daneel Corporate
information@daneel.io
daneel-corporate
ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,MACHINE LEARNING,BIG DATA,TRADING
daneelproject
Cryptocurrency
cryptocurrency
Cryptocurrency
159,278
Daneel Assistant
Your future personal crypto assistant ! https://daneel.io
dc883054551c
daneel_project
463
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-07
2018-08-07 12:35:37
2018-08-07
2018-08-07 12:37:40
3
false
en
2018-08-07
2018-08-07 12:40:23
3
1d209b6c69b2
11.414151
1
0
0
1. Objective
2
Introduction to Data Mining and Machine Learning Techniques 1. Objective In this blog, we will study what Data Mining is. We will also cover its scope, foundations, techniques and terminology, learn the data mining architecture with a diagram, study knowledge discovery, and look at applications along with the pros and cons. 2. Introduction to Data Mining Data Mining is a set of methods applied to large and complex databases to eliminate randomness and discover hidden patterns. As these methods are almost always computationally intensive, we use data mining tools, methodologies, and theories for revealing patterns in data. There are many driving forces at work, and this is why data mining has become such an important area of study. 3. Data Mining History In the 1960s, statisticians used the terms “Data Fishing” or “Data Dredging” to refer to what they considered the bad practice of analyzing data. The term “Data Mining” appeared around 1990 in the database community. 4. Foundation of Data Mining Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, and it now allows users to navigate through their data in real time. Data mining is used in the business community because it is supported by three technologies that are now mature: Massive data collection Powerful multiprocessor computers Data mining algorithms 5. Types of data gathered a. Business transactions In business, every transaction is “memorized” for perpetuity. Many transactions deal with time, and many are inter-business deals such as purchases, exchanges, banking, stock trades, etc. b. Scientific data Our society is amassing colossal amounts of scientific data that need to be analyzed. Unfortunately, we capture and store new data faster than we can analyze the data already accumulated. c. Medical and personal data From government to customer records, large amounts of information are gathered about individuals and groups. When correlated with other data, this information can shed light on customer behavior. d. Surveillance video and pictures With the collapse of video camera prices, cameras are becoming ubiquitous. Videotapes from surveillance used to be recycled, but the trend now is to store them, and even digitize them, for future use and analysis. e. Games A huge amount of data and statistics is collected about games, players, and athletes; commentators and journalists use this information for reporting. f. Digital media Cheap scanners, desktop video cameras, and digital cameras are among the causes of the explosion in digital media repositories. Associations such as the NHL and the NBA have already started converting their huge game collections into digital form. g. CAD and software engineering data There are multiple CAD systems for architects to design buildings, and these systems generate a huge amount of data. Software engineering is a source of considerable similar data, with code and objects that need powerful tools for management and maintenance. h.
Virtual Worlds Nowadays many applications use three-dimensional virtual spaces. These spaces and the objects they contain are described with special languages such as VRML. Ideally, virtual spaces should be defined so that objects and places can be shared, and there is already a remarkable number of virtual reality objects available. i. Text reports and memos (e-mail messages) In many companies, communication is based on reports and memos in textual form, exchanged by e-mail and stored in digital form for future use and reference, creating formidable digital libraries. 6. Uses of Data Mining a. Automated prediction of trends and behaviors Data mining automates the process of finding predictive information in large databases. Questions that once required extensive hands-on analysis can now be answered directly from the data. Targeted marketing is a typical example of predictive marketing: data mining can use past promotional mailings to identify the targets most likely to maximize return on investment in future mailings. Other predictive problems include forecasting bankruptcy and other forms of default, and identifying segments of a population likely to respond similarly to given events. b. Automated discovery of previously unknown patterns Data mining tools sweep through databases and identify previously hidden patterns in one step. A good example of pattern discovery is the analysis of retail sales data to identify seemingly unrelated products that are often purchased together. Other pattern discovery problems include detecting fraudulent credit card transactions and identifying anomalous data that could represent data entry keying errors. 7. Data Mining Techniques a. Artificial neural networks Non-linear predictive models that learn through training and resemble biological neural networks in structure. b. Decision trees Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi-Square Automatic Interaction Detection (CHAID). c. Genetic algorithms Optimization techniques that use genetic combination, mutation, and natural selection, in a design based on the concepts of evolution. d. Nearest neighbor method A technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique; a short sketch appears at the end of this post. e. Rule induction The extraction of useful if-then rules from data based on statistical significance. 8. Data Mining Terminologies a. Notation Input X: X is often multidimensional. Each dimension of X is denoted by Xj and is referred to as a feature variable or independent variable. Output Y: called the response or dependent variable. A response is available only when learning is supervised. b. Nature of Data Sets a. Quantitative: measurements or counts, recorded as numerical values, e.g. height, temperature, number of red M&M’s in a bag b. Qualitative: groups or categories c. Ordinal: possesses a natural ordering, e.g. shirt sizes (S, M, L, XL) d. Nominal: just names of categories, e.g. marital status, gender, color of M&M’s in a bag 9. Why Data Mining Data mining has wide-ranging applications.
It is thus a young and promising field for the present generation, and it has attracted a great deal of attention in the information industry and in society, due to the wide availability of huge amounts of data and the imminent need for turning such data into useful information and knowledge. That information and knowledge is used for applications ranging from market analysis onwards; this is why data mining is also called knowledge discovery from data. 10. Data Mining Architecture To apply advanced techniques in the best way, they must be fully integrated with a data warehouse as well as flexible, interactive business analysis tools. Operating data mining tools otherwise requires extra steps for extracting and importing the data. Furthermore, when new insights require operational implementation, integration with the warehouse simplifies the application. The resulting analytic data warehouse can be applied to improve business processes, particularly in areas such as promotional campaign management. The figure below illustrates an architecture for advanced analysis in a large data warehouse. The ideal starting point is a data warehouse containing a combination of internal data tracking all customer contact, coupled with external market data about competitor activity. Background information on potential customers also provides an excellent basis for prospecting. This warehouse can be implemented in a variety of relational database systems, such as Sybase, Oracle or Redbrick, and should be optimized for flexible and fast data access. An OLAP (On-Line Analytical Processing) server enables a more sophisticated end-user business model to be applied when navigating the data warehouse. Multidimensional structures allow users to analyze the data in the way they want to view their business, such as summarizing by product line or region. Further, the Data Mining Server must be integrated with the data warehouse and the OLAP server to embed ROI-focused business analysis directly into this infrastructure. Integration with the data warehouse enables operational decisions to be implemented and tracked; as the warehouse grows with new decisions and results, the organization can mine the best practices and apply them to future decisions. Data mining results also enhance the OLAP metadata, providing a dynamic metadata layer that represents a distilled view of the data. Reporting, visualization, and other analysis tools can then be applied to plan future actions and confirm the impact of those plans. 11. Data Mining Process Data Mining, also popularly known as Knowledge Discovery in Databases (KDD), is the nontrivial extraction of implicit information from data in databases. This process comprises a few steps that lead from raw data collections to some form of new knowledge. The iterative process consists of the following steps: a. Data cleaning Also called data cleansing; in this phase, noisy and irrelevant data are removed from the collection. b. Data integration Multiple data sources are combined in one place. c. Data selection The data relevant to the analysis is decided on and retrieved from the data collection. d. Data transformation Also known as data consolidation; a phase in which the selected data is transformed into forms appropriate for the mining procedure. e. Data mining Clever techniques are applied to extract potentially useful patterns. f.
Pattern evaluation Interesting patterns representing knowledge are identified based on given measures. g. Knowledge representation The final phase, in which the discovered knowledge is represented to the user. This essential step uses visualization techniques that help users understand and interpret the data mining results. 12. Categories of Data Mining Systems There are many data mining systems available; some are specific, dedicated to a given data source. Data mining systems can be categorized according to various criteria. a. Classification according to the type of data source mined Classification according to the type of data handled, such as spatial data, multimedia data, time-series data, text data, the World Wide Web, etc. b. Classification according to the data model drawn on Classification on the basis of the data model, such as relational database, object-oriented database, data warehouse, transactional database, etc. c. Classification according to the kind of knowledge discovered Classification on the basis of the kind of knowledge discovered, such as characterization, discrimination, association, classification, clustering, etc. d. Classification according to the mining techniques used Data mining systems employ and provide different techniques; classification here is according to the data analysis approach, such as machine learning, neural networks, genetic algorithms, etc. 13. Issues in Data Mining a. Mining methodology issues These issues pertain to the data mining approaches applied and their limitations, such as the versatility of the mining approaches, which can dictate mining methodology choices. b. Performance issues Many artificial intelligence and statistical methods exist for data analysis; however, these methods were often not designed for the very large datasets data mining deals with today, where terabyte sizes are common. This raises the issues of scalability and efficiency of data mining methods when processing considerably large data. Linear algorithms are usually the norm. In the same theme, sampling can be used for mining instead of the whole dataset, though issues like completeness and choice of samples may arise. Other topics in the issue of performance are incremental updating and parallel programming. Parallelism can be used to solve the size problem if the dataset can be subdivided and the results merged later. Incremental updating is important for merging results from parallel mining, and for handling new data as it becomes available, without having to re-analyze the complete dataset. c. Data source issues There are many issues related to data sources. Some are practical, such as the diversity of data types, while others are philosophical, like the data glut problem. We certainly have an excess of data, since we already have more data than we can handle, and we are still collecting it at an even higher rate. The spread of database management systems has helped increase the gathering of information, and the advent of data mining is certainly encouraging even more data harvesting. The current practice is to collect as much data as possible now and process it, or try to process it, later. Regarding the practical issues related to data sources, there is the subject of heterogeneous databases.
Thus, we need to be able to handle diverse and complex data types, stored in a variety of repositories. It is difficult to expect a data mining system to achieve good mining results on all kinds of data and sources, as different kinds of data and sources may require distinct algorithms and methodologies. Currently, the focus is on relational databases and data warehouses; a versatile data mining tool for all sorts of data may not be realistic. Moreover, the diversity of data sources, at both the structural and semantic levels, poses important challenges not only to the database community but also to the data mining community. 14. Applications of Data Mining Weather forecasting. E-commerce. Self-driving cars. Hazards of new medicine. Space research. Fraud detection. Stock trade analysis. Business forecasting. Social networks. Customer likelihood. More applications include: A credit card company can leverage its vast warehouse of customer transaction data to identify the customers most likely to be interested in a new credit product. Using a small test mailing, the attributes of customers with an affinity for the product can be identified. Recent projects have indicated more than a 20-fold decrease in costs for targeted mailing campaigns over conventional approaches. A diversified transportation company can apply data mining to identify the best prospects for its services; applying this segmentation to a general business database, such as those provided by Dun & Bradstreet, can yield a prioritized list of prospects by region. A large consumer packaged goods company can apply data mining to improve its sales process to retailers; data from consumer panels and competitor activity can be applied to understand the reasons for brand and store switching. Through this analysis, the manufacturer can select the promotional strategies that best reach its target customer segments. 15. Areas where Data Mining has Good and Bad Effects a. Good effects Predict future trends and customer purchase habits Help with decision making Improve company revenue and lower costs Market basket analysis Fraud detection b. Bad effects User privacy/security The amount of data is overwhelming Great cost at the implementation stage Possible misuse of information Possible inaccuracy of data 16. Data Mining Advantages and Disadvantages Data mining advantages Banks and financial institutions use data mining to find probable defaulters, based on past transactions, user behavior and data patterns. It helps advertisers push the right advertisements onto the web pages an internet surfer visits, based on machine learning algorithms; this way data mining benefits both possible buyers and sellers of various products. Retail malls and grocery stores use data mining to arrange and keep the most sellable items in the most attention-grabbing positions, based on inputs obtained from data mining software; this helps increase revenue. Data mining methods are cost-effective compared to other applications. We use data mining in many areas, such as bioinformatics, medicine and genetics. Law enforcement agencies use data mining to identify criminal suspects, as mentioned above. Data mining disadvantages
Data mining disadvantages
Security: users are online at various times for various purposes, and the data collected about them matters, yet many organizations do not have security systems in place to protect it.
Some data mining analytics software is difficult to operate, so it requires users to have knowledge-based training.
Data mining techniques are not 100% accurate, which may cause serious consequences in certain conditions.
17. Conclusion
As a result, we have studied an introduction to Data Mining and all of its core concepts, covering its pros and cons and its applications. Furthermore, if you have any query, feel free to ask in the comments section.
Introduction to Data Mining and Machine Learning Techniques
1
introduction-to-data-mining-and-machine-learning-techniques-1d209b6c69b2
2018-08-07
2018-08-07 12:40:23
https://medium.com/s/story/introduction-to-data-mining-and-machine-learning-techniques-1d209b6c69b2
false
2,879
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Harshali Patel
Big Data Trainer at Dataflair Web Services Pvt. Ltd., blogger at https://data-flair.training/blogs/ and a technology freak. Knowledge sharing is my passion.
7be83b13481c
patelharshali136
23
1
20,181,104
null
null
null
null
null
null
0
null
0
eac29a2591
2018-06-28
2018-06-28 14:08:24
2018-07-02
2018-07-02 11:03:46
5
false
en
2018-07-02
2018-07-02 11:03:46
9
1d2108910bf3
3.323899
5
0
0
At Dalia, our amazing data science team came together to form a new Meetup Group to place data analytics and machine learning in context…
5
Intro to Data Science at Dalia
At Dalia, our amazing data science team came together to form a new Meetup Group to place data analytics and machine learning in context. With demonstrations of how to address challenges in short, bite-sized workshops, the meetup welcomes everyone from absolute beginners to seasoned data scientists. Our first meetup was on June 18th and we had a great time going over some introductory topics with the participants.
Topics we covered
I had the pleasure as a Senior Data Engineer to explain the basics of data science and to provide a guided walkthrough of data analysis with Python. Next, Irati R. Saez de Urabain, Senior Data Scientist, explained what Dalia's data science team does on a daily basis, provided some general information on our larger projects and showed how we put machine learning algorithms into production. We saved some time for a Q&A, snacks and general networking at the end of the session.
Walking through the data process with Kostas
We started the walkthrough by helping the group get ready:
1. The group got started by downloading Anaconda, an open source Python distribution that includes all the basic libraries needed for data processing and data analysis.
2. Next, we set up a Jupyter Notebook by running the following in our terminals:
conda create -n dalia-meetup python=3.6
source activate dalia-meetup (or: conda activate dalia-meetup)
conda install anaconda
jupyter notebook
Then we started a new Python 3 notebook in the directory.
3. Then we downloaded two datasets from Kaggle, a great resource for free datasets and data science exercises and competitions. We used the Superhero dataset and the International Football Results from 1872 to 2017.
4. We discussed how to read, clean and transform data using our downloaded datasets.
5. Finally we explored tools like groupby and shape to get a better feel for the data before moving on to graphing line plots, scatter plots and histograms (a minimal pandas sketch of steps 4 and 5 follows below).
If you want to follow along, you can complete the first three steps and then download our notebook and files from Github to see the code used in our data analysis.
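Here is a minimal sketch of what steps 4 and 5 might look like in pandas; the file name results.csv and the column names date, home_score and away_score are assumptions about the Kaggle football dataset, not code taken from the meetup notebook:

# Minimal sketch of steps 4 and 5; file and column names are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

# 4. Read, clean and transform the downloaded dataset.
df = pd.read_csv("results.csv", parse_dates=["date"])
df = df.dropna()  # drop incomplete rows
df["total_goals"] = df["home_score"] + df["away_score"]

# 5. Explore the data before plotting.
print(df.shape)  # (rows, columns)
goals_by_year = df.groupby(df["date"].dt.year)["total_goals"].mean()

# A simple line plot of average goals per match over time.
goals_by_year.plot(kind="line", title="Average goals per match by year")
plt.show()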
How we use data science at Dalia Research with Irati
Irati explained that the majority of the work of the data science team at Dalia can be split into two main groups. The first is ad hoc analyses to support the needs of other departments; these are usually business related, done in Python or R, and produced in the form of analytical reports. The second type of work the team focuses on is pure data science projects that involve implementing algorithms that support different parts of our business: sometimes this means building a prototype, and other times it means implementing algorithms that go to production. Some of the data science team's current projects include: dealing with fraud, MRP (an estimation method to better predict user responses), working on algorithms that help us determine the trustworthiness of users, working to improve the algorithm that matches users to the appropriate survey, and data accessibility and visualization for business teams. Irati continued the presentation by introducing the way the team uses machine learning algorithms to improve our survey platform, and how machine learning works in production. Check out the full presentation for all the details!
Our upcoming meetups
In our upcoming meetings we'll focus on business-related challenges that Dalia's DS team tackles every day, as well as detailed explanations of our solutions. All levels are welcome to attend, but those with a general to more developed knowledge of data science may find it easier to follow than absolute beginners.
How to join us at our next meetup
You can join our data science meetup by signing up here. Our next meetup will be on July 16th, but we'll send you a reminder a few weeks before it starts! We look forward to seeing you there :)
Intro to Data Science at Dalia
58
intro-to-data-science-at-dalia-1d2108910bf3
2018-07-03
2018-07-03 10:32:52
https://medium.com/s/story/intro-to-data-science-at-dalia-1d2108910bf3
false
660
Founded in 2013, Dalia is a Berlin-based technology startup that distributes millions of surveys in over 90 countries to provide research agencies, academia, public institutions, brands and other organizations access to high-quality market & opinion data.
null
DaliaResearch
null
@daliaresearch
contact@daliaresearch.com
daliaresearch
DATA,BRANDS,GLOBAL,MARKET RESEARCH,TECH
DaliaResearch
Data Science
data-science
Data Science
33,617
Kostas Christidis
Data Engineer / Data Scientist @ Dalia Research
5b4db3105d8a
kostas.christidis
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-24
2018-09-24 14:10:57
2018-09-24
2018-09-24 14:14:20
1
false
en
2018-09-24
2018-09-24 14:14:20
1
1d211bbd9d54
1.169811
0
0
0
This write-up focuses on understanding the current state of blockchain development — how projects have grown, where are the most compelling…
4
Blockchain in Context: Building an Ecosystem of the Future
This write-up focuses on understanding the current state of blockchain development: how projects have grown, where the most compelling use cases are, what the most pressing challenges are, and what the best way forward is.
History Lesson
The blockchain ecosystem is evolving every single day, with numerous use cases emerging across organizations in all industry verticals. The academic hypothesis that was born almost five years ago is steadily gaining momentum. The past few years have seen a range of blockchain-based concepts being put to the test. However, only a handful of these ambitious experiments will actually see the light of day. The China Academy of Information and Communications Technology (CAICT) claims that only 8% of the over 80,000 blockchain projects ever launched are still active today. Furthermore, blockchain projects average a lifespan of roughly 1.22 years. This is concerning, especially with the importance being placed on the still-nascent technology. These numbers indicate that we might be moving in the wrong direction, but is that really the case? If yes, why is this happening? Why is there a large number of failed projects despite huge potential? Is the technology itself proving to be a bottleneck, or is this a case of cultural rejection? If no, what is really the state of current blockchain affairs? Why is there so much noise around it? Is it making any meaningful progress, or is it just massively overhyped? Let's find out. Read the full report here.
Blockchain in Context: Building an Ecosystem of the Future
0
blockchain-in-context-building-an-ecosystem-of-the-future-1d211bbd9d54
2018-09-24
2018-09-24 14:14:20
https://medium.com/s/story/blockchain-in-context-building-an-ecosystem-of-the-future-1d211bbd9d54
false
257
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Phronesis Partners Gist
We partner with clients to ‘simplify growth’ by leveraging our research and intelligence capabilities. Write to us at: info@phronesis-partners.com
2e3abb7b217f
Phronesis_inc
66
134
20,181,104
null
null
null
null
null
null