| column | dtype | range / classes | nullable |
|---|---|---|---|
| audioVersionDurationSec | float64 | 0 to 3.27k | ⌀ |
| codeBlock | string | lengths 3 to 77.5k | ⌀ |
| codeBlockCount | float64 | 0 to 389 | ⌀ |
| collectionId | string | lengths 9 to 12 | ⌀ |
| createdDate | string | 741 classes | |
| createdDatetime | string | lengths 19 to 19 | ⌀ |
| firstPublishedDate | string | 610 classes | |
| firstPublishedDatetime | string | lengths 19 to 19 | ⌀ |
| imageCount | float64 | 0 to 263 | ⌀ |
| isSubscriptionLocked | bool | 2 classes | |
| language | string | 52 classes | |
| latestPublishedDate | string | 577 classes | |
| latestPublishedDatetime | string | lengths 19 to 19 | ⌀ |
| linksCount | float64 | 0 to 1.18k | ⌀ |
| postId | string | lengths 8 to 12 | ⌀ |
| readingTime | float64 | 0 to 99.6 | ⌀ |
| recommends | float64 | 0 to 42.3k | ⌀ |
| responsesCreatedCount | float64 | 0 to 3.08k | ⌀ |
| socialRecommendsCount | float64 | 0 to 3 | ⌀ |
| subTitle | string | lengths 1 to 141 | ⌀ |
| tagsCount | float64 | 1 to 6 | ⌀ |
| text | string | lengths 1 to 145k | |
| title | string | lengths 1 to 200 | ⌀ |
| totalClapCount | float64 | 0 to 292k | ⌀ |
| uniqueSlug | string | lengths 12 to 119 | ⌀ |
| updatedDate | string | 431 classes | |
| updatedDatetime | string | lengths 19 to 19 | ⌀ |
| url | string | lengths 32 to 829 | ⌀ |
| vote | bool | 2 classes | |
| wordCount | float64 | 0 to 25k | ⌀ |
| publicationdescription | string | lengths 1 to 280 | ⌀ |
| publicationdomain | string | lengths 6 to 35 | ⌀ |
| publicationfacebookPageName | string | lengths 2 to 46 | ⌀ |
| publicationfollowerCount | float64 | | |
| publicationname | string | lengths 4 to 139 | ⌀ |
| publicationpublicEmail | string | lengths 8 to 47 | ⌀ |
| publicationslug | string | lengths 3 to 50 | ⌀ |
| publicationtags | string | lengths 2 to 116 | ⌀ |
| publicationtwitterUsername | string | lengths 1 to 15 | ⌀ |
| tag_name | string | lengths 1 to 25 | ⌀ |
| slug | string | lengths 1 to 25 | ⌀ |
| name | string | lengths 1 to 25 | ⌀ |
| postCount | float64 | 0 to 332k | ⌀ |
| author | string | lengths 1 to 50 | ⌀ |
| bio | string | lengths 1 to 185 | ⌀ |
| userId | string | lengths 8 to 12 | ⌀ |
| userName | string | lengths 2 to 30 | ⌀ |
| usersFollowedByCount | float64 | 0 to 334k | ⌀ |
| usersFollowedCount | float64 | 0 to 85.9k | ⌀ |
| scrappedDate | float64 | 20.2M to 20.2M | ⌀ |
| claps | string | 163 classes | |
| reading_time | float64 | 2 to 31 | ⌀ |
| link | string | 230 classes | |
| authors | string | lengths 2 to 392 | ⌀ |
| timestamp | string | lengths 19 to 32 | ⌀ |
| tags | string | lengths 6 to 263 | ⌀ |

(⌀ marks columns that contain null values.)
0
| null | 0
|
ea8d68f07867
|
2018-08-12
|
2018-08-12 10:59:30
|
2018-08-12
|
2018-08-12 11:24:42
| 0
| false
|
en
|
2018-08-12
|
2018-08-12 11:24:42
| 0
|
1e4d2fbe583a
| 0.901887
| 1
| 0
| 0
|
Starting this week slow in terms of work was a really nice change. We started by watching movies including elements of AI in them. In these…
| 2
|
Reflection (Week — 3)
Starting this week slow in terms of work was a really nice change. We started by watching movies that include elements of AI. In these movies AI was shown from both positive and negative perspectives. Understanding how AI could potentially make or destroy the future gives real insight into how we might be focused on creating something complex and conscious, and how we might or might not be able to keep the consequences of creating something like this in mind.
After this we refreshed our concepts and skills in Python in order to keep track of what we were learning and how our progress was going.
It was good for me to have such a session since, having a bad memory, I tend to forget minute concepts from time to time, which this session really helped me avoid.
Following that, having a “meta-class” really introduced me to the concept itself and made me realize its importance. According to my understanding, meta at its crux means “studying about studying,” or something that is in a sense self-referential. To have a class like this turned out to be interesting since it was a new experience.
All in all, this week really strengthened my earlier concepts and skills, along with giving me new insight into how well, or how badly, complex AI or Conversational Agents can turn out.
|
Reflection (Week — 3)
| 1
|
reflection-week-3-1e4d2fbe583a
|
2018-08-12
|
2018-08-12 11:24:43
|
https://medium.com/s/story/reflection-week-3-1e4d2fbe583a
| false
| 239
|
A series of studios to explore, design and learn the practice of prototyping with coding
| null | null | null |
Design with code
| null |
design-with-code
|
DESIGN,INTERACTION DESIGN,PROGRAMMING,PROTOTYPING
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Arsh Kumar
| null |
67c84bf3ad16
|
arsh479
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-28
|
2018-01-28 14:30:43
|
2017-11-27
|
2017-11-27 10:42:23
| 1
| false
|
en
|
2018-01-28
|
2018-01-28 14:36:12
| 7
|
1e4e32f1ceab
| 4.486792
| 0
| 0
| 0
|
The progression seen in the fashion and beauty industry is that both consumers and retailers are looking towards AR as a way to enhance the…
| 5
|
How augmented reality is revolutionizing the way people shop
The progression seen in the fashion and beauty industry is that both consumers and retailers are looking towards AR as a way to enhance the shopping experience and drive sales
A brief look at augmented reality in beauty and fashion
Over the past few years, augmented reality (AR) has become more mainstream than ever before. From games such as Pokémon Go to Amazon’s AR view, there is a growing trend around being able to virtually ‘try before you buy’.
With recent news that Target, the American discount retailer, has rolled out a new way to shop using augmented reality technology, initially on its mobile website, being able to change between outfits and makeup products is now becoming a reality.
The development of AR in products and apps is revolutionising the way we shop by helping consumers ‘try on’ various outfits and products.
>See also: AR in the industrial environment: the benefits and challenges
In theory it will be possible to enter any shop in the world, browse and try on clothing and products to see how it looks. It will also allow the notion of online shopping to take on a whole new meaning. Even shops that are exclusively online will be able to allow customers to ‘try on’ clothes and make informed decisions.
How AR is revolutionising the way people shop
Shoppers are now able to virtually try on clothing and even make-up to ensure what they are getting is right for them, saving them the time, money and effort of returning products they don’t like, or of letting them sit idle.
Augmented reality is becoming increasingly popular and beginning to hit the mainstream. We are seeing more and more high street companies and big-name brands, such as IKEA, opting to enhance consumers’ experiences through AR — the technology is on the cusp of becoming a standard method of shopping.
Recent years have seen an explosion of YouTube and online tutorials — AR is the next logical step, showing in real-time how items will look in your home, how clothes will look on your body and how make-up will look on your face.
The advantages of AR for the consumer and retailer
1. Less waste
There are three main advantages to being able to try before you buy. The first is a reduction in waste. This goes for time, effort and resources. Being able to order an item of clothing knowing that the size you have ordered is indeed correct will lower the number of items being sent back and thus lower emissions and the amount of time wasted. This eliminates the need for replacements being ordered or people feeling the need to order duplicates.
>See also: The value of AR to the enterprise
Equally with make-up, UK consumers are throwing away tonnes of unused make-up, having ordered it online or bought it in store without trying it on, only for it inevitably not to be the colour or texture it appeared to be. By being able to virtually model the make-up through AR, people can make an informed decision.
2. Integrated ordering system
AR systems are increasingly being used as part of an integrated retail journey. With many of these AR products, such as the HiMirror Plus+ and the Converse Shoe Sampler, being linked to the internet, it is possible to preview the product through AR and then order it for delivery through the same user interface. This streamlines the process as well as making it more efficient for those who are time-poor.
Finally, the increased use of AR in fashion and beauty will allow retailers to collect greater amounts of big data about the products individuals purchase, and therefore to tailor advertising to an even greater extent than they already do.
Being able to tell which products consumers have looked at, which sizes they have tried on and for how long, collecting a large amount of data about consumer habits, will then allow businesses to better suggest other products.
3. Increase sales
The ability to try on clothes and accessories will help to better convince the consumer that what they are buying is right for them and will reduce indecision. This will invariably lead to people making more informed purchases and, ultimately, being more likely to purchase.
>See also: VR and AR device market ready to soar
Being able to offer more cautious online shoppers the opportunity to try on a myriad of clothing options whilst not having to leave the house will make it easier than ever before to convince consumers to make the purchase.
Smaller retailers will be the greatest beneficiaries of the AR revolution. Being able to drive sales through online AR will help these small retailers exhibit their clothing in a similar way to that of larger brands, thus democratising the clothing and make-up market by helping people try on a multitude of clothes and make-up products from a huge number of retailers.
4. Invites smaller retailers to go exclusively online
One of the great advantages — rather, one of the only advantages — that physical shops have over online stores is the ability for customers to try on clothes in a number of sizes to find the correct one. For many smaller retailers, the costs associated with owning a shop are very high and are likely to be their single biggest expense.
By removing this, a retailer can save a huge amount of money by having an online changing room which also acts as an online warehouse where all sizes are stored, so that whatever size a consumer would like to try on, and ultimately buy, is available. Conversely, in a physical store, not all sizes are always available.
>See also: Augmented reality: a look at its potential in financial services
The progression seen in the fashion and beauty industry is that both consumers and retailers are looking towards AR as a way to enhance the shopping experience and drive sales. The innovations that we are seeing in AR lend themselves to shopping.
There are undoubtedly more developments needed before consumers see these AR enabled devices in every shop and home.
However, with the advancements that there have been in recent years, such as the Amazon AR view, there is undoubtedly a shift towards the future of shopping. A future in which people are able to imagine the world around them and then go out and make that image a reality.
Sourced by Cin-Yee Ho, head of Marketing & PR Europe at HiMirror
Nick Ismail is a reporter for Information Age. He has a particular interest in smart technologies, AI and cyber security. Originally published at www.information-age.com on November 27, 2017.
Stewart irvine, #techtalk983, #stewartirvine, stewartirvine, @stewarteirvine
|
How augmented reality is revolutionizing the way people shop
| 0
|
how-augmented-reality-is-revolutionizing-the-way-people-shop-1e4e32f1ceab
|
2018-01-28
|
2018-01-28 14:36:13
|
https://medium.com/s/story/how-augmented-reality-is-revolutionizing-the-way-people-shop-1e4e32f1ceab
| false
| 1,136
| null | null | null | null | null | null | null | null | null |
Ecommerce
|
ecommerce
|
Ecommerce
| 46,740
|
Stewart Irvine
|
Digital Media Pioneer & Technologist
|
4fc973d4bc8c
|
stewartirvine
| 271
| 634
| 20,181,104
| null | null | null | null | null | null |
0
|
lines = sc.textFile("README.md") # Create an RDD called lines
lines.count() # Count the number of items in this RDD
### Python filtering example
lines = sc.textFile("README.md") # Create an RDD called lines
pythonLines = lines.filter(lambda line: "Python" in line)
pythonLines.first()
### Output-
#'high-level APIs in Scala, Java, Python, and R, and an optimized #engine that'
def hasPython(line):
    return 'Python' in line

pythonLines = lines.filter(hasPython)
pythonLines.first()
### Output-
#'high-level APIs in Scala, Java, Python, and R, and an optimized #engine that'
cd spark-2.3.0-bin-hadoop2.7/
bin/spark-submit /Users/bsuvro/helloWorld.py
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster("local").setAppName("My App")
sc = SparkContext(conf = conf)
sc.stop()
cd spark-2.3.0-bin-hadoop2.7/
bin/spark-submit /Users/bsuvro/WordCounter.py
# Create a DataFrame of the numbers 0-999, then keep only the even ones
myRange = spark.range(1000).toDF("number")
divisBy2 = myRange.where("number % 2 = 0")
# shows the first two records of the dataframe
myRange.show(2)
# gives total number of records in the dataframe
divisBy2.count()
# Returns all the records as a list
divisBy2.collect()
git clone https://github.com/databricks/Spark-The-Definitive-Guide
cd Spark-The-Definitive-Guide
head data/flight-data/csv/2015-summary.csv
| 17
|
7c64d76ec852
|
2018-06-21
|
2018-06-21 05:32:46
|
2018-06-21
|
2018-06-21 13:11:14
| 14
| false
|
en
|
2018-09-17
|
2018-09-17 07:56:42
| 4
|
1e4ec879b9af
| 9.151887
| 22
| 1
| 1
|
Exploring how Drivers and Executors work and building a standalone application using Structured APIs like DataFrame and RDD
| 5
|
Introduction to Core Spark Concepts
Exploring how Drivers and Executors work and building a standalone application using Structured APIs like DataFrame and RDD
My Previous Blog in this Series
Downloading Spark and getting started with Python notebooks
Table of Contents
Drivers and Executors
Spark Context and Configurations
Transformation and Actions
Building an end to end application
Introduce Directed Acyclic Graph (DAG)
Drivers and Executors
At a high level, every Spark application consists of a driver program that launches various parallel operations on a cluster. A typical driver program could be the Spark shell itself, where you can just type in the operations you want to run.
The driver program accesses Spark through a SparkContext object, which represents a connection to a computing cluster. In the shell, a SparkContext is automatically created for you as the variable called ‘sc’.
Once you have a SparkContext, you can use it to build RDDs.
This RDD represents the lines of text in a file, and subsequently we can run more operations on these lines, like:
To run these operations, driver programs typically manage a number of nodes called executors. For example, if we were running the count() operation on a cluster, different machines might count lines in different ranges of the file. The below schematic diagram shows how Spark executes on a cluster.
Finally, a lot of Spark’s API revolves around passing functions to its operators to run them on the cluster. For example, we could extend our README example by filtering the lines in the file that contain a word, such as Python
Now if you are not familiar with the “lambda” function in this example, it is shorthand to define functions inline in Python. So, here “line” variable will contain each line in the text file one at a time. Alternatively, you could have used the following which would give the same output-
The interesting part of this example is that function-based operations like “filter” also parallelize across the cluster, so you can write code in a single driver program and automatically have parts of it run on multiple nodes.
Standalone Applications
Apart from running interactively, Spark can be linked into standalone applications in either Java, Scala, or Python. The main difference from using it in the shell is that you need to initialize your own SparkContext. After that, the API is the same.
Let’s take an example-
Say you have written code in Python; let’s call it helloWorld.py, located in the path “/Users/bsuvro” (I am using a MacBook). Now if we want to run this code using Spark, we need to use spark-submit, as it includes the Spark dependencies in Python. In short, spark-submit sets up the environment for Spark’s Python API to function. At the command prompt we do the following; the output is shown below as well.
(Note that you will have to use backslashes instead of forward slashes on Windows.)
Initializing a SparkContext
Once you have linked an application to Spark, you need to import Spark packages in your program and create a SparkContext. You do so by first creating a ‘SparkConf’ object to configure your application, and then building a SparkContext for it.
These examples show the minimal way to initialize a SparkContext, where you pass two parameters:
A cluster URL, namely local in these examples, which tells Spark how to connect to a cluster. local is a special value that runs Spark on one thread on the local machine, without connecting to a cluster.
An application name, namely My App in these examples. This will identify your application on the cluster manager’s UI if you connect to a cluster.
After you have initialized a SparkContext, you can use all the methods we showed before to create RDDs (e.g., from a text file) and manipulate them.
Finally, to shut down Spark, you can either call the stop() method on your SparkContext or simply exit the application.
Building a Standalone Applications
With this knowledge let’s try to build a standalone application.
Problem Statement : Count the number of words in a textfile.
Solution : The below code is written in PySpark which is the Python API to Spark.
Word Counter using RDD
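The gist that this caption refers to is not embedded in this text, so here is a minimal sketch of an RDD-based word counter, assuming a local master and a README.md in the working directory:

```python
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("WordCounter")
sc = SparkContext(conf=conf)

# Read the file as an RDD of lines, split lines into words,
# pair each word with 1, then sum the counts per word
lines = sc.textFile("README.md")
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

for word, count in counts.collect():
    print(word, count)

sc.stop()
```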
This piece of code you can run on any interactive editor (IDE) like Jupyter Notebook and can immediately get the output.
If you run it as a .py file, then we need spark-submit like below-
DataFrames, the most common Structured API
A DataFrame simply represents a table of data with rows and columns. The list that defines the columns and the types within those columns is called the schema. A Spark DataFrame can span thousands of computers. However, Python/R DataFrames (with some exceptions) exist on one machine rather than multiple machines. This limits what you can do with a given DataFrame to the resources that exist on that specific machine. However, because Spark has language interfaces for both Python and R, it’s quite easy to convert Pandas (Python) DataFrames to Spark DataFrames, and R DataFrames to Spark DataFrames.
Note: Spark has several core abstractions: Datasets, DataFrames, SQL Tables, and Resilient Distributed Datasets (RDDs). These different abstractions all represent distributed collections of data. The easiest and most efficient are the DataFrames.
To allow every executor to perform work in parallel, Spark breaks up the data into chunks called partitions. A partition is a collection of rows that sit on one physical machine in your cluster.
Transformations
In Spark, the core data structures are immutable, meaning they cannot be changed after they’re created. To “change” a DataFrame, you need to instruct Spark how you would like to modify it to do what you want. These instructions are called transformations.
Creating a DataFrame and doing a transformation
Notice that the above code doesn’t return any output. That is because we specified only an abstract transformation; Spark acts only when we perform an Action. This sort of transformation is also called a Narrow Transformation, where one input partition contributes to only one output partition. With Narrow Transformations, Spark automatically performs an operation called pipelining, meaning that if we specify multiple filters (transformations) on DataFrames, they will all be performed in memory.
On the other hand, a Wide Transformation has input partitions contributing to many output partitions. Spark exchanges partitions across the cluster using a shuffle, which is when Spark writes results to disk. This leads to the topic of “lazy evaluation”.
Lazy Evaluation — Spark will wait until the very last moment to execute the graph of computation instructions. In Spark, instead of modifying the data immediately when you express some operation, you build up a plan of transformations that you would like to apply to your source data. By waiting until the last minute to execute the code, Spark compiles this plan from your raw DataFrame transformations to a streamlined physical plan that will run as efficiently as possible across the cluster. It is efficient and Spark can optimize the entire data flow from end to end.
Actions
To trigger the computation, we run an Action. An action instructs Spark to compute a result from a series of transformations.
Actions are typically of three types:
Actions to view the data in the console
Actions to collect the data to native objects in the respective language
Actions to write to output data sources.
In specifying this action, we started a Spark job that runs our filter transformation (a narrow transformation), then an aggregation (a wide transformation) that performs the counts on a per partition basis, and then a collect, which brings our result to a native object in the respective language.
We can monitor these steps using the Spark UI, which monitors Spark jobs running in both cluster and standalone mode. It is available on port 4040 of the driver node; if you are running in local mode, this will be http://localhost:4040.
An End-to-End Example
First, we will get the data. My suggestion would be to get the entire Git repository so that we can follow it as we go along learning Spark.
Using CMD or Terminal, i.e. whichever command prompt your computer uses, run the line of code below.
This will download the entire repository for you in the location from which you run the command. The folder structure will be as follows:
Folder structure of the Git repo
Here, we will use Spark to analyze some flight data.
So, first open up a command prompt. I am on a Mac, so I am using the Terminal; if you are on Windows, use the Windows command prompt. Then go to the folder where you cloned the Git repo.
Then let’s go to the data folder and explore what we have:
Let’s see what we have in one of these csv files:
Now, let’s read that as a Spark DataFrame so that we can work on it. To do this I am just running a Jupyter Notebook from the location where I cloned the Git repo. You can also do it using the PySpark shell, but I like Jupyter Notebook as it’s interactive. I can run it from the shell any time as well, using spark-submit (the reference is given above).
PySpark code for the end to end app
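The notebook gist this caption refers to is not embedded in the text; a minimal sketch of the read step, assuming a SparkSession named `spark` and the folder layout of the cloned repo:

```python
# Reading a CSV is a transformation, so nothing executes yet;
# inferSchema makes Spark peek at a few rows to guess column types
flightData2015 = (spark.read
    .option("inferSchema", "true")
    .option("header", "true")
    .csv("data/flight-data/csv/2015-summary.csv"))

# take() is an action: it brings the first N rows back to the driver
flightData2015.take(3)
```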
Let’s go over step-by-step explaining the above code-
The DataFrame has a set of columns with an unspecified number of rows. The number is unspecified because reading data is a transformation, and is therefore a lazy operation. Spark peeked at only a couple of rows of data to try to guess what type each column should be. Now if we perform the take operation on the DataFrame, it will show the results.
flightData2015 DataFrame snapshot
Let’s sort our data according to the count column. Sort is a transformation that returns a new DataFrame. We can see that Spark is building up a plan for how it will execute this across the cluster by looking at the explain plan. The top part of the plan is the end result, and the bottom the sources of the data.
Note: Sort is a wide transformation because rows will need to be compared with one another.
As sort is a wide transformation, by default Spark performs a shuffle with 200 shuffle partitions. Let’s set this value to 5 to reduce the number of output partitions from the shuffle. Now to kick off this plan, we just specify an Action, as sketched below.
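A sketch of these steps, under the same assumptions as the read example above:

```python
# sort() is a wide transformation; explain() prints the plan Spark builds
flightData2015.sort("count").explain()

# Reduce the shuffle output partitions from the default 200 down to 5
spark.conf.set("spark.sql.shuffle.partitions", "5")

# Specifying an action (take) kicks off the actual computation
flightData2015.sort("count").take(2)
```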
The Explain plan which Spark creates
Action performed after sorting
The process of logical and physical DataFrame manipulation
The logical plan of transformations that we build up defines a lineage for the DataFrame so that at any given point in time, Spark knows how to recompute any partition by performing all of the operations it had before on the same input data.
Now let’s use Spark SQL: you can register any DataFrame as a table or view (a temporary table) and query it using pure SQL. There is no performance difference between writing SQL queries and writing DataFrame code; they both “compile” to the same underlying plan, as the sketch below shows.
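A sketch of the view registration and the multi-transformation query (the exact query in the original gist may differ):

```python
# Register the DataFrame as a temporary view so it can be queried with SQL
flightData2015.createOrReplaceTempView("flight_data_2015")

# The same aggregation written in SQL and as DataFrame code
# compiles to the same underlying plan
sqlWay = spark.sql("""
    SELECT DEST_COUNTRY_NAME, count(1)
    FROM flight_data_2015
    GROUP BY DEST_COUNTRY_NAME
""")
dataFrameWay = flightData2015.groupBy("DEST_COUNTRY_NAME").count()
sqlWay.explain()
dataFrameWay.explain()

# A multi-transformation query: top five destinations by total flights
spark.sql("""
    SELECT DEST_COUNTRY_NAME, sum(count) AS destination_total
    FROM flight_data_2015
    GROUP BY DEST_COUNTRY_NAME
    ORDER BY sum(count) DESC
    LIMIT 5
""").show()
```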
This is our first multi-transformation query and let’s see the output-
Multi-transformation query
This execution plan is a Directed Acyclic Graph (DAG) of transformations, each resulting in a new immutable DataFrame, on which we call an action to generate a result.
The entire DataFrame DAG of transformations
My next blog in this Series
In my next blog I will go over Apache Spark’s vast ecosystem of tools and libraries, and also give an introduction to the Databricks platform.
Sources
Learning Spark by Matei Zaharia, Patrick Wendell, Andy Konwinski, Holden Karau
Spark, The Definitive Guide by Bill Chambers & Matei Zaharia
|
Introduction to Core Spark Concepts
| 44
|
introduction-to-core-spark-concepts-1e4ec879b9af
|
2018-09-17
|
2018-09-17 07:56:42
|
https://medium.com/s/story/introduction-to-core-spark-concepts-1e4ec879b9af
| false
| 2,041
|
Explore AI through the principles of Machine/Statistical Learning, Mathematics and Computer Science.
| null |
suvro.banerjee.37
| null |
Explore Artificial Intelligence
|
suvrobaner@gmail.com
|
explore-artificial-intelligence
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,MATHEMATICS,COMPUTER SCIENCE,DATA SCIENCE
|
Suvro_Banerjee
|
Apache Spark
|
apache-spark
|
Apache Spark
| 877
|
Suvro Banerjee
|
Founder of Explore Artificial Intelligence (https://medium.com/explore-artificial-intelligence) & Machine Learning Engineer @ Juniper Networks
|
ac3247b15c91
|
suvro.banerjee16
| 514
| 32
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-17
|
2017-09-17 03:27:02
|
2017-09-17
|
2017-09-17 06:34:14
| 1
| false
|
en
|
2017-10-22
|
2017-10-22 17:02:18
| 2
|
1e5097db3fca
| 2.4
| 1
| 0
| 0
|
It seems that robots has been an interesting topic with an ethical perspective to people since the first ‘robot’ was created, and it has…
| 4
|
Emotions in Machines
It seems that robots have been an interesting topic from an ethical perspective ever since the first ‘robot’ was created, and the subject has definitely never been more relevant than now. The word ‘roboethics’ will most likely pop up more often as modern research progresses in the fields of artificial intelligence, cognitive science and neuroscience, producing seemingly intelligent systems that are becoming better at imitating human qualities. We do not yet talk about ‘roborights’, so I will attempt to explore the idea without sounding too serious.
When AI is more embedded in society, I think there will be an increasing demand for our attention towards these machines in areas other than marketing, just as your friends want you to talk. Last year we spent around 4 hours per day looking at our smartphones. 65% of those hours were spent communicating. Only a portion of this is used talking to chat-bots today, but potentially it is 80 hours per month worth of data that we feed machines to eventually pass the Turing test (there have been many attempts, but under biased circumstances). Your waitress, Spanish teacher, fitness instructor and psychiatrist can today be made of blabbering algorithms, and through this human-computer interaction they analyze and attempt to reflect our behavior. We are establishing new laws to prevent drones, self-driving cars and other artificially intelligent beings from harming us in the future, and it seems that we focus solely on this perspective for now, which makes it interesting to ask whether there will ever be a different side to ‘who is harming whom’.
Even though human behavior in machines today is superficial, they will eventually achieve complexity to a point where it becomes hard to distinguish human from machine. HBO’s remake of Westworld makes a good example in its premise: we are well aware that our beloved characters are machines, yet we connect the most with these characters. A question arose in my mind on seeing this:
If we are developing machines that crave and feel aversion, is it in any case correct to say that a machine suffers?
The robot Clementine sometimes gets roughed up pretty bad in Westworld
It may sound stupid, because it is not rational to actually adopt laws and deal with this in 2017. But as we simultaneously try to unlock the mystery of the mind and advance intelligent technology, Westworld-like machines may be possible in the (very) distant future. If machines were not solely programmed (as a kind of inheritance) but shaped by their environment as well, and their materials were not hardware but biological materials whose ‘hormones’ could be changed by an external source, would there still be no suffering involved?
I was drawn in by the acts of the robots in Westworld. When something happened to them, I felt just the same as if they were ‘human characters’. What if the solution for introverted kids were robots that can act as best friends, ones they can be truly emotionally invested in? If such a robot is one day taken away from them, the robot might not feel anything at all, but suffering will exist in the kids letting go of their friend. This issue is not only applicable to kids; adults are also more and more attracted to things that are not real.
If machines that we agree are feeling, perceiving entities ever get created, it will become relevant to decide how we treat them and how they should treat us. Not long before that, and definitely not now.
|
Emotions in Machines
| 1
|
machine-emotions-1e5097db3fca
|
2018-01-22
|
2018-01-22 18:11:13
|
https://medium.com/s/story/machine-emotions-1e5097db3fca
| false
| 583
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Simon Nielsen
|
Currently pursuing a degree in Computer Science. Interested in AI.
|
5c7bc73aedea
|
post.smn
| 7
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f5af2b715248
|
2018-05-30
|
2018-05-30 16:14:34
|
2018-05-30
|
2018-05-30 18:06:38
| 2
| false
|
en
|
2018-10-31
|
2018-10-31 15:43:19
| 26
|
1e509a5c4112
| 3.541824
| 95
| 0
| 0
|
An introduction from Sequoia's Data Science team on why data-informed product building has never been more important & how to get started.
| 5
|
Data-Informed Product Building
Image Source: Sequoia Capital
As the world changes and ecosystems evolve, companies are growing faster and products are easier to create than ever before. The time it takes a product to reach 100M monthly active users has shortened dramatically (see Figure 1) — and continues to shrink today.
(From Multiple Sources)
The barrier to entry in creating software is decreasing exponentially as more people learn to code. Cloud services are eliminating the need for tedious development and infrastructure maintenance. As predicted by Moore’s law, the cost of computation is on the decline. Consumer purchasing power continues to grow. Platforms such as Google, Facebook and Amazon are making it easier to reach target audiences, and app stores make distribution a cinch.
As a result, products are creating more data — and strategically collecting and analyzing that data has never been more important. Analytics and data science are now must-haves, not afterthoughts. They are invaluable not only for “counting numbers” and building dashboards, but for helping define goals, roadmaps and strategies. The success of a company is increasingly dependent on the strength of its data science team.
But despite the indispensable nature of data science, there is a scarcity of literature that explains how to conduct useful product analysis. We intend to help fill this knowledge gap with a series of posts on how to build both data-informed products and world-class data science teams (with particular attention to the consumer space).
Our goal is to give you an understanding of how a product evolves from infancy to maturity; a holistic sense of the product metric ecosystem of growth, engagement and monetization; a framework to define goals for your company; and a toolkit you can use to analyze your product’s performance against those goals.
We will offer guidance on the analytical tools, approaches and methods required to build data-informed products. In future posts, we will also cover exploratory analysis, forecasting techniques and machine learning methods — all of which are indispensable in creating product roadmaps and strategies.
In addition, we will provide context on building world-class data science organizations: What is the role of a data scientist? When should you hire them? What skills should they have?
We plan to share several posts per month for the foreseeable future. This introduction is a living document; the table of contents below will be updated as new posts are published.
In a future series, we will discuss similar themes in the context of marketplaces and enterprise companies.
We hope you find these articles useful, and we welcome your feedback: data-science@sequoiacap.com.
Table of Contents
Evolution of a Product: Understand the characteristics of successful products from conception to maturity.
Measuring Product Health: Metrics to diagnose and analyze product health.
Defining Product Success: Metrics and Goals: Setting the right goals and metrics is imperative to product success.
Retention: Techniques to improve user retention and drive growth.
Sustainable Product Growth: Learn about growth pitfalls that can limit long-term success.
Frameworks for Product Success: Understand the need for frameworks by exploring product-focused examples.
Analyzing Metric Changes: Learn how to diagnose shifts in metrics and develop an action plan for monitoring changes in your product.
Analyzing Metric Changes Part II: Product Changes: Understand how to diagnose shifts in metrics resulting from product changes.
Analyzing Metric Changes Part III: Seasonal Factors: Consider how behavioral changes can affect metrics.
Analyzing Metric Changes Part IV: Competition and Other External Factors: Consider how external factors can affect metrics.
Analyzing Metric Changes Part V: Mix Shift: Learn how mix shift can drive metric changes and the techniques used to analyze its effects.
Analyzing Metric Changes VI: Data Quality: Consider how to ensure consistent data quality that enables effective analysis.
Analyzing Metric Changes Part VII: Action Plan: Develop an action plan for monitoring shifts in your metrics.
Leveraging Data To Build Consumer Products: Our story so far.
Engagement Drives Stickiness Drives Retention Drives Growth: Understand the connection between engagement, stickiness, retention and growth.
Engagement: Engaging experiences provide value.
Engagement Part I: Introduction to Activity Feeds: Engagement is the earliest indicator of product market fit.
Engagement Part II: Content Production: Content production is the single most important factor that influences engagement.
Engagement Part III: Connections and Inventory: Connecting people with the right content will drive greater relevant inventory.
Engagement Part IV: Activity Feed Ranking: Activity Feed Ranking is critical for driving Engagement in high inventory situations.
Engagement Part V: Consumption: Delightful consumption of stories leads to higher engagement and ultimately to stickiness, retention and growth.
Engagement Part VI: Feedback: Understanding the various types of feedback and the role feedback plays in building an engaging product.
Engagement Part VII: Summary and Product Implications: Driving a sustainable, highly engaging product requires careful consideration.
Check back next week for more updates!
This work is a product of Sequoia Capital’s Data Science team and originally published at www.sequoiacap.com. Jamie Cuffe, Avanika Narayan, Chandra Narayanan, Hem Wadhar and Jenny Wang contributed to this post. Please email data-science@sequoiacap.com with questions, comments and other feedback.
|
Data-Informed Product Building
| 428
|
data-informed-product-building-1e509a5c4112
|
2018-10-31
|
2018-10-31 15:43:19
|
https://medium.com/s/story/data-informed-product-building-1e509a5c4112
| false
| 837
|
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
| null | null | null |
The Startup
| null |
swlh
|
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
|
thestartup_
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Sequoia
|
From idea to IPO and beyond, Sequoia helps the daring build legendary companies.
|
1877e74f630c
|
sequoia
| 2,237
| 7
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-28
|
2017-09-28 10:15:35
|
2017-09-28
|
2017-09-28 10:27:07
| 1
| false
|
fr
|
2017-09-28
|
2017-09-28 10:27:07
| 0
|
1e5119edc4d1
| 3.049057
| 0
| 0
| 0
|
With aggressive SEO campaigns, LegalTech startups are squatting at the top of Google search results to encourage…
| 3
|
Artificial intelligence and robot lawyers: what opportunities for the lawyer of tomorrow?
With aggressive SEO campaigns, LegalTech startups are squatting at the top of Google search results to encourage litigants to use their platforms. Company creation in 3 clicks, instant generation of commercial leases, free dispute-resolution procedures: digital tools are advancing on every front. On the lawyers' side, tensions are gradually easing. Is this because legal professionals are preparing their own digital revolution? Quietly, robot lawyers are being hired by the biggest international firms. Ross, LiZa or Peter: is artificial intelligence a major opportunity?
Meet Ross, LiZa, Peter & Co
In the race for new technologies, tech companies seem to be gradually vampirizing every sector of the services market, law included. And that is before counting the advent of a new type of innovation, halfway between software and human: robots endowed with the famous artificial intelligence (AI). Sharper than an algorithm that edits legal documents, faster than an automated online filing procedure, the “legal bot” aims to be versatile… and indispensable. What are these robot lawyers worth?
Ross, an “artificial lawyer” hired by a law firm in France
2016: following BakerHostetler in the United States, the Paris firm Latham & Watkins hired Ross for a test phase. This robot lawyer, developed by IBM, specializes in corporate bankruptcies. Its job: exhaustively analyze the case law on the subject and provide lawyers with answers to their questions. Time savings, reliability: one of the firm's partners regards Ross as a real opportunity.
2017: Gavelytics, in the US, also puts its artificial intelligence at the service of lawyers. By analyzing the judicial decisions handed down by American judges, it evaluates a trial lawyer's chances of success at trial. The very recent Gavelytics platform is reminiscent of predictive justice as it is taking shape in France with the project to publish court decisions online…
LiZa, a virtual robot that understands human language
A purely French product, LiZa presents itself as an artificial intelligence capable of settling “any questions or disputes for which representation by a lawyer is not mandatory”. An electronic lawyer, it understands and speaks human language. But unlike Ross, LiZa has no humanoid form: it is closer to the conversational software DoNotPay, developed in 2015 in the United Kingdom to contest parking fines at minimal cost.
In the same vein, the French prodigy Louison Dumont created Peter: this artificial intelligence, specialized in the creation and management of startups, answers by email all the questions young entrepreneurs ask.
How do these messaging systems differ from classic LegalTech? By integrating artificial intelligence, they can hold a conversation with litigants and offer them legally grounded answers without human intervention. A great advance that would further diminish the lawyer's role…
The robot: the lawyer of tomorrow?
The arrival of LegalTech on the legal market cast a shadow over the legal profession. Yet it seemed that these startups were not sharp enough to perform complex acts or provide advice, so LegalTech appeared more as an opportunity for lawyers. The robot lawyer, however, would be endowed with artificial intelligence. This technological advance could change the game: if the robot is capable of advising clients, and if its level of competence and versatility lets it perform all types of acts and procedures, would lawyers be left with nothing but pleading, and even then only where representation by a lawyer is mandatory?
Another issue: AI serves law firms (Ross is the perfect illustration) but also innovative startups (DoNotPay and Peter are “independent” robots), so the lawyers' monopoly once again seems seriously threatened.
Legal professionals will necessarily have to come to terms with artificial intelligence and legal bots. For the moment, the experts seem confident: machines will free up lawyers' time; the lawyer's “human” skills (intuition, psychology, negotiation) are irreplaceable; a “robot assistant” rather than a robot lawyer… plenty of arguments for reassurance.
While awaiting the verdict, it seems clear that the future of firms that live off routine engagements, like that of the paralegal professions, is not very bright.
|
Artificial intelligence and robot lawyers: what opportunities for the lawyer of tomorrow?
| 0
|
intelligence-artificielle-et-robots-avocats-quelles-opportunités-pour-lavocat-de-demain-1e5119edc4d1
|
2018-04-24
|
2018-04-24 07:14:03
|
https://medium.com/s/story/intelligence-artificielle-et-robots-avocats-quelles-opportunités-pour-lavocat-de-demain-1e5119edc4d1
| false
| 755
| null | null | null | null | null | null | null | null | null |
Legaltech
|
legaltech
|
Legaltech
| 1,791
|
Case.one France
|
Le logiciel de gestion dédié aux avocats. Un outil en ligne pour gagner en efficacité. https://case.one/fr
|
1de28a43ccee
|
caseonefrance
| 44
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-15
|
2018-06-15 04:38:42
|
2018-06-15
|
2018-06-15 04:55:17
| 3
| false
|
en
|
2018-06-15
|
2018-06-15 05:53:27
| 0
|
1e512da939d6
| 1.436792
| 5
| 0
| 0
|
1. Simplify to Classify
| 3
|
DOMAIN EXPERTISE TO ENGINEERED MODELS — (Jugaad)
1. Simplify to Classify
Eulerian to Lagrangian
Converting the relative frame of reference from time to position, i.e. from the Eulerian approach to the Lagrangian approach.
There is a fundamental difference in the assumptions behind behavioral versus business challenges. In a commercial/corporate scenario, business parameters keep changing with time, unlike behavioral parameters, where customer taste stays the same over time (for example, when recommending movies, clothes, etc.).
One such use case is explained below: time series data is converted to a classification problem, where the coefficients of the fitted solution act as a proxy for the time series itself; a sketch follows.
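A hedged illustration of this idea, not necessarily the author's exact pipeline: fit a curve to each entity's series and feed the fitted coefficients, rather than the raw time-indexed values, to a classifier. The data here is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def series_to_coeffs(series, degree=2):
    """Fit a polynomial to one time series; its coefficients summarize the shape."""
    t = np.arange(len(series))
    return np.polyfit(t, series, degree)

# Synthetic stand-in data: one weekly series per business entity, plus a label
rng = np.random.default_rng(0)
series_list = [np.cumsum(rng.normal(size=52)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

# Each time-indexed series becomes a small, position-based coefficient vector
X = np.array([series_to_coeffs(s) for s in series_list])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
```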
2. Directional Dimensionality Reduction
Images to Vector
Dimensionality reduction using matrix decomposition to capture the true vectors of the image. Following are 3 representations of the digit “4” with increasing amounts of noise around the image. The positional vectors show consistent repetition of values and directions across all 3 images. These archetype features can be used as a true representation of the image in a classification problem; one possible sketch follows.
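The exact decomposition used is not specified here; as one plausible sketch, a truncated SVD of the pixel matrix yields leading singular vectors that stay roughly stable as noise is added, and those can serve as the archetype features:

```python
import numpy as np

def archetype_features(image, k=3):
    """Keep the top-k singular vectors of an image as compact 'archetype' features."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # Concatenate the leading left/right singular vectors, weighted by singular values
    return np.concatenate([(U[:, :k] * s[:k]).ravel(), Vt[:k, :].ravel()])

# Synthetic stand-in for a digit image: a 28x28 array with increasing noise
rng = np.random.default_rng(0)
clean = np.zeros((28, 28))
clean[4:24, 12:16] = 1.0  # a crude vertical stroke
for noise_level in (0.0, 0.1, 0.3):
    noisy = clean + noise_level * rng.normal(size=(28, 28))
    features = archetype_features(noisy)  # roughly consistent across noise levels
```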
Consistency Check:
To check the consistency of approach below is the representation of digit “2” with the same methodology of matrix decomposition.
Results below indicate a different archetype feature set to represent image “2”.
CNNs and RNNs may be the “perfect methods” for images and time series respectively, but the above jugaads, though they seem imperfect, can fetch you better results.
“Take control of your models. Don’t let model take control”
|
DOMAIN EXPERTISE TO ENGINEERED MODELS — (Jugaad)
| 56
|
domain-expertise-to-engineered-models-jugaad-1e512da939d6
|
2018-06-17
|
2018-06-17 02:23:29
|
https://medium.com/s/story/domain-expertise-to-engineered-models-jugaad-1e512da939d6
| false
| 235
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Rahul Kharat
|
11 Indian Patents | 2 International Publications | Corporate Trainer | AI Solutions Architect | Business Consulting | Energy Auditor/Manager | Six Sigma BB
|
ece4a3aadf
|
iRahulKharat
| 16
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-11
|
2018-07-11 14:41:54
|
2018-07-13
|
2018-07-13 07:39:04
| 0
| false
|
en
|
2018-07-13
|
2018-07-13 07:39:04
| 0
|
1e522c6425cf
| 2.132075
| 10
| 0
| 0
|
This 500 words essay was chosen as the best entry submitted for the prompt “Discuss a current technological trend/issue and provide…
| 4
|
The Art of Automation
This 500 words essay was chosen as the best entry submitted for the prompt “Discuss a current technological trend/issue and provide original personal insights on the impact it has on other aspects of the society (economy, business, environment, psychology, etc)” as a part of the Jitheshraj Scholarship application 2017–2018.
Written by Himanshi Gupta, NIT Trichy- class of 2021.
“You are either the one that creates automation or you are getting automated.” -Tom Preston Werner
The machines haven’t taken over. Not yet, at least. However, they are seeping their way into our lives, affecting how we live, work and entertain ourselves. From voice-powered personal assistants like Siri and Alexa to more underlying and fundamental technologies such as behavioural algorithms, suggestive searches and autonomous self-driving vehicles boasting powerful predictive capabilities, there are several examples of automation in use today.
The concept of automation is that computer systems can be used to perform tasks that would normally require a human. These can range from speech recognition and translation into different languages, all the way to visual perception and even decision making. A true artificially-intelligent system is one that can improve on past iterations, getting smarter and more aware, enhancing its capabilities and its knowledge.
Automation has affected lives both individually and globally.
To speak of social lives of humans in particular, automated chat-bots have distanced us from our basic virtues of social interaction. The popularity of chat-bots is increasing, but they often straddle a fine line between helpful tool and clunky distraction in the customer experience. Chat-bots should be a supplement to human agents, not a replacement. But it cannot be denied that chat-bots provide great user experience and act as a factotum for all business needs.
Automation was originally introduced merely so that employees could focus on bigger things and not waste their time on menial jobs. Automation can be positive for business by increasing productivity, reducing wage costs, increasing profit margins, and filling labour shortages. While the computer revolution has created hundreds of thousands of new jobs, it has threatened as many other jobs with obsolescence and has often caused the displacement of workers by computer-based machines. Automation threatens 69% of the jobs in India, according to a World Bank report. Such an effect on the economy would also have a major impact on the political scenario.
News organizations will soon use automated bots to sort and tag articles in real time. We’ll see advanced bots manipulating social media and stocks simultaneously. In the sports industry, AI seeks to evolve technology in hopes of bringing automation and increased data analysis to business decisions, sponsorship activations, ticket sales, athlete training and more.
“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking told the BBC in 2014. In addition, there’s also a spiritual dimension to AI which increases the chances of theological disruption.
The truth is that, whether or not AI is out there, or is actually a threat to our existence, there is no stopping its evolution and its rise. Humans have always fixated on improving lives across every spectrum, and technology has become the vehicle for doing just that. Hence, with the aid of AI, the next 100 years are set to pave the way for a multi-generational leap forward.
|
The Art of Automation
| 26
|
the-art-of-automation-1e522c6425cf
|
2018-07-13
|
2018-07-13 07:39:05
|
https://medium.com/s/story/the-art-of-automation-1e522c6425cf
| false
| 565
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Jitheshraj Scholarship for promising freshmen
|
A scholarship exclusively for college freshmen in India. Currently active in NIT Trichy.
|
73283cc80e5f
|
info.jrscholarship
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-07
|
2018-08-07 16:53:58
|
2018-08-08
|
2018-08-08 17:56:01
| 1
| false
|
en
|
2018-08-08
|
2018-08-08 17:56:01
| 1
|
1e551283a370
| 0.988679
| 0
| 0
| 0
|
Every time you want to acquire a new product or service from a physical store or business, you may go there and ask to speak to a…
| 5
|
What are Recommender Systems and Why Should I Care?
Every time you want to acquire a new product or service from a physical store or business, you may go there and ask to speak to a salesperson. You will tell them what exactly it is you want from said product, such as a laptop specifically with an SSD and a touchscreen, and they, in turn, will ask you questions to better understand what it is you want. From there, they’ll make some recommendations based off of your conversation. Sometimes you rely on this discourse to decide on what product is right for you.
That job of a salesperson is what a recommender system does, but in the digital world. Whenever you search for a product or service, it will provide advice based on your past purchases and searches, and on the purchases of customers who bought items similar to yours, including what they bought after those items. A recommender system will try to find the best products or services for your needs.
In the case of a recommender system, instead of asking you about your needs, it will infer them based on your searches, your purchases, and the data available about you and your behavior.
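A minimal sketch of the “customers who bought this also bought” idea described above, with toy purchase data (not any particular production system):

```python
from collections import Counter
from itertools import combinations

# Toy purchase histories: one set of items per customer
baskets = [
    {"laptop", "ssd", "mouse"},
    {"laptop", "ssd", "sleeve"},
    {"laptop", "mouse"},
]

# Count how often each pair of items is bought together
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=2):
    """Rank other items by how often they co-occur with `item`."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("laptop"))  # ['mouse', 'ssd']
```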
|
What are Recommender Systems and Why Should I Care?
| 0
|
what-are-recommender-systems-and-why-should-i-care-1e551283a370
|
2018-08-08
|
2018-08-08 17:56:01
|
https://medium.com/s/story/what-are-recommender-systems-and-why-should-i-care-1e551283a370
| false
| 209
| null | null | null | null | null | null | null | null | null |
Recommendation System
|
recommendation-system
|
Recommendation System
| 429
|
#ODSC - The Data Science Community
|
Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
|
2b9d62538208
|
ODSC
| 665
| 19
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-15
|
2018-02-15 11:54:30
|
2018-02-15
|
2018-02-15 11:58:02
| 1
| false
|
en
|
2018-02-15
|
2018-02-15 11:58:02
| 1
|
1e5616485e97
| 2.422642
| 1
| 0
| 0
|
What is Artificial Intelligence (AI)?
| 5
|
Where AI, ML and NLP can be used in your business?
What is Artificial Intelligence (AI)?
Simply stated, AI enables machines to reason and perform tasks in ways that humans do. Artificial Intelligence is a collection of advanced technologies that allow machines to sense, comprehend, act and learn. Artificial intelligence is the key to machines understanding human languages. John McCarthy says “AI is the science and engineering of making intelligent machines, especially intelligent computer programs”.
Artificial Intelligence allows machines to sense, comprehend, act and learn, whereas Machine Learning gives computers the ability to learn without being explicitly programmed. Together with Natural Language Processing (NLP), they can transform any business and help solve many business problems. Let’s see where you can use Artificial Intelligence, Natural Language Processing and Machine Learning in your business:
1. Detecting User Intent
You can use Machine Learning together with NLP to identify the sentiment, language and topics of any given text. This can be useful in call centres or customer service departments where you want to analyze customer interactions using support tickets, emails, social media posts and phone transcriptions; a minimal sketch follows.
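A minimal sketch of the sentiment-detection piece, with toy labeled tickets (a real system would train on far more data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled support tickets: 1 = positive, 0 = negative
tickets = ["great service, thank you", "very helpful agent",
           "still broken after two calls", "worst experience ever"]
labels = [1, 1, 0, 0]

# TF-IDF turns each ticket into numeric features; logistic regression classifies
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

print(model.predict(["the agent was very helpful"]))  # expected: [1]
```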
2. Classifying Images Correctly
If you are manually classifying images, then you know how time-consuming it is. But with AI and ML, you can classify a large number of images appropriately in no time. In fact, you can detect inappropriate content without involving any human eye. It can be used to identify objects, people, text, scenes or faces. So, if you want to try a new generation of attendance system in your office, Machine Learning can detect, analyze, and compare faces for you.
3. Predicting Results
No, Machine Learning is not a fortune teller, but it can still help you predict the outcome of your activities. If you supply it with the past few years’ sales records, it can predict how much you are going to sell this year! The more the data, the better the prediction. It can also be used to predict customer churn based on statistics and customer patterns.
4. Matching Candidates
You can use AI and ML for matching the best resumes for your job requirements. With the use of NLP and ML, you can easily classify the resumes based on experience, skills and regions. In fact, you can automatically filter out all the inappropriate resumes based on your rule sets and can select the best one that matches your requirements.
5. Analyzing Videos
AI and Machine Learning can be used to analyze videos in your organization. You can do people counting, gender identification or even user tracking by analyzing videos in your office or store. Check the interesting Case Study where we did customer counting via webcam in retail stores in Australia. The analysis can be done on real-time videos or pre-stored videos in batches from a cloud storage. It can be used to automatically detect explicit or suggestive content in videos too.
With the new set of frameworks and tools, gone are the days of costly and lengthy implementations of AI and ML applications. Now any experienced developer can design and deploy an AI/ML application within two to three months. And you need not have a big budget for such an application, as there are plenty of affordable API-based options available. Amazon, IBM and Google are the leaders in this space, but there are smaller, more specialized providers too; for example, if you want to build chatbots, you can quickly use wit.ai or gupshup.io.
|
Where AI, ML and NLP can be used in your business?
| 2
|
where-ai-ml-and-nlp-can-be-used-in-your-business-1e5616485e97
|
2018-02-15
|
2018-02-15 18:05:10
|
https://medium.com/s/story/where-ai-ml-and-nlp-can-be-used-in-your-business-1e5616485e97
| false
| 589
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
valendu
|
Artificial intelligence is the future
|
1a09d2c5cca3
|
valendu
| 2
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
6d6dcd6ce26d
|
2018-02-28
|
2018-02-28 06:39:04
|
2018-03-01
|
2018-03-01 03:43:24
| 15
| false
|
en
|
2018-03-12
|
2018-03-12 06:36:06
| 18
|
1e562d40f108
| 3.360377
| 0
| 0
| 0
|
This was first published on January 19th, 2018 at Subir Mansukhani’s Mailing List, Subirority Complex.
| 5
|
Subirority Complex — Issue #19
This was first published on January 19th, 2018 at Subir Mansukhani’s Mailing List, Subirority Complex.
General
Data as jet fuel: An interview with Boeing’s CIO | McKinsey & Company
It isn’t always comfortable, but data analytics is helping Boeing reach new heights.
Don’t Dismantle Data Silos, Build Bridges
Dismantling data silos sounds like a fine aim. But after years of trying and failing, the author suggests a new approach: build bridges
Competing with BigCo: 2018 Edition — Learning By Shipping
Conventional wisdom for 2018 is focused on mega-cap tech and many seem to say startups are in a bit of a lull. History tells us now's the best time to create innovative companies and products.
Machine Learning
Welcoming the Era of Deep Neuroevolution — Uber Engineering Blog
Square off: Machine learning libraries — O’Reilly Media
Top five characteristics to consider when deciding which library to use.
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile
Introduction to Various Reinforcement Learning Algorithms I (Q-Learning, SARSA, DQN, DDPG)
Reinforcement Learning (RL) refers to a kind of Machine Learning method in which the agent receives a delayed reward in the next time step to evaluate its previous action. It was mostly used in games…
eCommerceGAN : A Generative Adversarial Network for E-commerce
Yann LeCun — OK, Deep Learning has outlived its usefulness… | Facebook
OK, Deep Learning has outlived its usefulness as a buzz-phrase.
Deep Learning est mort. Vive Differentiable Programming! Yeah, Differentiable…
Technology
Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective — Facebook Research
Building data infrastructure in Coursera — Zhaojun Zhang — Medium
I’ve been working on building data infrastructure at Coursera for about 3.5 years. This week, I had an opportunity to speak at the Data Engineering in EdTech event at Udemy about our data infrastructure…
CES Was Full of Useless Robots and Machines That Don’t Work
This year’s electronics expo promised a ‘better life’ and ‘better world.’ It instead offered a folding machine that can’t fold sweatshirts.
This is what a 50-qubit quantum computer looks like
DataViz/UI/UX
Here’s How Amazon’s Alexa Hooks You — Startup Pulse
Nir’s Note: This guest post is by Darren Austin, Partner Director of Product Management at Microsoft. Last year we added a new member to our household. I must admit that upon first meeting her, our…
A Year of Learning and Leading UX at Google — Google Design — Medium
I joined Google as a Vice President of User Experience (UX) a little over a year ago. To be frank, Google wasn’t at the forefront of my mind for a next career step. I even passed on a job offer in…
Visualizing the Uncertainty in Data | FlowingData
The Art of Effective Visualization of Multi-dimensional Data
Descriptive Analytics is one of the core components of any analysis life-cycle pertaining to a data science project or even specific research. Data aggregation, summarization and visualization are…
|
Subirority Complex — Issue #19
| 0
|
subirority-complex-issue-19-1e562d40f108
|
2018-03-12
|
2018-03-12 06:36:07
|
https://medium.com/s/story/subirority-complex-issue-19-1e562d40f108
| false
| 493
|
Change gears with Clutch.AI: the best in enterprise AI
| null |
clutchai
| null |
Clutch.AI
|
jerry@khoslalabs.com
|
clutch-ai
|
AI,STARTUP,MACHINE LEARNING,DEEP LEARNING,PREDICTIVE ANALYTICS
|
clutch_ai
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Clutch.AI
|
4 tools & 1 service that will change the way AI is used across industries, from #fintech & #sales to #insurance & beyond! Incubated @KhoslaLabs #AI #ML #startup
|
9f35ffb4ed61
|
clutch_ai
| 35
| 52
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-16
|
2018-08-16 16:46:55
|
2018-08-16
|
2018-08-16 16:48:30
| 2
| false
|
en
|
2018-08-16
|
2018-08-16 16:49:02
| 3
|
1e56549041cc
| 1.447484
| 0
| 0
| 0
|
The AI & Big Data Expo North America is arriving in one of the world’s tech hubs (Silicon Valley) this fall (28–29 November), aiming to…
| 5
|
Examine the future of Enterprise Technology in Silicon Valley this fall
Learn about the convergence of new technologies: AI, Big Data, IoT, Blockchain, Cyber Security & Cloud
The AI & Big Data Expo North America is arriving in one of the world’s tech hubs (Silicon Valley) this fall (28–29 November), aiming to deliver AI & Big Data for a smarter future. The AI & Big Data Expo is co-located with 3 other events giving attendees the opportunity to explore four enterprise technologies in one place. The event series is co-located with the leading IoT Tech Expo, the largest Blockchain Expo and the Cyber Security & Cloud Expo.
13,000 attendees. 4 events. 24 conference tracks. 500+ speakers. 350+ exhibitors, including SAS, Intel, Cisco and more…
Speakers across the shows include:
- Mike Vladimer, Co-Founder, IoT Studio, Orange
- Joseph M. Bradley, Global Vice President, Digital & IoT Advanced Services, Cisco Systems
- Richard Han, Director of Business Development, Sigfox USA
- Bryan Semkuley, Vice-President of Global Innovation (Former), Kimberly-Clark Professional
- Martin Fink, Chief Technology Officer, Western Digital
- Tom White, Chief Product Officer, AI Data Innovations
- Daniel Caraviello, Global Leader, Strategic Alliances & Talent Acquisition, Data Science & Informatics, Dow Agrosciences
- Anju Gupta, Head of Sustainability Campaign, Syngenta
- David Ledbetter, Senior Data Scientist, Children’s Hospital Los Angeles
- Marta Induni, Senior Director of Operations, Cancer Registry of Greater California (CRGC)
- Chris Ballinger, CFO & Head of Mobility, Toyota
- Jenny Elfsberg, Director Innovation Lab Hub US, Volvo Group
- Rahul Vijay, Head of Global Strategic Sourcing, Uber
- Jack Hanlon, VP of Analytics, Jet.com.
Some of the expert speakers you can expect to see
Super early bird registration ends this Friday, August 17th. Save $450 on your all-access Ultimate Pass. You can view the ticket options and register for the event here: https://www.ai-expo.net/northamerica
We hope to see many of you there!
|
Examine the future of Enterprise Technology in Silicon Valley this fall
| 0
|
examine-the-future-of-enterprise-technology-in-silicon-valley-this-fall-1e56549041cc
|
2018-08-16
|
2018-08-16 16:49:02
|
https://medium.com/s/story/examine-the-future-of-enterprise-technology-in-silicon-valley-this-fall-1e56549041cc
| false
| 282
| null | null | null | null | null | null | null | null | null |
Big Data
|
big-data
|
Big Data
| 24,602
|
AI & Big Data Expo
|
AI & Big Data Expo - Conference & Exhibition exploring Artificial Intelligence, Big Data, Deep Learning, Machine Learning, Chatbots & more.
|
42760402f1a7
|
ai_expo
| 254
| 100
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-22
|
2018-02-22 14:50:24
|
2018-02-22
|
2018-02-22 15:03:19
| 3
| false
|
en
|
2018-02-22
|
2018-02-22 15:03:19
| 1
|
1e56f6c63ccc
| 3.610377
| 0
| 0
| 0
|
The opportunity is in closing the empathy gap between artificial intelligence and human empathy
| 5
|
AI in Healthcare @ Foolproof
The opportunity is in closing the empathy gap between artificial intelligence and human empathy
Once again, Foolproof hosted The UX Crunch for an event discussing the challenges and opportunities in designing UX for the healthcare sector with artificial intelligence (AI). The speakers included Foolproof’s very own Tim Caynes, Jane Vance, and Joseph Wolanski from Ada Health. For those of you who missed out, shame on you, but here are some of my key takeaways from the evening.
Foolproof’s hypothesis for AI in healthcare
One of our biggest challenges and opportunities is closing the gap between artificial intelligence and human empathy (the empathy gap). How do we use potentially intelligent systems to enable patients to get the help they need, with care and compassion, in potentially difficult times, even when they don't know they need it? Foolproof are leveraging AI to bring systems and humans together for better outcomes, and are using a thorough understanding of context to infer those outcomes.
Three questions to ask at any point in a journey to help understand that context
Where are users in a journey? What did they do to get there? Where can they go next?
Use dynamic pathways
When designing the journey through a system, most designers might use static pathways that are set to be the same for each user, guiding where they go from one screen to another. Foolproof are using dynamic pathways with some degree of intelligence, so that users can manage their own pathways and effectively create their most desired path through the system.
Tim Caynes on dynamic pathways
Better still, use behavioural pathways
Behavioural pathways enable designers to understand and infer certain information about users from their interaction with the service, such as where they’ve been, where they are and where they might go next (remember the three questions for context above). We can infer users’ behaviours to understand what their next desired outcome might be.
Implement AI in systems with “the experience stack”
An invention of Tim Caynes, the experience stack abstracts the experience from the platform itself (where the intelligence is applied) and from the data that is used. By separating these three elements, we can architect systems that allow for behavioural pathways much more clearly, because of the ability to follow a pathway from information provided by the user through to an outcome.
With the limited resources of healthcare practitioners, designers can improve the experience of patients by:
Understanding the experiences of people with various health conditions.
Seeing the journey that they go on.
Defining the phases of that journey.
Identifying the kinds of information they need across those phases.
However, when deciding when users need specific types of information, it can be very difficult to identify exactly where people are in their journey and when they are transitioning from one stage to another.
Opportunities to improve experiences in healthcare using AI
Provide relevant information at the relevant time.
Provide a frictionless experience for people who are unwell and are feeling stressed (absorbing information whilst stressed due to a health condition can be very difficult).
Foolproof’s work with SimplyHealth
Foolproof applied these methodologies to a product called Care For Life, which is run by SimplyHealth. The idea was to provide relevant information to support carers of elderly family members over the long term and across the various stages of care.
In the research stage, they identified three groups of users whose behaviour and needs differed depending on where they were in relation to their loved one’s ageing:
Pre-active carer: they are aware that care might be needed in the future so are starting to look into it.
Active carer: an incident has happened which means that they have needed to increase the levels of care.
Committed carer: they have been caring for their elderly family member for some time now and may have even transitioned into full-time care for them.
The experience stack enabled the Foolproof team to develop a different interface for each user type, changing depending on where they are throughout that journey. Whilst the general theme and brand of course remained the same, changes between the three interfaces manifested in colours, content and the copy used.
One key challenge was to assess which group users fit into as they came to the website. As a general rule of thumb, most people are not particularly proficient or impartial when identifying with a group, so the system needed to use well-crafted questions to present users with the most appropriate of the three interfaces.
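A toy sketch of how such question-driven segmentation might look in code (the questions and rules here are hypothetical, illustrating the approach rather than Foolproof's actual logic):

```python
# Minimal sketch: mapping onboarding answers to a carer segment.
# The answer keys and rules below are hypothetical.
def classify_carer(answers: dict) -> str:
    if answers.get("cares_full_time"):
        return "committed"   # long-term, possibly full-time care
    if answers.get("recent_incident"):
        return "active"      # an incident has increased care needs
    return "pre-active"      # researching care for the future

print(classify_carer({"recent_incident": True}))  # -> "active"
```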
In this way, users progressed along the journey being provided with the information most relevant to them, when they needed it, with minimal sorting or searching on their part — using intelligent systems for the desired outcome, from a bedrock of behavioural data.
The Gartner Hype Cycle as it related to the adoption of AI in the healthcare ecosystem
|
AI in Healthcare @ Foolproof
| 0
|
ai-in-healthcare-foolproof-1e56f6c63ccc
|
2018-02-22
|
2018-02-22 15:03:21
|
https://medium.com/s/story/ai-in-healthcare-foolproof-1e56f6c63ccc
| false
| 811
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Avi Mair
|
At the intersection of UX and business
|
a3afca5af9e0
|
avimair
| 26
| 42
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-07
|
2018-08-07 03:46:46
|
2018-08-07
|
2018-08-07 03:47:07
| 0
| false
|
en
|
2018-08-07
|
2018-08-07 03:57:30
| 0
|
1e583e3e9afd
| 1.343396
| 0
| 0
| 0
|
“A field where your entire day can be ruined by a misplaced comma.” A better description of programming has yet to be found. Yet there’s…
| 4
|
Misplaced Comma
“A field where your entire day can be ruined by a misplaced comma.” A better description of programming has yet to be found. Yet there’s something alluring in the possibility that this failure in your logic may be a fundamental misunderstanding… or just a clumsy finger.
The point of programming, of course, is much larger. Space race, facial recognition, predicting the future, artificial intelligence — the world of science fiction abounds with ideas that stretch the limits of believability. Ultimately futile attempts by mankind to capture the intricacies of how our brains function, how they identify and combine patterns and relate to one another, and to force various hunks of metal to do the same.
What is a program after all? It is a series of steps, so explicit it could be told to a baby. Well, not a baby — they have a human mind, and therefore a human’s computational power. The proverbial rubber duck, then, that childhood memento all beginner programmers are told to translate their thoughts to.
We all know that learning French or Mandarin gives you access to a world that plain old English wouldn’t. Although we are all human, there are a million different ways to live and to experience that life, and over 6,500 ways people have attempted to put labels on it. When communication is garbled because two people do not speak the same language, the resulting misunderstanding is referred to as being “lost in translation”. Even though we have not, as a species, perfected the art of communicating with each other, we have moved on ahead to the next great adventure. Like Frankenstein, programmers believe that they can instill knowledge about the world at large, and directions for how to behave, onto their own creation.
Programming languages let a human teach a machine to think how a human would, but better, faster, on a grander scale. What could be more exotic than that? What could be more eye-opening?
And perhaps that is why it is only fitting that this kingdom can often crumble on that most human of errors — a typo.
|
Misplaced Comma
| 0
|
misplaced-comma-1e583e3e9afd
|
2018-08-07
|
2018-08-07 03:57:30
|
https://medium.com/s/story/misplaced-comma-1e583e3e9afd
| false
| 356
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Devi
| null |
d7f2248d7d2d
|
natashamathur
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-23
|
2018-07-23 14:02:04
|
2018-07-23
|
2018-07-23 14:02:46
| 0
| false
|
en
|
2018-07-23
|
2018-07-23 14:02:46
| 2
|
1e58c2166ea4
| 1.581132
| 0
| 0
| 0
|
Over the last few years, Artificial Intelligence has become a pivotal role in our lives and the business landscape. Irrespective of the…
| 3
|
The impact of Artificial Intelligence in Transportation
Over the last few years, Artificial Intelligence has come to play a pivotal role in our lives and in the business landscape. Irrespective of the industry, AI has simplified processes and empowered business decisions with prediction. When it comes to the transportation and logistics industry, this technology has started to impact businesses at a deeper level.
What challenges does AI solve in transportation?
Be it passenger or goods transportation, the industry has constantly faced major challenges. Fleet owners, especially those who operate and manage long-haul trucks, face common problems such as:
i) Knowing the exact location of their fleet
ii) Educating their drivers about safety protocols and proper handling of trucks
iii) Periodic maintenance and fleet health management.
With the growing intervention of technology in the transportation and logistics industry, businesses have been leveraging innovative solutions to address these obstacles. GPS-based fleet management systems have already been successful in solving the first two challenges to an extent, through driver behavior monitoring and the analysis of metrics like acceleration, braking and speeding. Fleet owners use these metrics as prescriptive data points for corrective action.
But what fleet owners actually require is for all this information to be used as collective, integrated data points that, in turn, automate the entire maintenance life cycle of the fleet and predict maintenance cycles and vehicle health.
Artificial Intelligence has been the frontrunner in this aspect with its ability to identify patterns and provide prediction. And that’s where AI can make an enormous difference for the fleet owners.
Predictive Maintenance
With all the data available from GPS and IoT devices, and based on the maintenance records of the fleet, AI can predict vehicle health from how the vehicle is utilized and how it is handled by its operators.
Currently, most fleet owners find it very tedious to manage the maintenance schedule of their fleet. Creating scheduled reminders and keeping track of them by designating resources is even more complicated. When telematics data comes into play, this business problem becomes easy to solve. Telematics data, generally obtained from on-board devices and other sources, can be processed to provide appropriate schedule suggestions. This will play a major role in drastically cutting down operational costs.
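To illustrate the idea, here is a minimal sketch that learns a maintenance flag from a few telematics features using scikit-learn; the feature names and figures are invented, and a real model would be trained on far more history.

```python
# Minimal sketch: learning a maintenance flag from telematics features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: km since last service, harsh-braking events/week, avg engine temp
X = np.array([
    [2000, 1, 88],
    [14500, 9, 97],
    [6300, 3, 90],
    [18200, 12, 99],
    [9800, 5, 92],
])
y = np.array([0, 1, 0, 1, 0])  # 1 = needed unscheduled maintenance

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[15000, 8, 96]]))  # flags a likely maintenance case
```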
With such intelligence at its disposal, the logistics and transportation industry now has the ability to broaden its opportunities and gain increased cost-efficiency and bottom-line results.
|
The impact of Artificial Intelligence in Transportation
| 0
|
the-impact-of-artificial-intelligence-in-transportation-1e58c2166ea4
|
2018-07-23
|
2018-07-23 14:02:46
|
https://medium.com/s/story/the-impact-of-artificial-intelligence-in-transportation-1e58c2166ea4
| false
| 419
| null | null | null | null | null | null | null | null | null |
Startup
|
startup
|
Startup
| 331,914
|
Shruthi Rakesh
| null |
f46deaf2c33c
|
cerebrox2018
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-08
|
2018-01-08 11:42:21
|
2018-01-08
|
2018-01-08 11:42:55
| 1
| false
|
en
|
2018-01-12
|
2018-01-12 06:43:44
| 1
|
1e5afba8bca1
| 1.558491
| 0
| 0
| 0
|
8 Great Applications Of “Machine Learning”
| 1
|
8 Great Applications Of “Machine Learning”
8 Great Applications Of “Machine Learning”
“In this article, I will shed a brief light on some areas where machine learning is revolutionizing the state of affairs and that are most likely to adopt machine learning like never before.”
1. Cyber Security
Existing systems that monitor traffic coming from outside nodes, or traffic exchanged between internal PCs and servers, can be defeated by the sheer volume and variety of traffic. In a nutshell, all existing IDS (intrusion detection systems) can only raise a limited number of alerts per unit of time, and with IoT growing the way it is, these limits will be insufficient to safeguard enterprise systems. Machine Learning can align a cyber security strategy with new-age threats.
2. Malware
At the rate at which new malware is generated, it will be impossible for existing tools and methods to detect all of it. The bigger challenge is the mutation of malware, where most new malware differs by less than 2% from previous malware. This slight change in a malware's signature, applied at a gigantic scale, poses a tough challenge. New-age deep learning models are capable of meeting it.
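As a toy illustration of why such small mutations are still catchable, the sketch below flags a sample whose bytes are nearly identical to a known specimen; real systems work on richer features than raw bytes, but the similarity idea carries over.

```python
# Minimal sketch: flagging a near-duplicate ("mutated") sample via
# byte-level similarity. The byte strings are toy stand-ins.
from difflib import SequenceMatcher

known_malware = b"\x4d\x5a\x90\x00" * 50               # toy known sample
new_sample = known_malware[:-4] + b"\x41\x41\x41\x41"  # ~2% changed

ratio = SequenceMatcher(None, known_malware, new_sample,
                        autojunk=False).ratio()
if ratio >= 0.95:
    print(f"likely mutation of known malware ({ratio:.1%} similar)")
```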
3. Legal Documents
Legal documents are lengthy and complex for a normal executive to study; unless a costly lawyer is hired, they are often not fully read, in the belief that everything will be all right. By using deep learning and topological data analysis, a complex legal document can be translated into long strings of numbers, so both the sheer number of documents and the complexity of any single document can be tackled by Machine Learning.
4. Health Care
Continuously improving medicines against countless patterns is a job Machine Learning can do better. Medical experts, along with data scientists, are building regression models to look at the relationships between the independent variables that drive future events. A person's medical history can also provide a lot of information about future health risks. Deep learning of these patterns can reduce avoidable hospitalizations and emergency situations.
For more details, please visit our full page: Click here…
|
8 Great Applications Of “Machine Learning”
| 0
|
8-great-applications-of-machine-learning-1e5afba8bca1
|
2018-01-12
|
2018-01-12 06:43:44
|
https://medium.com/s/story/8-great-applications-of-machine-learning-1e5afba8bca1
| false
| 360
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Sana Mulla
| null |
147112e3b5f1
|
sanamulla_33415
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
ebb0a73f952
|
2018-07-31
|
2018-07-31 17:08:28
|
2018-08-03
|
2018-08-03 18:44:45
| 2
| false
|
en
|
2018-08-03
|
2018-08-03 18:44:45
| 4
|
1e5b2b19f038
| 3.617296
| 11
| 0
| 0
|
This week we are back at work with our research and the group consisting of, Vivian Nguyen, Huy Nguyen, Tien Tran, and Citlalin Galvan. In…
| 3
|
Team Aladdin — The journey this week!
Source: androidauthority.com
This week we are back at work on our research with the group, consisting of Vivian Nguyen, Huy Nguyen, Tien Tran, and Citlalin Galvan. In the first week we focused on malicious APK files and their behavior inside Android phones. This week we wanted to go broader and survey the kind of data that has already been collected. As we looked for the data, we were surprised at how much is out there, but how little documentation exists for those datasets, especially ones pertaining to cyber security.
Mission Statement
Do data quality analysis on existing data sets and publish a report with a program for others to gain knowledge.
One of the biggest problems in Machine Learning is the quality of training datasets in Cyber Security. The available data suggests the focus has been quantity over quality, but someone should still check those datasets and see what is important and what can be discarded. That is where we come into play!
The questions we are working to answer this week are:
How to analyze the data with pandas?
How can we see if the data is still relevant?
How to profile the data?
How to create a data monitoring scorecard?
Our Progress
Our obstacle when it came to analyzing the datasets was that no one on the team had any prior experience with data science and analysis. So, we made it our first priority to research the topic and gain as much knowledge as we could. We found two great resources that were really helpful: DataCamp and edX. These were greatly useful for learning how to analyze datasets using Python and pandas.
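To give a flavour of what we learned, here is the kind of first-pass profiling we can now run on a converted dataset; the filename is a placeholder for one of the SecRepo files.

```python
# Minimal sketch: a first look at a converted dataset with pandas.
import pandas as pd

df = pd.read_csv("connections.csv")  # placeholder filename

print(df.shape)          # rows x columns
print(df.dtypes)         # inferred type of each feature
print(df.isna().sum())   # missing values per column
print(df.describe())     # summary stats for numeric columns
print(df.head())         # eyeball the first few records
```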
To avoid feeling overwhelmed by how much information is out there and everything we wanted to do with the project and report, we broke the analysis process down into smaller, easier-to-handle steps.
Data Requirements — Even though there is a lot of data shared online, we had to focus on data related to cybersecurity. That proved to be a challenge all on its own. Not many companies are allowed to share data related to security, and open source projects were limited in the information they provided.
Data Collection — In our pursuit to find any data set related to security, we stumbled upon SecRepo, a website that has loads of samples of security related data that would be very useful for our project.
Data Processing — Log files and JSON files found inside SecRepo were converted to CSV files for clarity and ease of analysis. Also, the python tool for data analysis, Pandas, facilitates a lot of manipulations on CSV files. Refer to Log to CSV File Converter Program below for more information.
Data Cleaning — This step is where we check which features are important and learn their essence, since many of the datasets do not have their features labeled.
Research Performed this week
To network and connect with companies and people leading in this field, we reached out to the owner of SecRepo, Mike Sconzo, about how we could contribute to his open source project and ways to improve the quality of the datasets found on his website. His website contains Network, Malware, and System datasets he created or has collected from third-party websites.
A good place to study dataset analysis was UC Irvine's Machine Learning Repository. It has very detailed analysis reports on every dataset, open to the public to view. It is a great example of the kind of analysis we want to do for our own dataset report.
We are currently in the process of emailing two professors who work on adversarial machine learning, to ask for their guidance and for resources that could help us apply machine learning to cybersecurity.
Log to CSV File Converter Program
LogtoCsvConverter.py(https://github.com/cyberdefenders/MachineLearning)
In this program, we convert a log file into a CSV file. First, we download the log file and check the file path. Once the program runs, we input the file path, wait a bit while the file gets converted, and the program then asks us to rename the output to a preferred name. The new CSV file appears in the current path of the code. We kept it simple and user-friendly, so that everyone can use the converter from the command line.
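For readers who just want the gist without opening the repository, here is a rough sketch of what such a converter can look like; it is a simplified stand-in for our LogtoCsvConverter.py and assumes whitespace-delimited log lines.

```python
# Simplified sketch of a log-to-CSV converter (not the exact
# LogtoCsvConverter.py): each whitespace-delimited log field
# becomes one CSV column.
import csv

def log_to_csv(log_path: str, csv_path: str) -> None:
    with open(log_path) as log_file, open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        for line in log_file:
            line = line.strip()
            if line and not line.startswith("#"):  # skip comments/blanks
                writer.writerow(line.split())

log_to_csv(input("Path to log file: "), "converted.csv")
```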
Plans for Week 3
Now that we can read the files in pandas, we will focus on analyzing the data we have downloaded from SecRepo. We will focus on cleaning up the data by learning what the features mean so we can label them, and on identifying the primary features we should look at.
Check out our Github repository!
References
Sconzo, Mike. SecRepo.com — Samples of Security Related Data. Retrieved Tue Jul 10 from http://www.secrepo.com/
Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
|
Team Aladdin — The journey this week!
| 256
|
team-aladdin-the-journey-this-week-1e5b2b19f038
|
2018-08-03
|
2018-08-03 18:44:46
|
https://medium.com/s/story/team-aladdin-the-journey-this-week-1e5b2b19f038
| false
| 857
|
Cyber Defenders Program
| null |
thecyberdefendersprogram
| null |
cyberdefenders
|
cyberdefendersprogram@gmail.com
|
cyberdefenders
|
CYBER SECURITY AWARENESS,CYBERSECURITY,EDUCATION TECHNOLOGY,SECURITY,TRAINING
|
ProgramCyber
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Citlalin Galvan
|
I enjoy learning and talking about exciting topics in tech and cyber security!
|
26293d41e3c6
|
citlaling
| 10
| 8
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
863f502aede2
|
2018-06-07
|
2018-06-07 15:39:27
|
2018-06-07
|
2018-06-07 15:43:31
| 4
| false
|
en
|
2018-06-07
|
2018-06-07 15:43:31
| 2
|
1e5b340f33b7
| 2.477358
| 5
| 0
| 0
|
Chinese tech conglomerate Alibaba announced in April that it would launch Alibaba Law School, integrating big data teaching methods with…
| 5
|
Law Schools Are Catching Up With AI Legaltech
Chinese tech conglomerate Alibaba announced in April that it would launch Alibaba Law School, integrating big data teaching methods with legal education. As we await updates, law programs around the world have already begun re-designing their curriculums to better fit the age of big data and AI.
Tsinghua University Law School aims to educate interdisciplinary talents that can integrate legal capabilities with cutting-edge technologies, and has issued a series of curriculum reforms. Backed by Tsinghua Law and Big Data Research Center, the university will conduct research on the application of big data and AI in the legal industry.
One of the newest programs is a Masters in Law and Computing, which will include fundamental technical courses in networks, big data, and artificial intelligence; along with relevant legal courses in these areas. Graduates will be expected to safeguard national interests in ICT and participate in governance and policy-making.
Tsinghua University Announced its Masters in Law and Computing Program in April 2018
Beijing’s Renmin University has set up a series of cross-disciplinary lectures in fintech and blockchain, while courses such as Introduction to Big Data Analytics are also open to law students.
Across the ocean, US law schools have been focusing on technology-related IP law for several decades now, and legal practitioners are seizing this opportunity. Berkeley Law, for instance, ranks number one in the field of IP law in the 2018 U.S. News Grad School ranking. The school's reputation in STEM education is supercharging its innovation in law: the Samuelson Law, Technology & Public Policy Clinic program teaches students about lawyering, government institutions, and the complexities involved in technology-related law, such as biotech, copyright, patent, etc.
Also based on the West Coast, Stanford’s LLM in Law, Science & Technology (LST) augments the traditional study of law with courses in science, e-commerce, cybersecurity, biotech, health sciences, and intellectual property problems. Applicants require a law degree and two years of related work experience.
George Washington University Law School set up its own Patent Law Program in 1895. Its alumni helped patent the Bell telephone, Mergenthaler typewriter, and Eastman film cameras. The GWU Law School specializes in copyright, trademark, communications, e-commerce, biotech, etc.
The New York University School of Law meanwhile has established a Competition, Innovation, and Information Law Program, which offers students more than 40 courses in intellectual property.
The law industry is clearly evolving, setting the stage for further tech-friendly cross-disciplinary efforts such as Alibaba Law School.
* * *
Localization: Meiling Wu | Editor: Michael Sarazen
* * *
Subscribe here to get insightful tech news, reviews and analysis!
* * *
The ATEC Artificial Intelligence Competition is a fintech algorithm competition hosted by Ant Financial for top global data algorithm developers. It focuses on highly important industry-level fintech issues and provides a prize pool worth millions. Register now!
|
Law Schools Are Catching Up With AI Legaltech
| 92
|
law-schools-are-catching-up-with-ai-legaltech-1e5b340f33b7
|
2018-06-20
|
2018-06-20 14:14:45
|
https://medium.com/s/story/law-schools-are-catching-up-with-ai-legaltech-1e5b340f33b7
| false
| 471
|
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
| null |
SyncedGlobal
| null |
SyncedReview
|
global.sns@jiqizhixin.com
|
syncedreview
|
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
|
Synced_Global
|
Law
|
law
|
Law
| 20,355
|
Synced
|
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
|
960feca52112
|
Synced
| 8,138
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-24
|
2018-03-24 08:44:35
|
2018-03-24
|
2018-03-24 08:48:01
| 3
| false
|
en
|
2018-03-24
|
2018-03-24 10:55:58
| 2
|
1e5e1427c1ef
| 0.810377
| 2
| 0
| 0
|
The year is 2018, right on your couch.
| 5
|
Artificial Intelligence Remote Control — holy-chip #3
The year is 2018, right on your couch.
This holy-chip moment reminds us that AI systems will struggle with our own personal issues.
Have a smart laugh!
The holy-chip series is a narrative between 2 Artificial Intelligence characters. They do not have names. They are black and white. The date and place posted on the header are absolutely real.
|
Artificial Intelligence Remote Control — holy-chip #3
| 2
|
artificial-intelligence-remote-control-holy-chip-3-1e5e1427c1ef
|
2018-03-29
|
2018-03-29 19:31:34
|
https://medium.com/s/story/artificial-intelligence-remote-control-holy-chip-3-1e5e1427c1ef
| false
| 69
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Ricardo Mello
| null |
6df18de4f6fd
|
rm_90700
| 18
| 19
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
ec73089917ed
|
2018-04-02
|
2018-04-02 14:36:23
|
2018-04-02
|
2018-04-02 14:44:58
| 1
| false
|
en
|
2018-04-02
|
2018-04-02 15:06:15
| 3
|
1e5f42a3a470
| 4.803774
| 1
| 0
| 0
|
In the past two months the collection of episodes I would have loved to tell about has grown out of all proportions. But with Robotics…
| 4
|
Double whammy plus a GitHub bonus
In the past two months the collection of episodes I would have loved to tell you about has grown out of all proportion. But with the Robotics Nanodegree and a new training routine going full speed, in addition to the usual work-commute-household pattern, finding the time to write is surely a challenge. Fortunately, I love challenges! For the last dozen years my response to the little whining inner voice of “You will never manage to accomplish this” has been “hold my coffee”. So, without further ado, it is my pleasure to present to you two lovely episodes. These are possibly not episodes on the really new, amazing developments in the world, but they are about very important topics: data bias, and precision health.
The first episode is from Super Data Science, hosted by Kirill Eremenko. He is the author of several Udemy courses which provided a lot of value to me back when I finished my PhD and transitioned to data science in industry. While I didn't learn much new material (a PhD at MPI is worth quite a bit in terms of knowledge and skills, after all), it was a great way to systematise my knowledge and make sure that, yes, I do have a real data scientist's skillset even though I didn't study physics during my PhD. The episode itself is about two directors of data science who are also a couple. I think it is really romantic :) Their stories of doing PhDs, transitioning to industry and growing there certainly rang a bell.
Another important aspect this episode highlights is bias in trained models and how to investigate your model for it. The case here is automatic evaluation of pre-recorded video interviews that can be scored for various metrics a potential employer cares about in a candidate for a particular open role. Even when you don't use ethnicity, gender or age as direct features (and in an HR context you should not), this information can still “leak” through other features like facial expressions, speech melody, accent and so on. The potential danger is that unless you are careful, your model will start discriminating against people left and right. E.g., from the training data it builds an association that people with strong square chins are more trustworthy, and the next thing you get is men rated as more trustworthy overall because they tend to have square chins. One way to investigate your model for such pitfalls is to plot feature distributions after classification, compare these distributions across the different assigned classes, and then see if this results in gender / age / ethnicity bias. This kind of investigation is routinely done for error analysis in all machine learning projects (if you want to learn more, check out Andrew Ng's excellent course on Deep Learning, which has a whole module on ML strategy). But here I would actually do this investigation regardless of your classifier's performance, and also before and after training – there is no guarantee your input data is without bias either! One obvious question however is – OK, our model is behaving like a jerk, what do we do? If you have a dataset with well defined input features, you can probably figure out which of them are feeding the bias and remove them. However, in the case of neural nets, it is a bit harder to control the learned weights, so for now I don't have a good answer for that. In any case, due diligence and vigilance help.
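A minimal sketch of that check, with hypothetical column names: compare how the model's positive predictions distribute across a sensitive attribute.

```python
# Minimal sketch: comparing prediction rates across a sensitive
# attribute. Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["m", "f", "m", "f", "m", "f", "m", "f"],
    "predicted_trustworthy": [1, 0, 1, 1, 1, 0, 1, 0],
})

# A large gap between the groups' rates is a red flag worth digging
# into, ideally both before and after training.
print(df.groupby("gender")["predicted_trustworthy"].mean())
```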
The other episode is a much shorter one, from the highly popular a16z podcast. Their shows are usually good and don’t need advertising but this particular episode is very exciting for me. I remember I listened to it while at the gym and I hope my muttered comments like “cool”, “right” and “awesome” did not disturb the other gym goers.
The overall conversation in that episode is about the many aspects in which medicine and healthcare could undergo improvement. Among other things, the speakers talked about precision medicine. Precision medicine is like personalised medicine, that is, tailored to your body to a much higher degree than traditional medicine usually does. Think about it – on the instructions for most medicaments you see dosage as a function of age at best, and additionally some information about possible side effects, all clumped together, some warnings about pregnancy and lactation, possible interactions with other drugs… and that's it really. It is a lot of information for sure, but how much of it is applicable to any particular individual? Interestingly, in this scenario you do want to take age, gender, weight, body composition, ethnicity, lifestyle, genetics – all of this into account. It won't be discrimination, it will be useful (as long as the outcomes are accurate). Like personalised medicine, precision medicine allows the healthcare system to tailor the treatment to your particular case. But in addition to maybe a more detailed health profile that is stored, say, in an electronic health record, precision medicine is based on more sources of data, some of which can be longitudinal. The data sources range from genetic screening (full or partial) and activity tracking devices or other sensors, to food and sleep logs, etc. All this data of course should be treated responsibly, making sure it first and foremost benefits the person themselves. When we talk about benefits to healthcare in general, they should come from the fact that providers can now help more people better, and not from the fact that they can suddenly refuse to cover certain conditions they could not know about without this data.
And before you start thinking that someone tracking themselves so much is surely a hypochondriac and possibly a vain person, and that going to a doctor once a year to have your blood pressure measured is perfectly fine, think about this too – one of the speakers had a smart watch for a long time and had collected a significant amount of baseline data. As a result, he was able to detect Lyme disease before he became clearly symptomatic. The treatment came in early, complications were avoided and in general he was able to stay active and productive. Another aspect precision medicine will enable is not just living longer, at whatever cost, but living longer *and* healthier. So instead of paying your hospital bills, you will have the money for mountain hikes, and cool gadgets, and presents for your teenage great-grandchildren :)
This episode has encouraged me to do something I have wanted to do for a long long time – analyse some of my Fitbit data. I even posted the Jupyter notebook on GitHub which you can find here:
https://github.com/evolk/data_science_projects
I set out to ask a question – does my physical activity affect my resting heart rate, and if yes, then how? The answer – I'm probably not active enough to see any effect. :D Out of privacy considerations I do not provide the datasets themselves, but I link to instructions on how to extract the data if you have a Fitbit yourself. Also, the dataset I extracted is from 2016 and describes activity levels that will hopefully not allow any health provider to deny me cover in the future.
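For the curious, the heart of the notebook's question boils down to something like this; the column names mirror a typical Fitbit export, but the numbers here are invented.

```python
# Minimal sketch: does daily activity move resting heart rate?
import pandas as pd

df = pd.DataFrame({
    "steps":              [3200, 8100, 12100, 4500, 9800, 15000],
    "resting_heart_rate": [62, 61, 60, 62, 61, 59],
})

# A correlation near 0 matches "no visible effect"; a clearly
# negative value would suggest activity lowers resting heart rate.
print(df["steps"].corr(df["resting_heart_rate"]))
```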
Stay healthy!
Image source
|
Double whammy plus a GitHub bonus
| 1
|
double-whammy-plus-a-github-bonus-1e5f42a3a470
|
2018-04-14
|
2018-04-14 22:06:30
|
https://medium.com/s/story/double-whammy-plus-a-github-bonus-1e5f42a3a470
| false
| 1,220
|
KPD stands for “Kati’s Podcast Digest” and captures the purpose of this publication with 100% accuracy — I’m subscribed many dozens of podcasts on {data, neuro-, popular} science and some episodes are just too good to stay unshared and undiscussed.
| null | null | null |
100 percent KPD
| null |
100-percent-kpd
|
PODCAST,SCIENCE,DATA SCIENCE,DIGEST
| null |
Health
|
health
|
Health
| 212,280
|
Kati Volk
|
Data Scientist in Switzerland. All opinions expressed in my posts are my own.
|
e70f76bf9994
|
kati_volk
| 16
| 20
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-12
|
2017-10-12 06:00:19
|
2017-10-12
|
2017-10-12 06:04:24
| 1
| false
|
en
|
2017-10-12
|
2017-10-12 06:04:24
| 1
|
1e60f1f0df2
| 1.090566
| 0
| 0
| 0
|
Artificial intelligence and the future of elearning
| 5
|
Artificial Intelligence and the future of learning
Artificial intelligence and the future of elearning
Learning is deeply embedded in the process of developing artificial intelligence. AI agents start out as young learners, and deep learning takes place through a repetitive process in which a particular piece of information is tried and tested multiple times by the agent before it starts doing the task better than its human counterpart.
But what impact will AI have on learning and development going forward? Well, the term AI immediately brings to mind the image of robot teachers thronging our classrooms and taking classes on everything from microbiology to history. However, while we may not actually have them in classrooms anytime soon, AI has already started to percolate into our learning programs in multiple ways.
While adaptive learning platforms are emerging to provide a customized learning experience, there are also platforms using AI to generate assessments. In a not-so-far future, these two kinds of programs might collaborate with each other to provide a holistic knowledge platform without any human intervention. On the other hand, when it comes to creating content we are still dependent on human instructional designers, but there are platforms trying to automate research, chunking, reading motivation and the final delivery of articles. How long before the physical storyboard gets automated?
Read the full article here: http://manipaldigital.info/ai_in_elearning_industry
|
Artificial Intelligence and the future of learning
| 0
|
artificial-intelligence-and-the-future-of-learning-1e60f1f0df2
|
2018-01-22
|
2018-01-22 22:57:16
|
https://medium.com/s/story/artificial-intelligence-and-the-future-of-learning-1e60f1f0df2
| false
| 236
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Manipal Digital
| null |
f85990586e17
|
manipal.mds
| 0
| 23
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-13
|
2017-10-13 04:15:07
|
2017-10-13
|
2017-10-13 12:30:07
| 1
| false
|
en
|
2017-10-16
|
2017-10-16 16:47:55
| 0
|
1e61137e472a
| 2.977358
| 17
| 0
| 0
|
“Todd, there are only two things you control in your career, your skills and your network. Keeping your skills up to date, and your network…
| 5
|
High Plains Geographer
“Todd, there are only two things you control in your career: your skills and your network. Keep your skills up to date and your network close, and you will always be fine.” -Elvis Fraser, 1999 — I think; this is why I measure things by Presidential administrations
As I announced on Twitter the other night, I was laid off from my position. This isn't the first time, and odds are it won't be the last. It's all part of the consulting path that I chose. I hold no ill will toward my soon-to-be ex-firm. They treated me and my family well, and were stellar while the relationship worked for both of us. I hope the opportunity arises in the future for us to work together again. I only wish them the best of luck.
The only thing I am upset about is we were getting ready to launch (well like half way done) a FOSS4G Permanent Crop Management Solution. Working the cogs with Geoserver, GeoWave and PostGIS. I might still make it in my sudden free time, or if you’re interested in such a thing ping me.
Remember kids, don’t burn bridges. It’s just business, unless they actively try to deny you from advancing, or fire you for sticking to your morals, don’t burn your bridges.
Like the quote from Elvis says, it's always about keeping your network warm and your skills sharp. This is especially true now. With the BLS predicting that 50% of the workforce is going to be freelancers by 2020, it's all going to be YOU building your skills and YOU keeping a network going. No one is going to hold your hand, but people might lend you one or give you a helping one.
Keeping your network warm isn't about yelling at everyone to help you when you lose a job. It's about making friends, and helping them out when you can, however you can. I have people who only reach out when they are in “situations” where they need me. I don't respond to these people very much. But the person who pings me randomly to see how I'm doing when things aren't going wrong: those people I'll change the budget for. Having private jokes with people, and pushing out blogs, tweets and advice to help others, also works. I don't do these things because of networking; at this point it's because people are my friends and I like sharing and talking to them. I didn't start here, but showing up at #geobeers, presenting and being yourself does get you a long way.
Skills, you should know how I feel about this. Whenever I see someone saying stuff like “I don't want to learn X” or “I'm a Y, I don't need to know Z,” I realize how, if their life changes even a slight bit, they're going to be scrambling to find employment. Even with a strong network, your skills need to back up your words. If you're frozen up about learning something like R or Python, learn the fundamentals of both and go with the one you prefer or seem to be using more at work. Learn and stay sharp. Maybe get that tattooed on your arm.
Shameless plug: I'm teaching a class on Spatial R at CSU in November. Ping me for details if you find yourself in the Napa Valley of Microbrewing around GISDay.
One of the hardest things I had to realize in my 20s, was that my career was mine to steer. I chose what paths to follow, and I chose what skills I wanted to develop. There wasn’t a teacher to give me assignments. I found mentors where I could, made guesses and got lucky here and there.
Go to coffee, go for drinks, show up at socials. Invite people out to do stuff, and don't split people into “work friends” and “friend friends.” Sure, social firewalls are nice, but if you can get your friends to talk, it makes life simple, and expanding, strengthening and reinforcing your connections is a good thing.
Lastly, when you reach a point where you’re really established in your profession, mentor some students or recent graduates. You need to pay it forward, always pay it forward.
Remember, when it all comes down and things go sideways, you have yourself to rely on. So be the best you you can be, and always #levelupyourshit
|
High Plains Geographer
| 100
|
high-plains-geographer-1e61137e472a
|
2018-05-07
|
2018-05-07 01:50:21
|
https://medium.com/s/story/high-plains-geographer-1e61137e472a
| false
| 736
| null | null | null | null | null | null | null | null | null |
Life Lessons
|
life-lessons
|
Life Lessons
| 253,527
|
Todd Barr
|
GIS/Data Science Dork
|
c20e06a6c799
|
Spatial_Impressionism
| 787
| 565
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
69c20edac0bc
|
2018-06-01
|
2018-06-01 15:00:00
|
2018-06-01
|
2018-06-01 15:06:24
| 0
| false
|
en
|
2018-06-01
|
2018-06-01 15:09:26
| 0
|
1e623a94e443
| 0.30566
| 1
| 0
| 0
|
Submit your story to this publication. We would love to share it with our community.
| 3
|
Have something Data Science professionals should know?
Submit your story to this publication. We would love to share it with our community.
Guidelines
Your content should be original, high quality and relevant to the modern data science professional
Content should be casual in tone and backed by data or your personal experiences.
Be crisp. Don’t worry about the length.
If in doubt, submit. We will respond within a week and give feedback if we can’t publish your story.
Bring it on!
|
Have something Data Science professionals should know?
| 12
|
have-something-data-science-professionals-should-know-1e623a94e443
|
2018-06-09
|
2018-06-09 13:47:10
|
https://medium.com/s/story/have-something-data-science-professionals-should-know-1e623a94e443
| false
| 81
|
Get clear insights for a successful "data career". Curated by CutShort, the fastest growing career platform in India
| null | null | null |
Practical Data Science Career Insights
|
datacareers@cutshort.io
|
data-science-career-insights
|
DATA SCIENCE,DATA VISUALIZATION,DATA ANALYTICS,DATA ANALYSIS
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Nikunj Verma
|
Cofounder at CutShort (www.cutshort.io)
|
2be9fbfef4dc
|
nikunjverma
| 58
| 105
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-26
|
2018-05-26 13:40:58
|
2018-06-01
|
2018-06-01 17:19:19
| 1
| true
|
en
|
2018-06-01
|
2018-06-01 17:19:19
| 2
|
1e6395d467ca
| 3.30566
| 14
| 0
| 0
|
It has been four years since this book motivated me to sharpen my analytical techniques and ‘be a data scientist’. Along the way, I got to…
| 1
|
Will the Real Data Scientist Please Stand Up?
It has been four years since this book motivated me to sharpen my analytical techniques and ‘be a data scientist’. Along the way, I got to learn alongside a few of the most intelligent and articulate people that I couldn’t even begin to fathom in the past… which unfortunately means I’ve also met my fair share of gasbags. It is a given that in a field as hype-driven and self-promotional as data science, over 90% of the ‘data scientists’ one meets are likely to be selling snake oil (often unintentionally) — Eric Colson once said the quality/talent of data scientists does not follow a Gaussian distribution. Trusting the wrong data scientist can result in no-delivery at best, decisions made on incorrect assumptions with wrong interpretations at worst. Yet, it’s complicated to describe exactly what a data scientist does, and what good looks like (maybe it’s impossible because it’s an ill-defined, overloaded and heterogeneous term in the first place) without relying on trite founding myths of the job title. Hence, for the engineering manager having to hire a data scientist, or the product manager having to staff one on a critical project, I offer these four traits I’ve observed in the real data scientists (and violated by the false idols), the ones made of true steel.
Snake oil (n): a substance with no real medicinal value sold as a remedy for all diseases. Not to be confused with the snake wine/whisky doled out by real data scientists, which tastes like crap but is purportedly good for you.
#1: Own at least one model in production. The easiest way to heckle a data scientist when they’re presenting their amazing new machine learning model is to ask how it performs in production, and how it has moved the needle on top-level business metrics. Prototyping and fitting a model is easy, but the last-mile deployment to a production service or the cloud, making judicious trade-offs between accuracy and computational efficiency, and then ultimately maintaining the model in production to ensure its output stays relevant, is the overlooked and little-celebrated cog that keeps the engine humming and separates the boys and girls from the men and women.
It is also not sufficient to hand the model off to a well-defined model-hosting and prediction-serving framework. While this may be construed as having influence across the organization, the influence might have a sinister undertone and hint at ability to browbeat others to do something without vetting that it’s the right thing in the first place.
#2: Nurture a random career all over the data stack. Truly great data scientists do not stay data scientists forever. Chances are at some point, they found no data to analyze, and had to write their own ETLs as a data engineer, design the tubes themselves as an infrastructure engineer, or even convince production engineers to surface the data in a reasonable way as a project manager. Or more likely, they found that their domain could and should be served by meat-and-potatoes analytical needs, instead of abusing whatever power tools we think data scientists should all be wielding. The ones that are in flux and constantly swim up toward the source and down to the use case as data capabilities ebb and flow are the real data sharks.
#3: Have at most one speaking engagement a year. They shouldn’t be presenting the same thing over and over, because a single landmark talk at a respected conference can be recorded and shared + replayed many times across the globe if deserving. If instead they’re presenting a different thing at every conference every other month, do you believe they had time to really do everything they’re claiming?
Plus there’s no guarantee that a truly great data scientist is a good or eager public speaker. Most I’ve met tend to be humble and exceedingly realistic about the limits of their capabilities.
#4: Shoot down more than half of your requests, and more than 90% of them if involving machine learning. A great data scientist can instantly sniff out what is important and potentially delivers impact, and only brings out (or invents) the power tools when truly needed or as a last resort. Machine learning is a hammer, and not all nails need hammering. Instead, through excellent client-management skills, they gradually shape their non-technical stakeholders to ask better, more important questions. They show their stakeholders what a hammer looks like, such that they can better identify which nails need hammering in future.
I’m sure there’s more, but these are the only common denominators I’ve noted thus far. Maybe I’ll need to follow up and add to or delete from this after another 4 years. Here’s to another 4 years of sniffing out charlatans and rolling with the best.
Many thanks to Daeus Jorento for coming up with #1, Ryan Mason for epitomizing #2, Kenny Wong for observing #3, and Edmond Chan for living and breathing #4 every day. Also thanks to several people for proof-reading, most of whom told me to leave the name-calling in.
|
Will the Real Data Scientist Please Stand Up?
| 202
|
will-the-real-data-scientist-please-stand-up-1e6395d467ca
|
2018-06-12
|
2018-06-12 15:31:36
|
https://medium.com/s/story/will-the-real-data-scientist-please-stand-up-1e6395d467ca
| false
| 823
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
David Feng
|
The BUCKET LIST
|
2e569cf8179b
|
selwyth
| 27
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
e8d902c8adec
|
2018-09-11
|
2018-09-11 20:24:28
|
2018-09-11
|
2018-09-11 20:40:19
| 1
| false
|
en
|
2018-10-15
|
2018-10-15 20:54:39
| 15
|
1e647994eb2
| 2.630189
| 2
| 0
| 0
|
The blockchain startup Dispatch and its parent company, The Bureau, are pleased to announce a new partnership with Aura, a decentralized…
| 5
|
Dispatch partners with Aura to boost recruitment of crypto talent
The blockchain startup Dispatch and its parent company, The Bureau, are pleased to announce a new partnership with Aura, a decentralized professional network that uses tokenomics to match the world’s best tech talent with the right career opportunities.
Aura will help Dispatch, the Bureau, and their clients connect with top blockchain engineers and other prospective employees to grow their teams. Aura and Dispatch are also exploring other opportunities to collaborate, including possible joint industry events to promote blockchain technology.
“Our success at Dispatch is predicated on the success of DApp developers building great products — it really is about them,” said Matt McGraw, Dispatch’s co-founder and CEO. “What better way to help our developer community and the projects building on Dispatch than to partner with a firm bringing new top talent into the field?”
Grounded in blockchain and artificial intelligence technologies, Aura brings efficiency, transparency, and validity to hiring processes. Its Aura Token (AUX) enables companies to hire talent on the Aura platform, instantly pay that talent, and use smart contracts to manage transactions.
Dispatch is building the first blockchain protocol to combine business logic with managing large amounts of data using on-chain smart contracts with distributed storage of application data off-chain. Developers can get an early look at the Dispatch platform via its publicly available Test Network, which now includes a supported Application Programming Interface following its recent v2.2 update.
For more information about the API and Test Network v2.2, visit https://api.dispatch.io. All the code for Dispatch’s software is also available on an open-source basis at https://github.com/dispatchlabs
With the innovative consensus algorithm known as Delegated Asynchronous Proof of Stake at the core of Dispatch’s platform, it will verify transactions rapidly, have low energy requirements that make it incredibly eco-friendly, and remove transaction fees.
The Dispatch blockchain effectively solves the transaction speed and scalability issues that have recently plagued other blockchain communities. The novel DAPoS consensus algorithm will allow the new platform to process orders of magnitude more transactions per second than Ethereum or Bitcoin, for example.
“We’re very excited to partner with a company like Dispatch that’s innovating at the protocol layer of the blockchain ecosystem,” said Anastasia Green, Aura’s co-founder and CEO. “They’re truly building a new foundation upon which developers can create a bold new future across many different industries. We’re eager to help them find the talent to get there.”
Green recently interviewed Dispatch’s co-founders for her company’s YouTube channel at http://bit.ly/2O76riD
About Dispatch
Dispatch is designing the decentralized architecture for an enterprise-ready blockchain. The Dispatch blockchain is purpose-built for speed and scale. With its innovative consensus algorithm, Delegated Asynchronous Proof of Stake (DAPoS), Dispatch enables enterprises and developers who work with big data to create scalable, fast and versatile decentralized apps. Dispatch supports creativity, growth and responsible change, and partners with startups and high-growth enterprises alike. For more information, visit www.dispatchlabs.io or Dispatch on Telegram at https://t.me/dispatchlabs
About Aura
Aura is a decentralized professional network with a mission to match the world’s best talent with the right career opportunities to create synergistic workplaces within the technology industry. Based in San Francisco, Aura enables professional profile building, professional development tools, worker-to-business matching, hiring, and performance review. The Aura Token (AUX) represents the digitized time and professional reputation of the best technical teams and individuals, closing the huge gap between available talent and customer demand. For more information, visit Aura’s website at https://auracoins.io
Media Contacts
Aura: Anastasia Green, Co-Founder/CEO, ana@auracoins.io
Dispatch: Darin Kotalik, VP/Marketing, darin@dispatchlabs.io
Learn more about Dispatch Labs:
Read our whitepaper
Read our litepaper
Dive into our Consensus Algorithm DAPoS
Check out our Github
Subscribe for email updates
Join the conversation on Telegram
Follow us on Twitter
Join us on Facebook
Subscribe to us on YouTube
|
Dispatch partners with Aura to boost recruitment of crypto talent
| 6
|
dispatch-partners-with-aura-to-boost-recruitment-of-crypto-talent-1e647994eb2
|
2018-10-15
|
2018-10-15 20:54:39
|
https://medium.com/s/story/dispatch-partners-with-aura-to-boost-recruitment-of-crypto-talent-1e647994eb2
| false
| 644
|
Dispatch Protocol, dispatchlabs.io
| null |
dispatchlabs
| null |
Dispatch
|
dispatch@dispatchlabs.io
|
dispatchlabs
|
CRYPTOCURRENCY,ICO,BLOCKCHAIN,CRYPTO,FINTECH
|
dispatchlabsio
|
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
RS
| null |
c992057d642f
|
richstimbra
| 14
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-31
|
2017-10-31 23:11:54
|
2017-11-03
|
2017-11-03 16:20:20
| 7
| false
|
en
|
2017-11-04
|
2017-11-04 12:40:25
| 8
|
1e64af6d280a
| 10.034906
| 95
| 8
| 0
|
One weird trick — bankers hate me!
| 5
|
Part 1: A n00bs Guide To Deep Cryptocurrency Trading
This blog post is a high level overview of my foray into using neural networks to trade bitcoins.
For about two years, I’ve doggy-paddled well out of my depth through the endless ocean of deep learning soup. During this, I’ve become ever more lazy; typically, I can never be bothered to learn a new topic, so I often find myself wondering if I can get my computer to learn and do it for me!
This article is about training learning systems (using neural networks) to trade cryptocurrencies for me, because I’m too lazy to read the thick financial trading tomes available in my library. There will be a few posts in this series as I progress my understanding and source code, so bear with me.
Cryptocurrencies right now
Before we start to think about things like neural networks or reinforcement learning, it’s important to review the domain first and get an intuition for the kinds of problems being solved. If I miss something, or supply any contentious material, leave a comment and I’ll be happy to edit it in!
A Motivation For / Explanation Of Blockchain
People have traditionally relied on a central authority when it comes to transacting money (or value) around a society. Intermediaries such as governments and banks have helped to instil trust in the transactions that society is built upon. A potential issue with having a central authority managing all the money is that things can sometimes go terribly wrong.
Piles of banknotes during the German mark’s hyperinflation — in part caused by massive borrowing by the German emperor and parliament to pay for the first world war. There were other options — France imposed its first income tax. By November 1923, the US dollar was worth 4,210,500,000,000 German marks.
Let us use an analogy to look at a problem regarding transactions of value. If I were to give you a car, you would then own that car, and I would now have to walk home. In the physical realm, the transaction took place and the car left my possession in its entirety. If, instead, I took a photo of this car and saved it to my computer, I would end up with a digital representation of this car: a bunch of pixel colours in a file. I’m of the generous sort, and have given you my digital car picture. Can you ensure that, like before in the physical realm, you are the sole owner of the picture?
Blockchain tech may do more for piracy prevention than this advert ever did!
Of course, it’s entirely possible I was so generous and selfless that I sent this picture of a car out to thousands of people as a spam email before giving it to you. I just like to share. The issue that is hopefully arising here is that the digital exchange is not the same as the physical one. Digital files are incredibly easy to reproduce, creating something called the double spending problem. The double spending problem is where we spend the same unit of value more than once, and until blockchain it has been a major obstacle to an economy of digital assets.
You smile, knowingly, and exclaim ‘Aha! But it is simple — we can just build a digital ledger — this way everyone can track who owns which automobiles’.
An early ledger for keeping track of transactions.
The issue now arises that I have ended up in control of the ledger (someone needs to have control over it) and I’m sick of being the nice guy. I decide to assign everybody’s cars to myself. There is also the issue that, as the owner of the ledger, anyone who wants to make genuine transactions needs to go through me as an intermediary.
The solution is to riot, remove me from power and distribute the ledger digitally amongst everybody’s computers. If I now try to steal anyone’s car, my ledger will not match everyone else’s, and the illegal transaction will be denied. In other words, the blockchain is a distributed ledger or decentralised database that continuously updates records of who owns what. Rather than a single person owning the database, the database is replicated across the network and synchronised over the internet. Members of the network with large amounts of compute capability take the past 10 minutes’ transactions and compete to validate them, and the winner (generally) receives a reward of cryptocurrency.
This public ledger is open source, which means there are no nasty surprises coded deep into the software (read: we can trust it more). The open, decentralised and cryptographic nature of the blockchain makes it very hard to hack, unlike monolithic banking software, which regularly makes headlines as hackers make off with millions in stolen currency. Note how our digital car photo now behaves like the physical thing; when I make an exchange it is clear who unambiguously owns it, and as a bonus we don’t need any third-party verification (such as my corrupt self) to ensure I didn’t keep extra copies for myself! It is this public ledger, known as the blockchain, that cryptocurrencies such as Bitcoin are built on top of, and the focus of this article.
What The Hell Are Cryptocurrencies?
Cryptocurrency is simply a digital asset using the aforementioned technologies. According to Wikipedia, as of the 11th of July 2017 there were over 900 different cryptocurrencies, and the number is growing. The main behemoth is Bitcoin, of course. Bitcoin is the world’s first decentralised digital currency, created by an unknown person or group of people under the alias Satoshi Nakamoto. If you are not aware of this story, check it out — one of the craziest things is that as of September 2017, Nakamoto owns roughly one million bitcoins, at a value of $6 billion USD.
Historic footage of secretive Nakamoto with (his?) bitcoins that (he?) can’t spend
But there are many alternatives to Bitcoin, which are often grouped under the term altcoins. Many altcoins attempt to target a perceived limitation of Bitcoin, or try to reimagine some component of the technology for some advantage. Litecoin, for example, increases the speed at which a new block is generated. Ethereum provides a blockchain platform for smart contracts, on which initial coin offerings can be issued and sold, and there are loads more. This is a good link to read up on the various types of altcoins.
How Can I Trade Cryptocurrencies?
Let’s just get one thing straight — if you don’t have the money to lose, you shouldn’t invest in bitcoin. I’m a student (studying at the University of London) and not from a wealthy background. I’ll be simulating trades with pretend coins before moving to a very small amount of money that I can afford to lose. I’m also going to gloss over manual trading, as the focus here is on creating artificial learning systems to trade for us!
So where can we trade the currencies algorithmically? There are hundreds of exchanges with different coins that you can trade, with varying volumes of currency being traded. Each exchange has its own API that you can access, which usually sends you JSON data to unpack and interpret in your own way. Regardless, you can pick any exchange that you like and access their API — the good ones document how to do so on their website. Alternatively, you could use something like ccxt. This is a multi-language (Python, PHP and JavaScript) library that allows easy, unified access to over ninety exchanges at the time of writing. As we’ll be writing lots of code in Python for our machine learning, ccxt is a great way to get started quickly. You can install it by following the instructions on the GitHub page!
So how do we use ccxt? As a hello world for algorithmic trading, let’s say we want to get some data from the Poloniex exchange. In ccxt, every exchange offers ‘markets’ within itself — the set of markets differs from exchange to exchange. This potentially opens up possibilities for cross-market arbitrage.
But what is a market, I hear you ask? A market within this context is a pair of cryptocurrencies. Ultimately, our goal is to buy (and sell) cryptocurrencies, and to do this we must exchange one for the other. For example, the market might exchange Ethereum for Bitcoin. The symbol (and key for the Python dictionary object) for this market would be ‘ETH/BTC’. The symbol ‘ETH’ stands for Ethereum’s ether, and is the base currency. The symbol ‘BTC’ stands for Bitcoin, and is the quoted currency. Buying this pair or market means we buy the base currency with the quoted currency. Inversely, selling this pair gains us the quoted currency for the base currency.
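To make the base and quoted currency idea concrete, here is a minimal sketch using ccxt’s unified markets dictionary (the exchange and symbol are illustrative choices of mine, not from the original post):

import ccxt

exchange = ccxt.poloniex()              # any ccxt-supported exchange will do
markets = exchange.load_markets()       # unified dictionary of all market pairs
market = markets['ETH/BTC']             # one market: ether quoted in bitcoin
print(market['base'], market['quote'])  # -> ETH BTC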
It is important to note that one of the driving reasons neural networks are so popular today is our increasing ability to create large streams of data. That, together with hardware acceleration (thanks, CS:GO players) and an appreciation for incrementally better algorithms, means that we can do amazing things with deep learning. As we are not at the trading stage yet, we must look at collecting data, which is an essential task. Incredibly, we can access lots of market data in just four lines of code, including the Python ccxt module import!
Getting real cryptocurrency values in just four lines of Python!
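As a rough sketch of what those four lines can look like with ccxt (the exchange and market choices here are mine, for illustration):

import ccxt

exchange = ccxt.poloniex()                 # instantiate an exchange wrapper
ticker = exchange.fetch_ticker('ETH/BTC')  # live market values as a dictionary
print(ticker)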
There is a lot of data here, and to interpret it better I can strongly recommend checking out the ccxt manual, which at the time of writing is under very active development! The manual also notes exactly how you can use the API to actually make trades, which we’ll go over in a later article.
Applying Deep Learning To Trading And The Portfolio Management Problem
There are many approaches to melding machine learning and trading together. A typical approach is to try to predict future price movements or trends — a neural network supplied with historic prices can be trained to output a predicted vector of prices for the future. These approaches are fairly easy to implement; it is supervised learning, and a matter of collating the data and training a neural architecture to perform regression. However, in practice it doesn’t work so well.
To validate my opinion on this matter, I did some basic regression using a multi-layer perceptron network and a recurrent neural network (long short term memory) on a financial dataset obtained from Kaggle, and have uploaded the notebook experiment.
The whole point of this notebook is to demonstrate that it is non-trivial to just throw a neural network at price prediction and assume we can get rich quick. I also don’t suggest that I have been in any way exhaustive — research shows that turning this problem into a classification problem can improve results, by simply asking the model to perform binary classification on whether a stock will jump by a margin based on historical data. There are an abundance of other models and methods, so please don’t let me put you off your fantastic get-rich-quick schemes (if you do have a good idea — leave a comment and let me know what it is!)
Out of the many applications of deep learning to the financial markets, I choose to focus on one in particular: the portfolio management problem. This is the repetitive process of reallocating funds into a set of discrete financial products, aiming to maximise the total return whilst minimising the risk. There are a few issues with price prediction — it is difficult to get accurate models (and therefore high performance), and price predictions are not market actions, so extra logic must convert the price prediction into an action. This means a non-end-to-end machine learning solution, which is not as adaptable or as trainable. It is, for example, difficult for a price prediction neural network to consider the transaction cost of market actions, which is important in real trading, especially when there is a high volume of trading. In the next post, we’ll go into implementing a deep reinforcement learning system to tackle the portfolio management problem.
Getting More Cryptocurrency Data and Visualising It
So before jumping into deep reinforcement learning, I deemed it worth further exploring the available data, as well as hacking together a very basic data persistence method via an sqlite3 database. First, let’s check out a simple example that prints out the market order book at the limit defined by whatever exchange we choose.
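A minimal sketch of such an order book fetch with ccxt might look like this (again, the exchange and symbol are illustrative):

import ccxt

exchange = ccxt.poloniex()
order_book = exchange.fetch_order_book('ETH/BTC')  # bids and asks, up to the exchange's limit
print('top bids:', order_book['bids'][:5])         # [price, amount] pairs, best first
print('top asks:', order_book['asks'][:5])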
There is clearly more to the API than this, and we can look at a further example that gets more data and plots it in interactive graphs. A quick bit of housekeeping: you’ll need to install a few extra Python packages, including Dash, which essentially allows you to plot graphs in the browser using just Python (making use of great libraries such as D3.js, React and Flask).
Below we can see more of the Poloniex exchange being graphed. This updates around every second, and every market or coin pair offered by the exchange can be visualised by selecting the desired pair via the dropdown menu.
You can plot the above information in your browser by running the below code. It will also begin to save the data (for use in training deep learning systems) in a database. There may be other forms of data that would be valuable for training, so give some thought to what data could inform the relationships you are trying to teach your function approximators.
If you do run the below code, ensure that there is a folder called databases in the same directory as the code. You could easily add a line of Python to ensure that the folder exists if you want to. Although this is a reasonable start, this code could easily be improved or extended with a better table structure, perhaps a migration to Postgres, and by accessing more of the API.
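As a rough illustration of the persistence idea (this is my own sketch, not the author’s actual script; the table layout and file name are assumptions):

import os
import sqlite3
import time

import ccxt

os.makedirs('databases', exist_ok=True)  # the one extra line of Python suggested above
conn = sqlite3.connect('databases/ticks.db')
conn.execute('CREATE TABLE IF NOT EXISTS ticks '
             '(ts REAL, symbol TEXT, last REAL, bid REAL, ask REAL)')

exchange = ccxt.poloniex()
while True:  # poll roughly once per second, persisting each ticker snapshot
    t = exchange.fetch_ticker('ETH/BTC')
    conn.execute('INSERT INTO ticks VALUES (?, ?, ?, ?, ?)',
                 (time.time(), t['symbol'], t['last'], t['bid'], t['ask']))
    conn.commit()
    time.sleep(1)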
Concluding Remarks
Over this post I’ve demonstrated some of the ideas behind cryptocurrencies, and how to programmatically access and visualise real-time data using various Python modules. I have also introduced the notion of applying deep learning techniques to trading. It is clear that just attempting to predict the prices of financial products is a hard task, and there are other ways of trading, such as tackling the portfolio management problem using deep reinforcement learning, which I’ll be writing about in the next article.
Thanks for reading, it’s been super fun to write and I’m looking forward to the next one. Feel free to send public abuse to me on Twitter or leave a comment here, and leave this post a clap if you enjoyed it — I look forward to hearing from you!
|
A n00bs Guide To Deep Cryptocurrency Trading
| 796
|
deep-cryptocurrency-trading-1e64af6d280a
|
2018-06-15
|
2018-06-15 16:52:53
|
https://medium.com/s/story/deep-cryptocurrency-trading-1e64af6d280a
| false
| 2,381
| null | null | null | null | null | null | null | null | null |
Bitcoin
|
bitcoin
|
Bitcoin
| 141,486
|
Leon Fedden
|
Wizard nerd summoning TensorFlow, C++ and Python black magic from the void.
|
80bcde7d6e7
|
LeonFedden
| 376
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-18
|
2018-07-18 18:51:47
|
2018-07-18
|
2018-07-18 19:13:44
| 3
| false
|
en
|
2018-07-20
|
2018-07-20 09:23:36
| 3
|
1e6973886129
| 3.55
| 8
| 0
| 0
|
Hi Learners!! Today lets try to unlock Bayes’s Theorem.
| 5
|
Unlocking Bayes’s Theorem
Hi Learners!! Today let’s try to unlock Bayes’s Theorem.
source :https://doblegroup.com/wp-content/uploads/2015/02/unlock-potential.jpg
This blog will cover :
1. Probability
2. Conditional Probability
3. Conjoint Probability
4. Bayes’s Theorem
5. Applications of Bayes’s Theorem
Probability :
Probability is a number between 0 and 1, including both 0 and 1, where:
The value 1 indicates the certainty that an event will occur. The value 0 indicates the certainty that an event will not occur. Intermediate values represent the degree of certainty. The value 0.5 indicates that an event is as likely to occur as not.
https://www.wikihow.com/Understand-Probability#/Image:Understand-Probability-Step-2.jpg
Let us consider the event of picking a white ball from a bag which has 26 white balls and 24 black balls.
Probability of a white ball = number of white balls in the bag / total number of balls in the bag = 26/(26 + 24) = 0.52
Conditional Probability :
Conditional probability is probability based on some previous background information.
Conditional probability, denoted by P(A|B), is the probability of event A given that event B is true.
https://medium.com/@mithunmanohar/machine-learning-101-what-the-is-a-conditional-probability-f0f9a9ec6cda
Lets continue with the above example,
What is the probability that the second ball is a white ball (without replacement), given that the first ball is a white ball?
P(Secondball=white_ball | Firstball=white_ball) = probability of the second ball being white given the first ball is white = 25/(25+24) = 0.51
What is the probability that the second ball is a black ball (without replacement), given that the first ball is a white ball?
P(Secondball=black_ball | Firstball=white_ball) = probability of the second ball being black given the first ball is white = 24/(25+24) = 0.49
Independent events: two events are known as independent when the occurrence of one doesn’t affect the occurrence of the other.
Thus for Independent events
P(B|A)=P(B),
P(A|B)=P(A)
Conjoint Probability :
Conjoint probability is the probability that two events are both true, denoted by P(A and B).
In general, P(A and B) = P(A) * P(B|A)
For independent events, P(B|A) = P(B), thus P(A and B) for independent events = P(A) * P(B)
What is the probability that both the balls drawn at random from the bag are black?
let A = Event when first ball is black
B = Event when second ball is black
P(A and B) = P(A) * P(B|A)
P(A) = 24/50 = 0.48
P(B|A) = 23/(23+26) = 0.47
thus P(A and B) = 0.48 * 0.47 = 0.23
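A quick sanity check of this arithmetic in Python, using exact fractions (a verification sketch of mine, not part of the original post):

from fractions import Fraction

p_a = Fraction(24, 50)           # first ball black: 24 of the 50 balls
p_b_given_a = Fraction(23, 49)   # second ball black, one black ball already removed
p_a_and_b = p_a * p_b_given_a
print(float(p_a_and_b))          # ~0.225, i.e. roughly the 0.23 above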
We’ll get to Bayes’s Theorem soon, but before that let’s try to understand one scenario.
Lets extend our above example :
Now we have two bags, where
the first bag (Bag 1) has 26 white balls and 24 black balls, and the second bag (Bag 2) has 20 white balls and 40 black balls.
Suppose we choose one bag at random, and select one ball at random. The ball is black. What is the probability that the black ball is from Bag 1?
This is a conditional probability, P(Bag_1|Black_ball). Can we calculate this from the concepts understood above?
What if I ask the other way round: what is the probability of a black ball given that it is drawn from the first bag? Isn’t this easy?
P(Black_ball|Bag_1) = 24/50 = 0.48
Sadly, P(Bag_1|Black_ball) is not equal to P(Black_ball|Bag_1). But there is a way to reach the former from the latter: Bayes’s Theorem.
Bayes Theorem :
Conjunction is commutative, i.e. P(A and B) = P(B and A),
thus, as per conjoint probabilities, P(B) * P(A|B) = P(A) * P(B|A)
Dividing the above equation by P(B), we get P(A|B) = P(A) * P(B|A) / P(B).
Wasn’t this simple? Yes, this is Bayes’s Theorem:
P(A|B) = P(A) * P(B|A) / P(B)
Now, let’s get back to our problem and try to solve it using Bayes’s Theorem.
A = the event of choosing Bag 1
B = the event of drawing a black ball
P(A) = 1/2 = 0.5 (since there are two bags, the probability of choosing Bag 1 is 1/2)
P(B|A) = 0.48 (probability of a black ball given Bag 1; we have already solved this above)
P(B) = P(B|Bag_1) * P(Bag_1) + P(B|Bag_2) * P(Bag_2) = 0.48 * 0.5 + (40/60) * 0.5 ≈ 0.57 (by the law of total probability: since a bag is chosen first and a ball is then drawn from it, we cannot simply pool all 110 balls together)
Thus, P(A|B) = 0.5 * 0.48 / 0.57 ≈ 0.42
This example shows one application of Bayes’s Theorem. The theorem helps you to get one conditional probability from the other.
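The whole two-bag computation fits in a few lines of Python (a verification sketch; the variable names are mine):

p_bag1 = 0.5                    # prior: a bag is chosen uniformly at random
p_black_given_bag1 = 24 / 50    # likelihood of a black ball from Bag 1
p_black_given_bag2 = 40 / 60    # likelihood of a black ball from Bag 2

# Normalizing constant via the law of total probability
p_black = p_bag1 * p_black_given_bag1 + (1 - p_bag1) * p_black_given_bag2

# Bayes's Theorem: posterior probability of Bag 1 given a black ball
p_bag1_given_black = p_bag1 * p_black_given_bag1 / p_black
print(round(p_bag1_given_black, 2))  # 0.42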
Application of Bayes’s Theorem
There are other ways to use Bayes’s Theorem.
In real-life scenarios, data gets updated with time and so does the hypothesis. Thus the probability of a hypothesis changes as the data changes. This way of using Bayes’s Theorem is called the diachronic interpretation. “Diachronic” means something that happens over time.
Rewriting Bayes’s Theorem for the diachronic interpretation:
P(H|D) = P(H) * P(D|H) / P(D)
Here each term has a name:
P(H) = probability of the hypothesis before we see the data, called the prior probability
P(H|D) = probability of the hypothesis after we see the data, called the posterior probability (this is what we want to compute)
P(D|H) = probability of the data under the hypothesis, called the likelihood
P(D) = probability of the data, called the normalizing constant
I hope you enjoyed reading. Keep following my blog for more content!
Happy Learning!!
|
Unlocking Bayes’s Theorem
| 78
|
unlocking-bayess-theorem-1e6973886129
|
2018-07-20
|
2018-07-20 09:23:36
|
https://medium.com/s/story/unlocking-bayess-theorem-1e6973886129
| false
| 795
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Srishti Sawla
|
Learning how machines learn!!
|
ce52675a694
|
srishtisawla
| 155
| 157
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-24
|
2018-06-24 00:11:31
|
2018-06-24
|
2018-06-24 07:26:28
| 9
| false
|
en
|
2018-06-24
|
2018-06-24 17:17:52
| 30
|
1e69b49d4ef8
| 4.667925
| 17
| 1
| 0
|
This was my first time at CVPR, in Salt Lake City and here are my notes on some of the highlights from the workshop on autonomous driving…
| 4
|
CVPR 2018 Day 1 — notes
This was my first time at CVPR, in Salt Lake City and here are my notes on some of the highlights from the workshop on autonomous driving that I attended. I am new to this field and attending this workshop gave me a good understanding of the problem space and the challenges. The speakers were amazing and left me wanting to go get my hands dirty with some of these datasets!
Below, I give some high-level impressions from the workshop, followed by some more detailed notes from some of the speakers. For more details on the workshop see: http://www.wad.ai/talk.html.
Some key areas of research / development in the autonomous driving space presented at the workshop include:
There seems to be agreement that we need to optimize for safety; however, it is not straightforward to incorporate this into performance metrics at the task level, e.g. segmentation, navigation, etc.
Collecting annotated data is expensive. Using weakly supervised labels to generate and collect training data is an inexpensive way to do this.
Autonomous systems need to keep learning, and software should be designed so that models can improve efficiently over time by learning from mistakes.
Power and energy efficient models.
Datasets for autonomous driving: KITTI, Cityscapes, Mapillary, BDD100K, ApolloScape, Oxford RobotCar
Using simulators to collect data and train models. Some examples of simulators include: TORCS, GTA-V, CARLA.
Taken from Andreas Geiger’s slides from the Autonomous driving workshop at CVPR 2018.
Below are more detailed notes from individual speakers at the conference.
Speaker: Kevin Keutzer (UC Berkeley)
Talk: 11 challenges for computer vision from Autonomous Vehicles. (Points below have been taken from his slides).
This was my favorite talk of the bunch! It had a good balance of high-level overview and some very interesting research papers for efficient deep learning.
Rethinking accuracy: Accuracy on benchmark datasets is not the best measure of how good the self-driving car is.
Getting enough data: Annotating data is expensive especially for scenarios like accidents, lane-merges and construction sites.
Generating useful data through simulation: Inexpensive and less time-consuming way for data collection.
Formally directed simulation: Being able to simulate specific scenarios.
Domain adaptation: Being able to generate data for day/night and different weather conditions.
Accelerating training to deal with all this new data
New Deep Learning algorithms for handling high resolution: Most current models use 256 × 256 images (e.g. ImageNet) even though the cameras on these vehicles capture much higher resolutions, of the order of 2160 × 3840.
New Deep Learning algorithms for getting more out of video: Many models trained for images do not work well when applied to video.
Utilizing Sensor fusion (LiDAR, RADAR, etc.)
Designing power and energy efficient nets
More efficient computation through NN accelerators
Some related research that he shared included:
SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving
SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud
SqueezeNext: Hardware-Aware Neural Network Design
Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Co-Design of Deep Neural Nets and Neural Net Accelerators for Embedded Vision Applications (Squeezelerator)
MobileNetV2: Inverted Residuals and Linear Bottlenecks
Speaker: Will Maddern (Oxford Research Institute, now at Nuro)
Talk: Leveraging concepts from robotics for collecting training data for autonomous vehicles.
I don’t have a robotics background but the main point, that there is a significant overlap between autonomous vehicles and robotics, seemed reasonable to me. He demonstrated his point through three of his own papers.
(i) Leveraging LiDAR [Paper]
(ii) Leveraging SLAM [Paper]
(iii) Leveraging RTK [Paper]
Speaker: Luc Vincent (Lyft)
Talk: Level 5 @ Lyft
He talked about work at Level 5, the division at Lyft working on autonomous vehicles. I’m largely interested in the technical aspects of this space, and a large part of this talk was about Lyft’s business model and goals, which I did not take notes on.
I did include these slides below, that show a nice organization of the objects of interest for visual understanding.
Taken from Luc Vincent’s slides at the autonomous driving workshop at CVPR 2018.
Speaker: Andreas Geiger (University of Tübingen)
Talk: Conditional Affordance Learning for Driving in Urban Environments [Paper]
Don’t be scared by the title. It seems to me that an affordance is a fancy term for a feature in machine learning speak. The approach presented in the paper appears to be a simple and powerful one to me.
Taken from Andreas Geiger’s slides from the Autonomous driving workshop at CVPR 2018.
The author describes the Direct Perception paradigm for autonomous vehicles as a combination of the advantages of modular pipelines and imitation learning, using a neural network to learn appropriate low-dimensional intermediate representations. Affordances are one such type of representation, i.e. attributes of the environment which limit the space of allowed actions.
Taken from Andreas Geiger’s slides from the Autonomous driving workshop at CVPR 2018.
This work develops affordances that are suited to urban environments and conditional models where the decision is based on both the input image and the navigation command. The affordances used are shown below:
Taken from Andreas Geiger’s slides from the Autonomous driving workshop at CVPR 2018.
Speaker: Andrej Karpathy (Tesla, previously at Stanford)
Talk: Applied machine learning for autonomous vehicles
This talk showed some fun edge cases encountered by self-driving cars and the people who have to annotate data for them. As a data scientist, I found it both practical and familiar, as he talked about common challenges like imbalanced datasets and noisy labels.
Taken from Andrej Karpathy’s slides from the Autonomous driving workshop at CVPR 2018.
He also talked about:
“Software 2.0” which he talked about in one of his previous blog posts
Efficient inference
System design to enable models to learn from mistakes is crucial
Overall, a great first day at the conference :)
|
CVPR 2018 Day 1 — notes
| 67
|
cvpr-2018-day-1-notes-1e69b49d4ef8
|
2018-06-24
|
2018-06-24 17:17:52
|
https://medium.com/s/story/cvpr-2018-day-1-notes-1e69b49d4ef8
| false
| 919
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Erika Menezes
|
Engineer + Applied ML @ Microsoft
|
b7318806f5a5
|
menezes.erika90
| 29
| 103
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
cc02b7244ed9
|
2018-04-16
|
2018-04-16 06:02:01
|
2018-04-16
|
2018-04-16 06:04:33
| 0
| false
|
en
|
2018-04-16
|
2018-04-16 06:04:33
| 11
|
1e6a590d9ef3
| 2.056604
| 0
| 0
| 0
|
PRODUCTS & SERVICES
| 5
|
Tech & Telecom news — Apr 16, 2018
PRODUCTS & SERVICES
Video
Netflix will present its 1Q18 results this evening, with some stress due to potential over-hype on the stock in recent months, during which they’ve been one of the best-performing tech stocks, going up +37% since the positive surprise in the 4Q17 conference call. Analysts expect +6.5m net adds (+32% yoy) (Story)
Financial services
A new $9bn fundraising for Alibaba’s financial services unit Alipay, at a $150bn valuation (!), will turn them into the #12 finance firm by value, driven by their leadership in China’s huge mobile payments market ($16tn) with 54% share, ahead of Tencent (38%). But expansion out of China seems much more challenging (Story)
HARDWARE ENABLERS
Quantum Computing
A team of scientists in Austria has generated stable quantum entanglement in a string of 20 single atoms, in what’s perceived as a breakthrough towards building practical quantum computers. Entanglement, the coupling of the quantum physical states of a set of atoms / qubits, is the key to performing parallel computations (Story)
SOFTWARE ENABLERS
Artificial Intelligence
E Musk just linked his company’s recent production slowdown, causing delays on the Model 3, to too much robotisation. He actually commented that “excessive automation was a mistake”, and that they had “underrated” humans. It will be interesting to see what all the “just arrived” automation gurus will say about that (Story)
Amid all the noise currently surrounding Facebook, revelations of their innovative (but privacy-challenging) business practices appear every day. Now a confidential document leaked to the press explains how they’re starting to use AI to sell advertising space based on users’ future behaviour or preferences (Story)
Privacy
Comments continue on last week’s Zuckerberg testimony, with analysts criticising him for not telling the “whole truth” about what they do with users’ data (e.g. keeping a log of websites visited, even when out of the app) and about the limitations of the control tools that the company is now presenting as a solution (Story)
It’s not clear yet to what extent the public will become sensitive to privacy risks, but the question now dominates media, and guidelines for people to protect their private data (like this one from TechCrunch on “how to hide on the Internet”) are being published, including recommendations to use tracker blockers (Story)
At the complete opposite end of the privacy picture, and showing another angle implicitly behind governments’ current interest in regulation, the Russian government just banned the (originally Russian) Telegram messaging app, after a refusal to disclose the content of encrypted messages to security services (Story)
Cyber security
The digital world increasingly looks like the potential scenario of a new cold war. Some recent cyber attacks to western infrastructures have been linked to Russia, and this is driving some countries (e.g. UK) to prepare a defence against the threat by strengthening their own capacities to launch similar attacks on Russia (Story)
VENTURE CAPITAL
SoftBank’s huge investments in startups are heating up the space, making other investors inject more money in the market, as a response, and driving valuations of some companies beyond their own expectations. In 1Q18, a record figure of 102 startups have received more than $50m, with a total funding of $16bn (Story)
Subscribe at https://www.getrevue.co/profile/winwood66
|
Tech & Telecom news — Apr 16, 2018
| 0
|
tech-telecom-news-apr-16-2018-1e6a590d9ef3
|
2018-04-16
|
2018-04-16 06:04:34
|
https://medium.com/s/story/tech-telecom-news-apr-16-2018-1e6a590d9ef3
| false
| 545
|
The most interesting news in technology and telecoms, every day
| null | null | null |
Tech / Telecom News
|
ripkirby65@gmail.com
|
tech-telecom-news
|
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
|
winwood66
|
Quantum Computing
|
quantum-computing
|
Quantum Computing
| 1,270
|
C Gavilanes
|
food, football and tech / ripkirby65@gmail.com
|
a1bb7d576c0f
|
winwood66
| 605
| 92
| 20,181,104
| null | null | null | null | null | null |
0
|
import java.util.HashMap;
import java.util.Map;

/**
 * A dictionary mapping features to their number of occurrences in each known category.
 */
private Map<K, Map<T, Integer>> featureCountPerCategory;
/**
* A dictionary mapping features to their number of occurrences.
*/
private Map<T, Integer> totalFeatureCount;
/**
* A dictionary mapping categories to their number of occurrences.
*/
private Map<K, Integer> totalCategoryCount;
/**
 * Train the classifier by telling it that the given features resulted in the given category.
 */
public void learn(Classification<T, K> classification) {
for (T feature : classification.getFeatureset()) {
this.incrementFeature(feature,classification.getCategory());
}
this.incrementCategory(classification.getCategory());
}
/**
* Increments the count of a given category. This is equal to telling the
* classifier, that this category has occurred once more.
*
* @param category
* The category, which count to increase.
*/
public void incrementCategory(K category) {
Integer count = this.totalCategoryCount.get(category);
if (count == null) {
this.totalCategoryCount.put(category, 0);
count = this.totalCategoryCount.get(category);
}
this.totalCategoryCount.put(category, ++count);
}
/**
 * Increments the count of a given feature in the given category. This is equal to telling the
 * classifier that this feature has occurred in this category.
 *
 * @param feature  The feature whose count to increase.
 * @param category The category the feature occurred in.
 */
public void incrementFeature(T feature, K category) {
Map<T, Integer> features = this.featureCountPerCategory.get(category);
if (features == null) {
this.featureCountPerCategory.put(category,
new HashMap<T, Integer>());
features = this.featureCountPerCategory.get(category);
}
Integer count = features.get(feature);
if (count == null) {
features.put(feature, 0);
count = features.get(feature);
}
features.put(feature, ++count);
Integer totalCount = this.totalFeatureCount.get(feature);
if (totalCount == null) {
this.totalFeatureCount.put(feature, 0);
totalCount = this.totalFeatureCount.get(feature);
}
this.totalFeatureCount.put(feature, ++totalCount);
}
| 8
| null |
2017-11-12
|
2017-11-12 17:21:39
|
2017-11-14
|
2017-11-14 15:17:07
| 19
| false
|
en
|
2017-11-14
|
2017-11-14 15:17:07
| 5
|
1e6a98feaea6
| 13.669811
| 2
| 0
| 0
|
In this post I’ll show a simple yet powerful method to classify mail text as either spam or not spam. Keep in mind that this could apply to…
| 1
|
Building a simple yet powerful spam filter with Naive Bayes Classifier
In this post I’ll show a simple yet powerful method to classify mail text as either spam or not spam. Keep in mind that this could apply to various other tasks that involve text classification, such as sentiment analysis. As the title suggests, we’ll be using the naive Bayes classifier, which is based on Bayes’s theorem. So before we jump to spam filtering, we will take some time to understand a little bit of probability theory, including conditional probabilities and Bayes’s theorem, and then we will use these theoretical concepts to build up a Bayes classifier that we can train to do text classification. But first let’s understand the problem of spam filtering, and why this problem is hard to solve using classic algorithms.
Why is the spam problem hard?
Suppose you have a spam e-mail like the one below:
Have you ever seen a beautiful woman out with an ordinary looking guy and thought, “How did he get a woman like that? He must be rich! I’m as good looking as he is! Why can’t I date women like that?”
For a human, the task of deciding if a mail is of interest or just spam like the one above seems effortless. You just read the mail and somehow your brain knows, sometimes even before you finish reading the whole mail, that it’s junk. How the brain achieves this is not only beyond the scope of this post, but we don’t yet know what goes on inside the brain that makes it so powerful at the task of classification in general. Not just text classification, but also image, video or sound classification. The question is how we can create a piece of code that can somehow imitate the decisions the brain takes when classifying a mail as spam or as something very important from work. If you choose the algorithmic way, after a few tries you’ll realize that the task seems impossible. What exactly are you going to do? Parse all the words and decide that if it contains the word date and the word woman and also rich then it might be a spam? OK... That seems to work for this spam message, and in general it might work for a very limited set of dating spams where those words are mentioned. But what about other spams? You know, the ones that promise you a free US visa if you just click this link, for instance, or those that tell you that you just won one million dollars and in order to get the money you have to enter your credit card number :D You might attempt to build a hashmap of words that appear in spam mails in general. Words like date, sex, viagra, gamble, money, credit card, etc. But the problem is that some of those words appear in regular mails as well. The word rich in the above mail is not such a nasty word that it would appear only in spam mails. And if you happen to work in a casino, for instance, your office e-mail will have lots of mails that contain words such as bet, win, lose, gamble, etc. That should not mean that those e-mails are spam. So now that you have a hash of words that appear in spam mails, the problem is how you decide how many of those words you should take into account. If only one word appears, does that make that email spam or not? How about two words? So as you can see, you will run into a lot of dead ends. And even after you’ve finished your spam filter, it will be very easy to break. Not all spam e-mails are about money or sex. Sometimes they can be subtle in their content. So the question remains: how can we actually solve this? The answer is probability theory.
Probability Theory basics
You may ask why probability theory. What’s probability got to do with it? Well… you will find out in a moment. But before that, let’s dive a little into some of the basics of probability. Note that this is not intended to be an in-depth course on the very vast subject of probability, and I’m not an expert either. I find probability theory in a way more mind-bending than calculus, for instance. I would recommend a whole MIT course on the subject just to make sure you really, really understand it well. I think the course by Professor John Tsitsiklis is by far the best course out there. Very well explained, and you do not need any prior in-depth math background. At least that’s my opinion.
Basics
First let’s start with the simple example of a dice.
Rolling a dice will give one of the following outcomes: 1, 2, 3, 4, 5, 6. You cannot get any other outcome. That goes without saying, right?
We call each roll of the dice an experiment. And for each experiment we will get one of the 6 outcomes, which we will call elementary events: the event of getting 5, the event of getting 3, etc.
We have two other events that are not present here. The certain event: any time we roll this dice, we will get one of the six outcomes. This event has probability 1. We also have the impossible event: rolling the dice and getting none of the 6 outcomes. Imagine rolling the dice and expecting to get -1 or 100. That would never happen. That’s the impossible event. All the other elementary events, since there are six of them and we assume that all of them occur with the same probability, have probability 1/6. All 6 events, each with probability 1/6, added together give the certain event with probability 1. Makes sense, right? If not, check the course.
We start from the presumption that each outcome has the same probability, also known as a uniform distribution.
We can think of probabilities using set theory. For our dice roll we have the set A of all the possible events, which we will also call the universe.
A = {1,2,3,4,5,6}
The probability of an event e happening is card(e)/card(A), or the cardinal of e divided by the cardinal of the universe. In our example, the probability of 1 showing up on the dice is 1/6, because {1} has cardinal 1 and the cardinal of the universe is 6.
Besides the elementary events, here we can also have other, more complex events, such as the event of getting either 1 or 2 when we roll the dice. So our event could be thought of as the subset {1,2}. Obviously the probability of this event is 2/6 = 1/3. The event {1,2} is the union of the two elementary events {1} and {2}. Also, if we think in algebraic terms, the probability of {1,2} could be written as P({1,2}) = P({1}) + P({2}) = 1/6 + 1/6 = 1/3.
The probability of having either 1 or 2 (i.e. the union of those two elementary events) is the sum of their probabilities. But there is a subtle thing with composed events like these. What if we are asked to find the probability of event A or event B, where A = {1,2} and B = {2,3}? In this case adding the probabilities of these two events won’t do the trick. Why? Well, the probability of A = {1,2} is 2/6, right? The probability of B = {2,3} is also 2/6. Therefore, we might say that the probability of either getting event A or event B is P(A) + P(B), which would be 2/6 + 2/6 = 4/6, right? Not really… The reason is that the elementary event {2} appears in both A and B. And if we just add these two events, we will end up adding event {2} twice. So their actual probability is given by the formula P(A or B) = P(A) + P(B) - P(A and B),
which here gives P(A or B) = 2/6 + 2/6 - 1/6 = 4/6 - 1/6 = 3/6 = 1/2.
Think about adding two areas that have an intersection. It is the exact same thing. You add the first area to the second one and discard the intersection. Just take a look at the image below.
It is better to think about probabilities as either pies in a circle or as overlapping areas.
Conditional probability
Conditional probability is when we want to find out the probability of an event given that another event has happened. So we have two events, A and B.
The conditional probability of A given B is written as P(A|B). So the idea is the following. We know the probability of both A and B occurring in a given universe. A universe is the space or set of all the events (like 1,…,6 in the dice roll experiment). Now, if we start from the presumption that B happened, the question is what the probability of A is in this case. The universe is no longer the initial one. The universe is B, because we know for certain that B has happened. One intuitive way to think about this is to find out the probability of both A and B happening (the intersection of these two events). Now that we know the probability of A and B happening, we divide that by the probability of B, because B is the new universe. Since an image is worth a thousand words, I think it is better to show some images. Below we have the two events A and B:
The black square represents the universe, or the set of all the events for a given experiment that we undertake. We’ve highlighted only two events here, A and B. We also see that these two events intersect. Now this intersection is also a probability: the probability that both A and B occur. But it is a probability in the bigger universe (the black square). In the case where we know for sure that B has occurred, and we want to know the probability of A given B, then B is the new universe, and the intersection of A and B (which is a probability in the initial universe) has to be normalized to the new universe. Therefore we can come up with this formula: P(A|B) = P(A and B) / P(B)
Based on this formula, we can derive the probability P(A and B), which is P(A and B) = P(B) * P(A|B)
And that makes sense, doesn’t it? The probability of A and B is the probability of B happening times the probability of A given that B happened.
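Going back to the dice makes this concrete: with A = {1,2} and B = {2,3}, the intersection is {2}, so P(A|B) = (1/6) / (2/6) = 1/2. A tiny sketch to verify (my own illustration, not from the original post):

from fractions import Fraction

universe = {1, 2, 3, 4, 5, 6}
A, B = {1, 2}, {2, 3}

p_b = Fraction(len(B), len(universe))            # P(B) = 2/6
p_a_and_b = Fraction(len(A & B), len(universe))  # P(A and B) = 1/6
print(p_a_and_b / p_b)                           # P(A|B) = 1/2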
Conditional probability IS NOT intersection
Bayes Theorem
Now that we know what conditional probability is, we can talk about Bayes’s theorem. Let’s start with what the conditional probability of A given B is: P(A|B) = P(A and B) / P(B)
From this we derived the formula for the probability of the intersection: P(A and B) = P(B) * P(A|B)
But wait a minute. The intersection operation is commutative: P(A and B) = P(B and A)
So that means that: P(B) * P(A|B) = P(B and A)
But we can also rewrite the intersection not just as the probability of B happening times the probability of A given B. We can rewrite it as the probability of A happening times the probability of B given A. That also makes sense, right? P(B and A) = P(A) * P(B|A)
We can conclude with this nice symmetry: P(B) * P(A|B) = P(A) * P(B|A)
Now, using this, we can rewrite the formula of conditional probability not in terms of the intersection but in terms of the other conditional probability: P(A|B) = P(A) * P(B|A) / P(B)
And that’s Bayes’s theorem: writing a conditional probability in terms of the other conditional probability. When you read about machine learning in general, you will read a lot about Bayes and probabilities. There are some terms that are heavily used:
The image above explains all the important terms related to Bayesian inference.
Let’s build the spam filter
Now, armed with all this probability stuff, we can proceed and build our simple spam filter. But then again, the question is why we had to go over all that boring math in the first place. What’s the point? How do these two apparently different things connect to each other?
Well, think of an e-mail as nothing more than a collection of words. Each word carries some information, right? Of course, not all the words. For instance, words like the, of, by don’t really tell us anything useful. We can throw those away along with all the punctuation marks. All the prepositions, conjunctions, pronouns and some adverbs do not interest us. We call them stop words: words that can be thrown away. In this link you can see all the stop words in the English language. Now that we’ve thrown away all the punctuation marks and all the stop words, we are left with the nouns, adjectives, verbs and some adverbs. These are the words that interest us, because these are the words that carry the actual information. Words like money, sex, viagra, gamble, poker, win, visa, card, etc. In our spam classifier we have two big classes of e-mails: spam and other e-mails. We call our classifier a binary classifier. Other, more complex text classifiers might have several classes. We would call those multiclass classifiers. An example would be a sentiment analysis classifier used on many review websites.
One thing that we need is a training set. Let’s suppose that we have a training set with 10,000 spam e-mails and 10,000 non-spam e-mails.
We will call the words in an e-mail features.
We will go through each e-mail, filter the words (throwing away all the stop words) and, based on the label of each mail (spam, non-spam), do some accounting. In the end we want to compute some conditional probabilities, therefore we need the total frequency of each word, the frequency of each word given each class, and how many spam and non-spam mails we have in total.
Below is Java code where we build the hash-maps that will keep track of all the word frequencies.
The snippets of code come from here. They were not written by me.
In abstract terms, we will denote an e-mail text as X, and suppose that it contains n words or features. So an e-mail text could be viewed as a set of features: X = {x1, x2, …, xn}
The question that we want to answer is: given this e-mail text, what is the probability that it’s spam? We will denote this as P(spam|X).
So the probability that we have a spam given this particular e-mail is built from the conditional probabilities of having a spam given each individual word that appears in the e-mail. Think about it this way: you have an e-mail. How can you assess whether it’s spam or not, other than by looking at each individual word and measuring whether that word is more likely to be present in a spam e-mail or a non-spam one?
For instance, if you see the word viagra, you may be tempted to think that this e-mail could be spam. On the other hand, if you see the word java, for instance, you would lean towards non-spam.
The probability of a spam given a single word is, as we already know: P(spam|xn) = P(xn and spam) / P(xn)
So let’s repeat that. The probability of having a spam given that we’ve seen the word xn is the probability of the intersection between the event of seeing the word xn in general and the event of having a spam e-mail. All of that is normalized to the new universe, which in this case is P(xn). The probability of xn is the probability of xn given a spam plus the probability of xn given a non-spam; that’s the total probability of that word appearing in general. So what we will be doing is dividing the probability of that word appearing in a spam by the total probability of that word.
So, for instance, if we have 10 mails, 5 spams and 5 non-spams, and the word viagra appears 8 times: 6 times in spam e-mails and 2 times in non-spam e-mails. You can compute that the probability of having a spam if we see the word viagra is 6/8, which is 0.75, and the probability of not having a spam e-mail when we see the word viagra is 2/8, which is 0.25 (a much lower value). That’s the intuition behind this computation.
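That per-word estimate fits in a few lines of Python (a sketch using the hypothetical counts above; the post’s real implementation is the Java code shown earlier):

spam_counts = {'viagra': 6}  # occurrences of each word across spam mails
ham_counts = {'viagra': 2}   # occurrences of each word across non-spam mails

def p_spam_given_word(word):
    spam = spam_counts.get(word, 0)
    ham = ham_counts.get(word, 0)
    total = spam + ham
    return spam / total if total else 0.5  # an unseen word carries no evidence

print(p_spam_given_word('viagra'))  # 0.75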
Below is a snippet of code that learns all the total probabilities and the conditional probabilities for each word encountered in the training set.
Here we have the two helper methods that increment a feature (word) frequency for a particular class (spam or non-spam) every time that word is encountered in the learning process above, and that increment the class count.
Now the big problem that remains is gathering real training data to train on. You can start here, or just google spam training data. You’ll find a lot of resources.
As I’ve said, the naive Bayes classifier used for spam filtering is a binary classifier, but we can extend it to many classes. The most obvious example of that would be sentiment analysis. Given a piece of text, you have to classify it as happy, sad, angry, uninterested, etc. That kind of thing is now used in review systems, where movies, or clips on YouTube for instance, are rated by the comments that users give.
To be continued…
|
Building a simple yet powerful spam filter with Naive Bayes Classifier
| 55
|
building-a-simple-yet-powerful-spam-filter-with-naive-bayes-classifier-1e6a98feaea6
|
2018-05-16
|
2018-05-16 08:09:51
|
https://medium.com/s/story/building-a-simple-yet-powerful-spam-filter-with-naive-bayes-classifier-1e6a98feaea6
| false
| 3,172
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Serban Liviu
| null |
ff4709f8c844
|
serbanliviu
| 31
| 32
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
8579fb994a58
|
2018-02-12
|
2018-02-12 11:45:39
|
2018-02-15
|
2018-02-15 18:05:55
| 1
| false
|
en
|
2018-02-15
|
2018-02-15 18:05:55
| 5
|
1e6d17e0b762
| 0.969811
| 0
| 0
| 0
|
Innovation is about problem-solving — the harder the problem, the greater opportunity to innovate.
| 4
|
The Maze №57 — Why Did You Rob The Bank?
Source
One Big Thing
Because that’s where the money is
The Girl Scout in San Diego, who’s not been identified, parked outside of a legal marijuana shop to sell cookies — and she managed to sell more than 300 boxes in six hours. According to the New York Times, that’s likely more than $1,500 raised.
Hart’s Comment
Don’t believe the headlines: at its core, innovation is not about technology, emerging markets, or applying an academic framework to a theoretical problem.
Instead, innovation is about problem-solving — the harder the problem, the greater the opportunity to innovate — the process of digging into a real pain point and solving it in a way that someone will pay for.
Easy to say, hard to do.
This young girl brought delicious cookies to a steady stream of people who may be hungry. That’s pretty good.
Before you build your next application — with MACHINE LEARNING or ARTIFICIAL INTELLIGENCE capabilities — try selling cookies to people who may be hungry. Go from there.
Want More?
You can read all issues of The Maze here. Feedback welcome! Just hit reply or ping me @hartcoin.
Capital — Twitter
Capital — Medium
|
The Maze №57 — Why Did You Rob The Bank?
| 0
|
the-maze-57-why-did-you-rob-the-bank-1e6d17e0b762
|
2018-02-15
|
2018-02-15 18:05:57
|
https://medium.com/s/story/the-maze-57-why-did-you-rob-the-bank-1e6d17e0b762
| false
| 204
|
The Maze
| null |
CapitalLabsHQ
| null |
Capital
|
admin@caplabs.co
|
capital
|
ENTREPRENEURSHIP,STARTUPS,BUSINESS,HISTORY,SMART CITIES
|
capitallabs
|
Innovation
|
innovation
|
Innovation
| 59,190
|
Capital
|
We build products and companies that move cities forward #SuperCities
|
3e87fd9ee405
|
Capital
| 177
| 319
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-30
|
2017-10-30 19:30:21
|
2017-11-06
|
2017-11-06 16:21:01
| 5
| false
|
en
|
2017-11-06
|
2017-11-06 16:21:01
| 5
|
1e6de97afb88
| 1.874843
| 0
| 0
| 0
|
Lassonde’s 2017 speaker series
| 5
|
Next 100 — imagining the future
Lassonde’s 2017 speaker series
Between now and 2120 we face great challenges:
climate change, clean energy, cyber security, inequality and automation, to name just a few.
The opportunities are even greater: big data, artificial intelligence, smart cities, driverless cars, 3D printing, augmented reality, and many more.
What do all these things have in common? They are complex, they are borderless and they are poised to change the way we learn, interact and live.
NEXT 100 is a series of public interactive events that explore how exponential technologies will impact the future of a number of spheres of public life including education, social interaction, art, entertainment and design.
We are bringing together innovators, educators, futurists, technologists, and leaders in business and art to debate these issues in front of a live audience.
Join us in imagining the future!
Explore the Series
Experts from tech companies, educators, industry leaders and — of course — students will engage in a debate about what education will look like in 2120.
Learn more
Experts, academics, social justice advocates and activists will discuss the inherent assumptions in how we design technology and what scientists, industries and governments can do to ensure that tech is representative of and accessible to all.
Learn more
Engaging with historical predictions about the future — from movies, art, science and policy — the panel will ask difficult questions about where we are heading as creators and consumers. Interactive questions from the audience will also drive the conversation. A visual installation or video presentation will provide the creative context for the discussion.
Learn more
Learn more about the series here
|
Next 100 — imagining the future
| 0
|
next-100-imagining-the-future-1e6de97afb88
|
2018-05-25
|
2018-05-25 19:12:37
|
https://medium.com/s/story/next-100-imagining-the-future-1e6de97afb88
| false
| 276
| null | null | null | null | null | null | null | null | null |
Technology
|
technology
|
Technology
| 166,125
|
Lassonde School of Engineering
|
We’ve invented a new engineering school for people who want to change the world. Lassonde School of Engineering at York University.
|
60ac9079fa2e
|
LassondeSchool
| 233
| 308
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
760151676bc7
|
2018-09-25
|
2018-09-25 14:15:37
|
2018-09-25
|
2018-09-25 14:18:55
| 1
| false
|
en
|
2018-09-26
|
2018-09-26 20:20:39
| 6
|
1e6e6b486d46
| 3.535849
| 2
| 0
| 0
|
This week I wanted to re-launch my newsletter and introduce some changes in its concept. As the artificial intelligence ecosystem matures, the…
| 1
|
Newsletter: AI investment activity, a platform perspective. #1
This week I wanted to re-launch my newsletter and introduce some changes in its concept. As the artificial intelligence ecosystem matures, the number and variety of AI startups keep growing. AI is becoming a must-have technology in various industries, and covering it with any care across all these domains is becoming increasingly challenging. At least for a one-person blog.
Starting with this newsletter, I'll be focusing on four themes, namely: industry 4.0, the entertainment sector, infrastructural software, and healthcare.
I'll take a somewhat wider view of industry 4.0 than is usually done[1] and include not only manufacturing but also other sectors/functions that are digitalised to a lesser extent. Examples of industry 4.0 startups from our portfolio are Connecterra and TraceAir;
Entertainment is a large sector that innovates constantly and is very receptive to AI. Creative industries, even beyond gaming, demonstrate a wide variety of AI applications. MEL Science is a vivid example of cognitive tech in the entertainment/education segments;
The infrastructural software bucket will include emerging AI infrastructure (modelling and training, deployment, etc.) as well as existing IT infrastructure. SQream is a great example of an infrastructural software company we had a chance to invest in;
Exploring healthcare, I'll focus on the digital side of the sector. Flo, a company my previous fund had an opportunity to invest in, demonstrates this theme.
The primary goal of the newsletter is to explore how AI startups are integrated into value/supply chains, and what kinds of platforms may be built around these startups.
Geographically, the newsletter covers the US, Canada, Europe and Israel. It's stage agnostic and covers startups from inception through growth stages.
17.09.2018–23.09.2018
Past week shows several interesting developments:
Software/algorithm quality is something AI is being applied to tackle. Edge Case Research focuses on the quality of the algorithms behind self-driving cars, while Mabi uses ML to automate QA. It reminds me of the application of AI to cybersecurity, where the only option to fight a clever attack is to employ clever algorithms[2]. So we witness the inception of a world where software creates software and algorithms fight algorithms;
Agriculture tech confirms its significance — a whopping $250M round in Indigo Agriculture follows a string of smaller deals in the UK: KisanHub in January, WeFarm in March, and Hummingbird Tech in April 2018. Precision Hawk, which raised last week, also serves agriculture businesses;
The ecosystem is building up around self-driving cars, and a lot of niche (though not small) businesses are emerging — Oxbotica with its autopilot and fleet management system, Smart Eye, which builds an eye-tracking system to keep drivers from falling asleep at the wheel, and WayRay, which develops an AR navigation system, to name just a few examples from the last week;
The idea that exceptional companies emerge outside Silicon Valley is supported by recent funding news — see WayRay, whose Russian founder moved to Switzerland, or Smart Eye, whose HQ is in Gothenburg.
Featured companies demonstrate several clever ideas about building technological and business platforms around AI.
Overall, at least 8 of the 15 startups build tech platforms for algorithm developers, hardware manufacturers, autonomous fleet managers, game developers, and others. For example:
Precision Hawk’s platform acts as an assemblage point for drone platforms (DJI BirdsEyeView), different payloads (video, thermal and other sensors) and mission control software;
Nengo, from Applied Brain Research, offers a simulator to build, test and deploy neural architectures on various hardware, e.g. GPUs, FPGAs, etc.
Building a business platform around AI tech seems to be a less obvious task. Only 6 startups have clearly identifiable offerings that address more than one element of a value/supply chain, for instance:
Indigo Agriculture is a notable example of a company that builds a platform around its microbial tech. Indigo not only offers treated seeds, but also connects growers and buyers on a marketplace and, on top of that, keeps consumers in the loop;
Precision Hawk not only offers a tech platform to build a drone and run a mission on, but also connects its customers with drone pilots;
WayRay goes beyond developing a navigation system and links it with a wider smart city ecosystem, supported, for example, by Hyundai Motor Group[3].
More examples of building tech and business platforms, as well as data on the featured startups, are in the table below. You may access the data here (a public Google spreadsheet).
Table 1. Selected AI transactions, $M, 17.09.2018–23.09.2018.
Notes
Data is from: PitchBook, Venturebeat, Techcrunch, Telegraph, GeekWire, qa-financial, and corporate websites. All data is from open sources and all conclusions/ideas/analysis are built only on publicly available information.
This newsletter does not intend to cover all AI transactions, but covers just four themes in a limited set of geographies.
* A company is positioned across a value chain if it has distinctive offers for several of its elements. If I misunderstood the value proposition of your startup, please do let me know.
** A company is defined as a platform in the tech sense if someone can build on top of it. I identify a company as a tech platform purely based on public sources, so if I misunderstood your startup, please do let me know.
[1]https://medium.com/speedinvest/the-future-of-manufacturing-is-now-6ffb45150b3
[2]https://www.wired.com/story/ai-machine-learning-cybersecurity/
[3]https://www.hyundai.com/worldwide/en/about-hyundai/news-room/news/hyundai-motor-to-develop-holographic-ar-navigation-with-strategic-investment-into-wayray-0000016041
|
Newsletter: AI investment activity, a platform perspective. #1
| 10
|
this-week-i-wanted-to-re-launch-my-newsletter-and-introduce-some-changes-in-its-concept-1e6e6b486d46
|
2018-09-26
|
2018-09-26 20:20:39
|
https://medium.com/s/story/this-week-i-wanted-to-re-launch-my-newsletter-and-introduce-some-changes-in-its-concept-1e6e6b486d46
| false
| 884
|
An investment and social perspective on technologies that were a science fiction yesterday, that are hyped today and will become habits tomorrow
| null | null | null |
metaverse
| null |
friends-of-ai-society
|
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,COMPUTER VISION,MERGERS AND ACQUISITIONS,DEEP LEARNING
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Peter Zhegin
|
A Venture Partner at Sistema.vc / ex Ozon.ru. I write about technologies that have consequences. A history buff. Write at metaverse.vc
|
2494e0b5cd82
|
peterzhegin
| 334
| 218
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-15
|
2018-05-15 23:47:54
|
2018-05-20
|
2018-05-20 16:08:05
| 0
| false
|
en
|
2018-05-20
|
2018-05-20 16:09:26
| 7
|
1e708c50fdd8
| 5.320755
| 4
| 0
| 0
|
I recently described how I set up my personal Deep Learning station.
| 5
|
Accessing your Deep Learning station remotely and setting up wake on lan
I recently described how I set up my personal Deep Learning station.
In this post I explain how I access the server from my laptop when I'm not home, so that I can work on my projects wherever I am. I use DynDNS instead of a static IP, and I use a Raspberry Pi to turn on my Deep Learning station when I need it. In case you don't have a Raspberry Pi and don't want to buy one, check whether your router supports OpenWrt, which supports wake on lan as well.
This article is organized as follows:
Enabling ssh with public key authentication
Using jupyter notebook through ssh tunnel
Configuring port forwarding and accessing the server from the internet
Configuring ddclient on your raspberry pi
Configuring wake on lan to wake up your Deep Learning station
1. Enabling ssh with public key authentication
First, assign a static ip to your server, either directly in Ubuntu, or in the settings of your router.
Then, install openssh in Ubuntu:
sudo apt-get install openssh-server
sudo service ssh status
The output should say active (running).
Next, we will enable Public Key Authentication. First, create a key-pair
ssh-keygen -b 4096
Choose a strong passphrase and then transfer the public key to your server
ssh-copy-id -i .ssh/id_rsa.pub <username_on_server>@<static_ip_of_server>
Log into your server with ssh <username_on_server>@<static_ip_of_server> and type cat ~/.ssh/authorized_keys to make sure that you did not accidentally add any additional keys.
Test whether public key authentication works:
ssh -i .ssh/id_rsa <username_on_server>@<static_ip_of_server>
After entering your passphrase (passphrase of the key, not password of the server) you should be logged in.
I usually disable password authentication when using public key authentication. Add the following lines to /etc/ssh/sshd_config on the server (replace 1234 with the port you want to open):
Port 1234
PasswordAuthentication no
ChallengeResponseAuthentication no
Restart ssh with sudo service ssh restart and make sure you are not able to login with your password:
ssh -p 1234 <username_on_server>@<static_ip_of_server>
It should say:
Agent admitted failure to sign using the key.
Permission denied (publickey).
In case you are still able to log in with your password, find out why with the verbose option -vv and make sure you can't. Leave me a comment if you need help.
2. Using jupyter notebook through ssh tunnel
Now, let us try running a jupyter instance on the server but work with the notebook on the laptop.
To do this, we open an ssh tunnel from port 8888 on the remote machine (the Deep Learning station/server) to port 8888 on the local machine (the laptop).
ssh -p 1234 -L 8888:localhost:8888 -i .ssh/id_rsa <username_on_server>@<static_ip_of_server>
Once logged in, type jupyter notebook and copy this line (with a different token, obviously)
http://localhost:8888/?token=033d36165a4803c1ca656a44fcd07df534950582a58065bf
into the address bar in the browser on your laptop (make sure the port is the same as the tunnel you opened, here 8888).
A jupyter notebook should open.
If all of this worked, open .ssh/config on the local machine/laptop in your favorite editor:
emacs .ssh/config
and add the following lines
Host <name>
HostName <static_ip_of_server>
User <username_on_server>
Port 1234
IdentityFile ~/.ssh/id_rsa
LocalForward 8888 localhost:8888
If you trust the local machine type ssh-add -K ~/.ssh/id_rsa and enter your passphrase so that you don’t have to enter it every time you want to access your server.
Now, you should be able to easily access your server in the local network typing ssh <name> and run a jupyter notebook as I described.
3. Configuring port forwarding and accessing the server from the internet
At this point I suggest that you enable port forwarding on your router and map a port (for simplicity we use the same port as we opened on the server, here 1234) to the port you opened on your server. Since this is slightly different on every device, I suggest you google how to do it for yours.
To find out if it works, go to https://www.whatismyip.com, copy your current ip address and try to ssh into your server:
ssh -p 1234 -i .ssh/id_rsa <username_on_server>@<ip_you_just_found_out>
Did it work?
4. Configuring ddclient on your raspberry pi
Unless you have a static ip assigned by your internet service provider, I suggest that you sign up for a DynDNS service like https://www.noip.com to keep track of your current ip address.
You can configure DynDNS in your router; however, the low-quality router my ISP provided does not allow using free DynDNS services, and I'm currently not using OpenWrt. Since I have a Raspberry Pi running all the time, I installed ddclient on it. First, enable ssh on your pi and set up public key authentication as I described above. Don't choose the same port as you did for the server (in theory you could use the same port in the local network and then forward a different port from the outside to the static IP address of your pi). Forward this port to the static IP of your pi in your router's settings.
Ssh into your pi and install ddclient:
sudo apt-get install ddclient
The service noip.com is not among those that can be set up directly during the installation of ddclient. Just confirm the default choices; we will edit the configuration file afterwards:
sudo emacs /etc/ddclient.conf
Paste the following:
# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf
protocol=noip
use=web
web=checkip.dyndns.org
server=dynupdate.no-ip.com
login=<your_username_or_mail>
password='<your_password>'
<domain_you_chose_while_registering_on_noip>
After that, set run_ipup="false" and run_daemon="true" in /etc/default/ddclient (on your pi, obviously).
Test if ddclient works:
sudo ddclient -daemon=0 -debug -verbose -noquiet 2 /etc/ddclient.conf
Check on www.noip.com whether the IP is the one shown on www.whatismyip.com.
If yes, ddclient is working correctly. Restart ddclient so that the changes in /etc/default/ddclient take effect:
/etc/init.d/ddclient restart
5. Configuring wake on lan to wake up your Deep Learning station
Finally I enabled wake on lan in order to be able to switch my Deep Learning station on remotely whenever I want.
First, enable wake on lan in your BIOS. For my ASRock motherboard I had to enable the option “PCI Devices Power On” in “ACPI Configuration“. Google how to do it for yours.
Next, install ethtool on your server, enable wake on lan, and make sure that the computer does not switch off your LAN interface when shutting down. Follow this guide, which I briefly summarize here:
Type on your server:
sudo apt-get install ethtool
sudo ethtool eth0
Look for the lines "Supports Wake-on" and "Wake-on" to make sure your setup supports wake on lan. Note: your interface is not necessarily called eth0; type ifconfig to find out what it is called on your machine.
Enable wake on lan
sudo ethtool -s eth0 wol g
Since you would have to run this line every time you boot your computer, we add ethtool -s eth0 wol g to /etc/rc.local (use an editor with root privileges).
To prevent your computer from switching off the network interface when shutting down, add NETDOWN=no to /etc/default/halt.
Install wakeonlan on your pi and maybe on your laptop (if you don’t have a pi and don’t want to buy one, check whether your router supports OpenWrt which supports wake on lan as well):
sudo apt-get install wakeonlan
Finally, try to wake up your server from your laptop or your pi:
wakeonlan -i 192.168.1.255 -p 1234 <mac_address_of_lan_interface_of_server>
The argument for -i has to be the broadcast address of your subnet. This example should work for most out-of-the-box setups; if not, type ifconfig and look for broadcast. Use the port you chose when setting up ssh on your server. Type ifconfig on the server to find out the MAC address of your interface (format like 00:0g:5f:17:27:7b). If you are able to wake up your server, put a shell script "wol.sh" in the home folder of your pi and make it executable, so that you can easily run it when you log in to your pi to quickly wake up your server remotely.
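The wol.sh script itself is just a wrapper around the wakeonlan command above. For the curious, here is a minimal Python sketch of what that tool does under the hood (the MAC and broadcast addresses below are placeholders, not real ones):

import socket

def send_magic_packet(mac, broadcast="192.168.1.255", port=9):
    # A Wake-on-LAN "magic packet" is 6 bytes of 0xFF followed by the
    # target MAC address repeated 16 times, sent as a UDP broadcast.
    # The NIC only inspects the payload, so any UDP port works.
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address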
Finally, put the following in your .ssh/config on your laptop
Host DL
HostName <domain_you_chose_on_noip>
User <username_on_your_server>
Port <port_you_forwarded_to_your_server_for_example_1234>
LocalForward 8888 localhost:8888
IdentityFile ~/.ssh/id_rsa
Host pi
HostName <domain_you_chose_on_noip>
User <username_on_your_pi>
Port <port_you_forwarded_to_your_pi_for_example_4321>
IdentityFile ~/.ssh/id_rsa
Summary:
Now you can type ssh pi to log into your pi remotely, run the wakeonlan script ./wol.sh, and then, once your server has started, ssh into it with ssh DL. Run jupyter notebook, copy the URL into the address bar in the browser on your laptop, and work wherever and whenever you want :)
|
Accessing your Deep Learning station remotely and setting up wake on lan
| 5
|
accessing-your-deep-learning-station-remotely-and-setting-up-wake-on-lan-1e708c50fdd8
|
2018-06-01
|
2018-06-01 18:50:14
|
https://medium.com/s/story/accessing-your-deep-learning-station-remotely-and-setting-up-wake-on-lan-1e708c50fdd8
| false
| 1,410
| null | null | null | null | null | null | null | null | null |
Raspberry Pi
|
raspberry-pi
|
Raspberry Pi
| 4,517
|
Fabio M. Graetz
|
Theoretical Astrophysics | Deep Learning | Bespoke Shoemaking | Berlin
|
fb820388a7e9
|
fabiograetz
| 56
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
99b2a8be730d
|
2018-06-07
|
2018-06-07 07:46:57
|
2018-06-12
|
2018-06-12 06:33:39
| 2
| false
|
en
|
2018-06-12
|
2018-06-12 06:33:39
| 6
|
1e71f5ee2747
| 2.43239
| 3
| 0
| 0
|
As part of delivering on transparent machine learning, we wanted to walk through the technology that underlies ModelDepot Percept, so that…
| 5
|
Ship Icon made by Freepik from www.flaticon.com
Percept — What’s Inside the ML Container
As part of delivering on transparent machine learning, we wanted to walk through the technology that underlies ModelDepot Percept, so that you can understand the expected behavior, performance, advantages, and potential pitfalls of our system. We’re not a fan of black boxes, and this product is no different.
We've bundled a robust set of technologies together with rigorous user testing to ensure that you have the best experience using our product (and let us know if you aren't!). We're confident that you'll have a good experience, which is why we're confident in laying out the details in the open for you to see.
With that, let’s dive into the technicals.
Deep Learning Backbone
Today, the ML workhorse packaged in every Docker image is a ResNet v1 101 architecture trained on Google's Open Images v2 dataset. The model comes out of the box with 5,000 classes, which is more diverse than ImageNet. The pre-trained nature of this model allows you to get started classifying images quickly, with no training data on your side. The diversity of the training data also allows you to quickly extend the model via transfer learning to a breadth of new classes it does not know. We'll cover the technique in the next section.
In the future, we hope to provide a variety of model architectures trained on both Open Images as well as ImageNet to allow you to choose between speed, and classes that are more applicable to you out of the box. Let us know if you have any preferences, and we’ll try to get it done.
Few Shot Learning Technique
Of course one of the main advantages of using ModelDepot Image Classification over a typical cloud provider is the ability to train on novel image classes with your own datasets. We have a tutorial over here showing you how to do it if you haven’t explored that feature yet.
We currently use features extracted from the backbone model and train a Support Vector Machine on them to produce confidence values for the new classes you've trained. While this is a simple method, it serves as a powerful baseline and performs favorably against state-of-the-art methods. A rough sketch of the technique is below.
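As an illustration of that technique, and explicitly not ModelDepot's actual code (torchvision's resnet101 is ImageNet-pretrained rather than the Open Images backbone Percept ships, and every file name below is made up):

import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.svm import SVC

backbone = models.resnet101(pretrained=True)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep the 2048-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract(paths):
    # Run the frozen backbone over a batch of image files.
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        return backbone(batch).numpy()

# Hypothetical few-shot training set: a handful of labeled images per new class.
train_paths = ["cat1.jpg", "cat2.jpg", "dog1.jpg", "dog2.jpg"]
train_labels = ["cat", "cat", "dog", "dog"]

clf = SVC(kernel="linear").fit(extract(train_paths), train_labels)
print(clf.predict(extract(["mystery.jpg"])))

The appeal of this design is that the expensive deep network is trained once and frozen; adding a new class only means fitting a small SVM on a few feature vectors.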
Over time, we hope to include more techniques that provide even more accurate few-shot learning results. We hope to also make it easy to fine-tune the DL model itself, and potentially re-train the model from scratch if you own a big enough dataset.
With that, we hope we've shed some light on some of the magic, which isn't so magic, behind our ML containers. Armed with this knowledge, you'll hopefully be able to do further research on how well our product might fit your use case, and let us know of any feedback on how we can do better.
I hope to see you explore how ML can improve your product or user’s experience, and even better if ModelDepot is able to help you along the way.
Interested in trying ModelDepot Percept? Get started for free here! If you have any questions, don’t hesitate to reach us via the in-site chat on the bottom right or via email at hi@modeldepot.io.
|
Percept — What’s Inside the ML Container
| 52
|
percept-whats-inside-the-ml-container-1e71f5ee2747
|
2018-06-16
|
2018-06-16 12:10:59
|
https://medium.com/s/story/percept-whats-inside-the-ml-container-1e71f5ee2747
| false
| 543
|
ModelDepot is a place where you can find and share optimized, pretrained ML models that are perfect for your development needs.
| null | null | null |
ModelDepot
|
has727@g.harvard.edu
|
modeldepot
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,PROGRAMMING,BUSINESS,SOFTWARE ENGINEERING
|
modeldepot
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Mike Shi
|
Democratizing Open Machine Learning @ modeldepot.io
|
f87085c18f44
|
mikeshi
| 621
| 262
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-11
|
2018-06-11 06:44:50
|
2018-06-11
|
2018-06-11 07:15:36
| 1
| false
|
en
|
2018-06-11
|
2018-06-11 07:15:36
| 1
|
1e73839a76fd
| 3.89434
| 1
| 0
| 0
|
Last week the retail industry flocked to New York City for the NRF Big Show. The conference brought together over 35,000 retail industry…
| 3
|
Retail Technology And Marketing Trends On The Rise For 2018
Last week the retail industry flocked to New York City for the NRF Big Show. The conference brought together over 35,000 retail industry professionals from all over the globe to learn about the latest retail trends and network with peers.
While there was plenty of buzz about the anticipated bottom line boosts from tax reform, I was more interested in the technology and marketing trends poised to dominate 2018. I recently asked leading venture capitalists their perspectives on the future of retail, so I used the NRF Big Show to get the retailer point of view.
Between the NRF Foundation Gala and the Big Show, I interviewed seven retail executives and found out what’s making them excited in 2018. Here are a few things they are working on.
1) Retailers continue to strive for more personalized digital experiences.
Terry Lundgren, Executive Chairman at Macy’s; Karen Katz, President & CEO of Neiman Marcus Group and Jane Elfers, CEO at The Children’s Place made it clear personalization remains a big focal point in 2018.
Macy’s is continuing to focus on personalization according to Lundgren, “The whole concept of personalization is simply on steroids right now. It’s all about the consumer in that one moment in time. We’re doing anything we can do to connect directly with consumers and make shopping convenient for them.”
Retailers have worked on improving personalization for years, but as Katz believes, it all comes down to customer experience: "Great customer experience in 2018 will come from blending technology with a more personalized touch. I think the people who can combine technology-powered personalization with a human will be the winners."
2) Connecting online and physical store experiences remains a big focus.
Retailers like Macy’s and The Children’s Place are still hot on ‘omnichannel retail,’ the term used to describe how retailers connect online and offline shopping behaviors. According to Lundgren, Macy’s is continuing to see serious growth in the area of “buy online, pick up in store” (BOPUS). He believes “physical stores are not going away. Customers will always want the option of coming into the store to try on jeans instead of buying three different sizes online.”
The Children’s Place is also making a “big move towards digital and employing a lot of the omnichannel use cases like BOPUS and ‘Save the Sale,’” added Elfers. ‘Save the Sale’ requires store associates to have the ability to access real-time inventory across the network of stores. This inventory access enables store associates to keep customers from walking away from a purchase by finding their desired item online or at another store location with ease.
3) More retailers will leverage Amazon Alexa, Google Home and other voice assistants.
Charlie Cole, Global Chief eCommerce Officer at Samsonite believes voice assistants will start taking off in certain categories, “For example, categories like consumables will start to take off in voice, but categories like fashion may have a harder time.” As Cole eloquently illustrates, “No one is going to say ‘Hey Google/Alexa, order me a $1,200 cashmere sweater.’”
1–800-Flowers.com has been bullish on voice for a while, having been one of the first retailers to launch an Alexa skill. Chris McCann, CEO & President, believes bots and AI capabilities (all built on big data, deep learning and deeper analytics) will enable 1–800-Flowers.com to supercharge its personalized experience. He envisions the possibilities: "With voice as the main interface emerging, I think it will bring us back to the retail experience of our first flower shop, where we delivered a true 1-to-1 relationship. Voice enables us to have a 1-to-1 relationship with customers on a massive scale."
4) As artificial intelligence (AI) is maturing, more AI-powered retail applications are gaining adoption.
According to Cole, 2018 is the year that "Artificial Intelligence will have its breakthrough moment. More and more retailers will start using it to power various parts of the retail and ecommerce experience."
Katz described how Neiman Marcus utilizes AI to deliver a highly personalized experience, but only in conjunction with a human touch. While 1–800-Flowers.com uses AI to power conversational interfaces like Alexa, Google Assistant and Facebook Messenger, many of which need no human touch at all. For McCann, “Our job is to make ordering flowers easier, so we’re committed to being wherever the customers are. If they use voice, we’re there. If they want to use the [Facebook] messenger platform, we’re there.”
5) Personalization, social media and Amazon Marketing Services receive top billing for preferred 2018 acquisition marketing strategies.
Both BJs and The Children's Place highlight the importance of personalized marketing strategies. Chris Baldwin, CEO of BJs, explained, "Because we are a membership club, we have a ton of information about what our customers buy that we can use for targeting." Elfers' new personalization strategy began in 2017, when she hired a data scientist to clean up the customer database. Now that The Children's Place can connect customer purchases online and in-store, "it will help make acquisition, engagement and retention strategies more personal," Elfers asserted.
Social media ranks as the top marketing priority for Brad Weston, CEO of Petco, because people genuinely love sharing and liking posts about pets. On the flip side, Cole (Samsonite) believes, "Amazon Marketing is going to become as critical to a brand's marketing strategy as Google and Facebook. Today, Amazon has the return-on-investment potential of Google Paid Search in 2005 and display ads in 2002."
While there is still a lot of work for retailers trying to keep up with Amazon, the executives I spoke with seemed confident and eager for their 2018 technology, data and AI implementations. As for me, I’m looking forward to a year of new retail innovations and the fantastic customer experiences they promise to deliver.
Source: Forbes
|
Retail Technology And Marketing Trends On The Rise For 2018
| 5
|
retail-technology-and-marketing-trends-on-the-rise-for-2018-1e73839a76fd
|
2018-06-11
|
2018-06-11 07:15:37
|
https://medium.com/s/story/retail-technology-and-marketing-trends-on-the-rise-for-2018-1e73839a76fd
| false
| 979
| null | null | null | null | null | null | null | null | null |
Retail
|
retail
|
Retail
| 16,358
|
Retail Technology Trends
|
Retail technology trends including ERP, Artificial Intelligence, Robotic Process Automation and future tech
|
94d8c3e06b7
|
retailtech
| 10
| 17
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f5af2b715248
|
2018-02-26
|
2018-02-26 16:47:30
|
2018-02-26
|
2018-02-26 16:51:35
| 1
| false
|
en
|
2018-02-26
|
2018-02-26 16:51:35
| 1
|
1e744b1afdfe
| 0.649057
| 3
| 0
| 0
|
Last Tuesday, my brain nearly exploded. I was writing some marketing collateral and I had finally slipped into the zone. The words were…
| 5
|
How bots can help us win the battle against distraction
Last Tuesday, my brain nearly exploded. I was writing some marketing collateral and I had finally slipped into the zone. The words were starting to flow.
Then five red Slack notifications appeared. A colleague messaged me on WhatsApp. My cell phone buzzed and I cursed myself for failing to put it on silent. Three urgent emails popped into my (regrettably open) inbox and, worst of all, my actual desk phone rang.
My focus was gone. I forgot what I was doing and turned my attention to all these bright, buzzing alerts. I looked up some research numbers and got sucked into a story about…
Continue reading the story
|
How bots can help us win the battle against distraction
| 4
|
how-bots-can-help-us-win-the-battle-against-distraction-1e744b1afdfe
|
2018-04-11
|
2018-04-11 15:05:58
|
https://medium.com/s/story/how-bots-can-help-us-win-the-battle-against-distraction-1e744b1afdfe
| false
| 119
|
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
| null | null | null |
The Startup
| null |
swlh
|
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
|
thestartup_
|
Tech
|
tech
|
Tech
| 142,368
|
The Startup
|
Editors of The Startup, Medium's largest publication for makers https://medium.com/swlh
|
f0236d5369c
|
thestartup_
| 15,109
| 506
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
b230ea2a6eb8
|
2017-08-29
|
2017-08-29 14:04:42
|
2017-08-29
|
2017-08-29 14:05:46
| 1
| false
|
en
|
2017-10-16
|
2017-10-16 14:25:41
| 5
|
1e74cc0ad79b
| 3.411321
| 8
| 2
| 0
|
Now more than ever, people are talking about the impact AI and machine learning will have on our society. Especially when everyday…
| 5
|
This Is How IoT and Machine Learning Will Change The World
Now more than ever, people are talking about the impact AI and machine learning will have on our society. Especially when everyday consumers and technology users see headlines like this: Facebook Shuts Down Robots After They Invent Their Own Language.
The truth is, for as much as we know about the potential of automation, artificial intelligence, chatbots, and more, we still aren’t quite sure about all the things we don’t know. With every new discovery and step forward comes a range of both technological and moral implications of those very innovations.
What we can say for sure is that we are becoming increasingly connected. A good analogy for it would be to say that we, as a modern society, are beginning to operate and function similarly to open-source code. We are learning that it’s more beneficial to share our information than it is to withhold it, because it’s through sharing that we are better able to find what it is we’re looking for, faster and more effectively. Of course, the shadow of this level of connection is when things like advertising targeting cross the line between accuracy and intrusiveness.
The entire purpose behind innovations in the IoT space is to help information move between parties effortlessly, seamlessly, and with a sense of pleasant surprise. For as much as we condemn technology and the ways digital messaging interferes with our everyday lives, we can all recall a moment when the right message has appeared at the right time, creating a perfect user experience. Tiny moments like these are what motivate us to keep pushing forward. We, as fast-paced consumers, believe that very moment can (and should) happen everywhere, all the time.
Digital platforms are becoming more intimately integrated with everyday objects. A timely example would be Amazon's decision to purchase Whole Foods. That decision was not made because Jeff Bezos has a personal interest in organic food (maybe he does, who knows), but because he, and Amazon's leadership team, sees value in leveraging their knowledge of online shopping and product shipping to improve the way consumers shop for their groceries. In fact, they've already pulled back the curtain with their Amazon Go grocery store, which allows customers to walk in, add the products they want to buy to their digital shopping cart, and then leave without having to talk to a cashier. Everything is automatically charged to your Amazon account.
From there, you can imagine how machine learning capabilities will begin to improve upon that process. Amazon will begin to recognize what products you purchase most often and send push alerts to promote discounts or deals on your favorite items, and even offer to duplicate your order and have it sent directly to your home. It’s this very thought process that tends to instigate fear in some people, but come on… who wouldn’t want groceries delivered to their home every week? How convenient is that?
Now, playing devil's advocate here, another example of how machine learning could be both extremely helpful and potentially intrusive is an idea we came up with at Chronicled: a connected toothbrush. Once you have something as trivial as a toothbrush with a sensor in it, tracking your hygiene habits and behaviors, the implications of that data tracking can directly impact the relationship you have with your health insurance provider. They may notice that you're not brushing your teeth the recommended number of times a day and decide to increase your monthly premium.
What most people don’t realize is that we already have the technology to create things like a connected toothbrush. The challenge lies more so in actually tracking that data, and then using machine learning to analyze and provide reactive responses to it — an insurance company recognizing a behavioral shift and deciding to increase or decrease your monthly premium, for example.
The thing to remember with these sorts of emerging technologies is that they can be extremely powerful and beneficial, but as with anything else, in excess they can become intrusive. Even with the connected toothbrush example, insurance companies that abuse data collection practices can end up taking advantage of customers — which is why I, and many others in this space, are advocates for moving into a world where consumers own their own data, which can be done through use of the Blockchain.
Regardless, it’s important to note that we’re past the point of hypothesis. These advancements in machine learning and connectivity are already happening. We’re at the point of questioning and refinement. Now that we’re seeing how these practices will take place in our everyday lives, what can we do to integrate them without disrupting the fabric of our society?
If you enjoyed this story, please click the 👏 button and share to help others find it! Feel free to leave a comment below.
The Mission publishes stories, videos, and podcasts that make smart people smarter. You can subscribe to get them here. By subscribing and sharing, you will be entered to win three (super awesome) prizes!
|
This Is How IoT and Machine Learning Will Change The World
| 129
|
this-is-how-iot-and-machine-learning-will-change-the-world-1e74cc0ad79b
|
2018-03-31
|
2018-03-31 14:34:48
|
https://medium.com/s/story/this-is-how-iot-and-machine-learning-will-change-the-world-1e74cc0ad79b
| false
| 851
|
We publish stories, videos, and podcasts to make smart people smarter. Subscribe to our newsletter to get them! www.TheMission.co
| null |
TheMissionHQ
| null |
The Mission
|
Info@TheMission.co
|
the-mission
|
TECH,ENTREPRENEURSHIP,STARTUP,LIFE,LIFE LESSONS
|
TheMissionHQ
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Sam Radocchia
|
Co-Founder at Chronicled // Blockchain // Forbes 30 Under 30
|
a33e9ef2e10a
|
iamsamsterdam
| 1,248
| 198
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-10
|
2017-09-10 20:55:25
|
2017-09-16
|
2017-09-16 02:26:01
| 1
| false
|
en
|
2017-09-16
|
2017-09-16 02:26:01
| 1
|
1e75e6c7de75
| 1.916981
| 1
| 0
| 0
|
“I looked at your client’s Instagram growth rate. It’s… not terrible.”
| 5
|
Pick Up Artist Techniques in SEO
“I looked at your client’s Instagram growth rate. It’s… not terrible.”
I laughed at his decision to start a sales call with negging. I was speaking with a representative from a social media optimization company who wanted to discuss social media strategy for one of my sports clients. He sounded relatively young and used a fast-talking, don’t-interrupt-me tone throughout our conversation. He wasn’t quite reading from a script, but definitely had a list of talking points he needed to express quickly.
In case you aren’t familiar with the term, “negging” is a technique made infamous by Neil Strauss in his book The Game. He promises readers will be turned into seduction masters by using psychological tricks and high-pressure marketing methods. Negging involves approaching your “mark” and paying them a backhanded compliment wrapped in an insult: “You’re really pretty, but you look high maintenance,” or “I love your crooked smile.”
A neg's intended effect is to create cognitive dissonance between the compliment and the insult, in order to surprise the person being negged. The hoped-for result is that the "mark" will now want to work hard at proving themselves to you in order to counter the perceived deficiency. Then sex.
Negging works exactly like you’d expect it to: most people think the person insulting them is an idiot.
The sales rep went on to say that Instagram likes were the key to “making it big” in Hollywood, and the best way to get big brand endorsement deals. He also asked me if I knew that my client “could make as much as $10,000 for a role in a movie without any acting skills?” (or as a friend pointed out, actors can actually make up to $10,000,000 for a role in a movie without any acting skills. LOL). Apparently, casting decisions these days are made mostly from Instagram likes.
When I mentioned my studio and agency background (I was a senior attorney at Paramount Pictures and founded ICM Partners’ New Media Group), he quickly dismissed my experience as “old school.” He reiterated that social media is all that matters. His company’s services (costing over $3,100 a year) allegedly use “an A.I.” to find followers, with no guarantees of follower growth.
I thanked him for his time and told him I enjoyed the call.
His follow-up email heightened the techno-absurdity: “Through state of the art facial recognition & analysis, [we] calculate which camera angle is best for you based on engagement & fan reactions. Interact with 3D displays that show you to an exact degree what pose makes a statement on your page.”
The future of marketing is here, and we’re being negged into it.
|
Pick Up Artist Techniques in SEO
| 30
|
pick-up-artist-techniques-in-seo-1e75e6c7de75
|
2018-03-27
|
2018-03-27 16:32:26
|
https://medium.com/s/story/pick-up-artist-techniques-in-seo-1e75e6c7de75
| false
| 455
| null | null | null | null | null | null | null | null | null |
Seo Agency
|
seo-agency
|
Seo Agency
| 3,946
|
George Ruiz
|
Talent manager. Producer. Law Professor. CEO of Intelligent Arts + Artists.
|
3c991d9878e8
|
georgeruiz
| 1,152
| 542
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-17
|
2018-06-17 02:01:35
|
2018-06-17
|
2018-06-17 02:04:51
| 1
| false
|
en
|
2018-06-17
|
2018-06-17 02:04:51
| 1
|
1e774607e07
| 0.290566
| 1
| 0
| 0
| null | 5
|
δ blockchain Deep Learning Algorithms and Artificial Neural Networks for Machine Learning. Combat Identification Beacon Spot Instances, Connected Devices and Applications FriendshipCube Group. https://aws.amazon.com/
|
δ blockchain Deep Learning Algorithms and Artificial Neural Networks for Machine Learning.
| 1
|
δ-blockchain-deep-learning-algorithms-and-artificial-neural-networks-for-machine-learning-1e774607e07
|
2018-06-17
|
2018-06-17 12:31:28
|
https://medium.com/s/story/δ-blockchain-deep-learning-algorithms-and-artificial-neural-networks-for-machine-learning-1e774607e07
| false
| 24
| null | null | null | null | null | null | null | null | null |
Δ Blockchain
|
δ-blockchain
|
Δ Blockchain
| 0
|
Graeme Kilshaw
|
Team Leader with the Friendship Cube Group
|
69e9af018727
|
Cube
| 27
| 28
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-03
|
2018-09-03 19:02:18
|
2018-09-04
|
2018-09-04 22:28:55
| 2
| false
|
en
|
2018-09-04
|
2018-09-04 22:35:20
| 0
|
1e79841ba5a0
| 2.983333
| 0
| 0
| 0
|
What does “uncanniness” (from the Masahiro Mori article) teach us about the proximity and distance of robots to ourselves? What can we…
| 1
|
Uncanny Valley
What does “uncanniness” (from the Masahiro Mori article) teach us about the proximity and distance of robots to ourselves? What can we learn from the distance and proximity between robots (like Marius and Sulla) and people (like Domin and Helena) in R.U.R.?
The idea of human psychology expressing an "uncanny valley" when relating to some sort of "ambiguous familiarity" is a very interesting one. It doesn't seem to be restricted to robots, or even to things related to the human body, but extends to a broader spectrum that includes all animal-like features. As Mori states, the results are "eerie", more than eliciting the feeling of danger. It's, in our modern terms, "creepy". I wonder if the uncanny valley composes a subset of what we consider eerie/creepy or if it is an explanation for everything that is eerie/creepy (i.e. only things that fall into this uncanny valley are creepy; that's what the adjective is used for). Mori doesn't venture into explaining why they would be eerie, other than invoking some survival instinct against proximal danger. So, there's a connection between being afraid of danger (experiencing fear) and the uncanny valley.
Nevertheless, the uncanny valley is not meant to capture extreme fear like horror (as when we watch The Exorcist). Images of a "possessed" person, a demonic face and the like are also proximal dangers that trigger fear, so by Mori's definition we would think they should be in the valley; and yet they do not seem to fall into the category of eerie, even though they look very close to having human-like (or animal-like) features and present movement (where movement accentuates the feeling of eeriness, because it is another step in their resemblance to us, to life). So eeriness has to be a toned-down version of that fear. It's possible that in the case of demons and the like we find the familiarity to our own bodies and movement, but there's no ambiguity about their nature (they are clearly "evil"). Eeriness might be triggered by uncertainty about the "evilness" of the alien entity.
This shows us what "uncanniness" can teach us about the proximity of robots to ourselves: following this train of thought, it is the doubt about the "goodness/evilness" of the entity that produces this reaction. I think the example Mori gives of the very slow, "forced" smile shows this explanation well. A smile can generally be considered a sign of a benign (or non-threatening) being. We can also easily recognize an evil grin. But a smile in slow motion simulates this usually well-intended expression with enough mistakes (in this case, in the pacing, the "natural movement") to make us doubt whether it might actually be (or become) an evil grin. It is this uncertainty which, I'd say, evokes the creepiness experienced. So, in essence, we see an expected "safe" behavior (or appearance) that is nevertheless not clearly distinguishable from a "dangerous" (threatening) one. We see both the closeness of the robot to us and its distance, which makes us uncertain whether we are unsafe or just paranoid.
In R.U.R. the feeling of uncanniness was harder for me to grasp, since it seems to be something that is primarily expressed visually, so it would depend on the portrayal of the robots. But the characters do mention some grossness surrounding the robots' bodies, which are nevertheless biological and human-like, hinting at the in-between I discussed above. For me, the most salient eerie moment in the play is the gift of the sterile flower to Helena. This is because it is a gesture that seems very friendly and unthreatening, while also seemingly conveying a possibly "evil" message about how humans are becoming sterile themselves: a veiled threat. But it's not clear what message was intended, and therefore this uncertainty arises. The uncertainty comes from not knowing whether the robots can express veiled threats to begin with. They look familiar to us, and their behaviors might seem unthreatening, but something is a bit off, which doesn't allow us to know whether they have "evil intentions" or not.
|
Uncanny Valley
| 0
|
uncanny-valley-1e79841ba5a0
|
2018-09-04
|
2018-09-04 22:35:20
|
https://medium.com/s/story/uncanny-valley-1e79841ba5a0
| false
| 689
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Alejandra
| null |
75ac1a37581b
|
alejandraarciniegas
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
|
cd Downloads
sudo tar -zxvf ./hadoop-2.7.4.tar.gz -C /usr/local
cd /usr/local
sudo mv ./hadoop-2.7.4/ ./hadoop
sudo addgroup hadoop
sudo chown -R hduser:hadoop hadoop
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value> file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value> file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
cd /usr/local/hadoop/etc/hadoop
sudo cp mapred-site.xml.template mapred-site.xml
cd ~
sudo gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
sudo chown -R hduser:hduser /usr/local/hadoop
hadoop namenode -format
start-yarn.sh
start-dfs.sh
cd ~/Downloads
hadoop fs -mkdir -p /user/hduser/input
hadoop fs -copyFromLocal jane_austen.txt /user/hduser/input
| 10
|
b00c99190ee9
|
2017-12-24
|
2017-12-24 01:09:16
|
2017-12-24
|
2017-12-24 01:09:16
| 14
| false
|
zh-Hant
|
2017-12-25
|
2017-12-25 13:39:55
| 17
|
1e7b70f86b04
| 2.793396
| 1
| 0
| 0
|
[What Exactly Is Data Science, From a Complete Outsider's Perspective]
| 5
|
[What Exactly Is Data Science, From a Complete Outsider's Perspective][06] Setting Up a Hadoop Environment, Part 2
2017–12–24T08:54:00+08:00
Image sources: https://pixabay.com/en/books-spine-colors-pastel-1099067/ and https://pixabay.com/en/math-blackboard-education-classroom-1547018/
In the previous post ([05] Setting Up a Hadoop Environment, Part 1), we installed Ubuntu through VMWare Player and configured the related settings, which gave us the foundation for Hadoop.
This post continues from that environment: we will install Hadoop on top of it and have Hadoop run a hello world example.
[05] Setting Up a Hadoop Environment, Part 1
[What Exactly Is Data Science, From a Complete Outsider's Perspective]medium.com
Preparing the environment
Setting up the Hadoop test environment
Running Hadoop's Hello World: WordCount
Conclusion
Preparing the environment
This list is the same as in the previous post; if you already downloaded everything there, you can skip it:
Host environment
The machine specs used below are:
OS — Windows 10 1703
CPU — i7–6500U dual-core
Memory — 16GB
VMWare Player 14
Any virtual machine software will do; it just happens that I use VMWare Player 14.
Download page
File size is about 90MB
Ubuntu 16.04.3
Other versions of Ubuntu are fine as well. If you use Ubuntu 14, only the upcoming openjdk installation step will differ; everything else is the same.
Download page
Direct download (about 1.4GB)
Hadoop v2.7.4
Basically any v2.x works; I just happened to have 2.7.4 on hand, so I didn't download a newer one. If you use v3.0, the configuration will be different.
Download page
Direct download (about 254 MB)
The MapReduce Hello World program: WordCount
This is the hello world program used to test MapReduce:
WordCount2.jar
jane_austen.txt — the first three chapters of Pride and Prejudice, used to test the word count
Setting up the Hadoop test environment
The overall environment setup can be divided into a few parts:
Installing the Ubuntu VM
Configuring the Ubuntu environment
Installing and configuring Hadoop
Testing Hadoop
This post covers the third and fourth steps.
Installing and configuring Hadoop
Downloading and extracting Hadoop
First, use Firefox to download Hadoop (direct download) into the Downloads folder.
The final download location
Run the following commands in the Terminal (shortcut Ctrl + Alt + t):
These extract the archive, move it to /usr/local/hadoop, and set the permissions.
After extraction you can see the hadoop folder
Configuring hadoop/etc/hadoop/core-site.xml
Run in the Terminal: gedit /usr/local/hadoop/etc/hadoop/core-site.xml
Inside the configuration element, enter:
This sets where the NameNode is located. The NameNode will be introduced later, but it is basically the Master that controls HDFS.
Screenshot of the modified core-site.xml
Modifying hadoop-env.sh
Here we need to hard-code the value of ${JAVA_HOME} (in theory this shouldn't be necessary, since we set the variable earlier, but it doesn't seem to be picked up, so we write it in explicitly).
Run in the Terminal: gedit /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Find export JAVA_HOME=${JAVA_HOME} and change it to export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
The result after the change
Configuring hdfs-site.xml
This configures:
How many replicas of each file in HDFS to keep (the default is 3)
The NameNode storage location
The DataNode storage location
Run in the Terminal: gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml, then add inside the configuration element:
The editing screen
Modifying yarn-site.xml
This modifies the YARN settings. Run gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml in the Terminal and add inside the configuration element:
The last two properties, yarn.nodemanager.resource.cpu-vcores and yarn.nodemanager.resource.memory-mb, set the resources to be used. If things fail to run later, check these values against the resources allocated to the VM.
The screen after the changes
Modifying mapred-site.xml
This file does not exist by default, so it must first be copied from the template.
Enter in the Terminal:
Once it is open, change the configuration element to:
Configuration complete
Creating the directories used by HDFS
Enter in the Terminal:
This creates the folders HDFS needs.
Testing Hadoop
That completes the Hadoop installation and configuration; all that remains is to start it up.
Formatting HDFS
Run in the Terminal:
It may ask whether you are sure you want to continue; remember to enter yes.
Starting YARN and HDFS
Enter in the Terminal:
This is run on the Master and automatically starts all the Slaves as well, via ssh.
There are two other ways to start the services:
start-all.sh and stop-all.sh - these are deprecated, but are equivalent to running the two scripts above together
hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager - these manually start the corresponding service on each individual node
Checking that the processes started correctly
Run in the Terminal: jps
Checking the running services
You should see 5 services here:
NameNode
SecondaryNameNode
ResourceManager
NodeManager
DataNode
In a truly distributed architecture, the first three appear only on the Master and the last two only on the Slaves.
Checking that the Web UI works
Once the services have started successfully, you can enter the addresses in Firefox:
The ResourceManager screen
The NameNode screen
Running Hadoop's Hello World: WordCount
First, download WordCount2.jar and jane_austen.txt into Downloads.
The screen after downloading
Copy the file into Hadoop's HDFS by entering in the Terminal:
You can check the copied file with hadoop fs -ls /user/hduser/input.
Run the WordCount program: hadoop jar wordcount2.jar WordCount /user/hduser/input/jane_austen.txt /user/hduser/output
Running WordCount
Check the result: hadoop fs -cat /user/hduser/output/part-r-00000
The final word count result
If you want to run the computation again, you need to delete the output folder in HDFS first, otherwise it will not run. The command is: hadoop fs -rm -r /user/hduser/output
If something goes wrong or the services won't start, try rebooting and then redoing everything from the Formatting HDFS step in the Testing Hadoop section.
Conclusion
In this post we completed the whole Hadoop setup and ran a MapReduce word count program that counted the words of the first three chapters of Pride and Prejudice.
The Hadoop built here runs in so-called pseudo-distributed mode; in other words, the Master and Slave sit on the same machine, whereas in real operation one Master faces multiple Slaves.
Before moving into that distributed mode, however, we need to understand a few more Hadoop details.
The next post will introduce Hadoop's distributed mode in a bit more detail, including the corresponding processes in YARN and HDFS.
[07] A Deeper Look at YARN and HDFS in Hadoop
[What Exactly Is Data Science, From a Complete Outsider's Perspective]medium.com
Originally published at blog.alantsai.net on January 12, 2024.
|
[06] Setting Up a Hadoop Environment, Part 2
| 1
|
data-science-series-06-install-and-test-hadoop-part2-1e7b70f86b04
|
2018-02-10
|
2018-02-10 15:37:18
|
https://medium.com/s/story/data-science-series-06-install-and-test-hadoop-part2-1e7b70f86b04
| false
| 356
|
Study notes on data science
| null | null | null |
alantsai-datascience
| null |
alantsai-datascience
|
DATA SCIENCE,資料科學,R,MACHINE LEARNING,DATA ANALYSIS
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Alan Tsai
|
A back-end software engineer immersed in the .NET world who loves to share, and a member of the Study4 community in Taichung. Besides programming, I love novels. Blogs: http://blog.alantsai.net, http://ln.alantsai.net, http://gh.alantsai.net, http://ss.alantsai.net
|
629c608f5a00
|
alantsai
| 23
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-29
|
2018-08-29 08:54:12
|
2018-09-20
|
2018-09-20 10:54:31
| 5
| false
|
en
|
2018-09-20
|
2018-09-20 10:54:31
| 5
|
1e7b9c845ca2
| 8.625786
| 4
| 0
| 0
|
Welcome back y’all! It’s hard to tell from my writing, but that was said in a very thick American accent. All in the spirit of these blog…
| 5
|
FSDL Bootcamp — Day 2
Welcome back y'all! It's hard to tell from my writing, but that was said in a very thick American accent. All in the spirit of these blog posts, in which I describe my experience attending the full stack deep learning bootcamp at UC Berkeley, California, in the summer of 2018. This is the second instalment. Find a quick overview of the entire series here, and the previous blog post here.
I will describe the most important things from each lecture/lab and include a small takeaways section with the most important things I learned from that lecture or lab. It's definitely not meant as an exhaustive listing, but is rather focused on what I thought was most impressive and unique to this bootcamp.
Lecture 6 — Training & Debugging
The day started off great (SPOILER ALERT: it stayed great throughout the rest of the day too) with this lecture on training and debugging your network. It discussed many of the issues you face as a deep learning practitioner, such as poor model performance, which will often be caused by implementation bugs, misconfigured hyper-parameters, poor model-data fit or badly constructed datasets.
Very concrete and practical guide to working on your ML project
A very powerful workflow was then provided for assessing the issues in your machine learning project. Many parallels can be drawn with coding here, with which I am personally far more experienced. When coding up an algorithm, for example, you would start off with a few simple instances of the problem, adding complexity incrementally as you go, so that you get a better idea of what works and what doesn't. If you start off with a giant algorithm, or model in this case, it's much harder to tell where the fault lies when it doesn't work right from the very start.
A concrete example of this was provided with a simple architecture (LeNet for images), a default set of hyper-parameters, overfitting to a mini-batch of data, while possibly simplifying the problem.
For the actual implementation of your model, three key steps were identified:
Getting your model to run at all
Overfitting to a single batch
Comparing to a known result
The first one helps you figure out network construction issues such as incorrect tensor shapes, casting issues, and out-of-memory errors. The second one helps you figure out issues with your loss functions, data labeling, or learning rate configuration. If your model cannot overfit a single batch, there's no point going on to the actual dataset. This was very insightful and useful to me, especially as all the causes were meticulously described.
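To make the single-batch check concrete, here is a minimal sketch in PyTorch (my own illustration under assumed shapes, not the bootcamp's code):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fixed batch of fake 28x28 images and labels.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# If this loss does not approach zero, suspect the loss function,
# the labels, or the learning rate before touching the full dataset.
print(loss.item())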
It also discussed some of the best-known DL bugs you can run into when even the overfitting doesn't work, such as incorrect input normalisation or a misconfigured loss function.
Comparing to a known result relates to the baselines we learned about in Day 1. Whether your model is actually doing well depends not on your loss, which will often be quite arbitrary, nor even on your accuracy per se, but on how its accuracy compares to other solutions. By finding other solutions and comparing against them, you get a better sense of how well your model is actually performing. Possible known results were ranked from least to most useful: extremely simple baselines such as rudimentary statistical metrics (means, medians) as the least useful, up to official model implementations (AlexNet, etc.) on a dataset very similar to yours.
There was a lot more interesting stuff here on finding over- and under-fitting issues using bias and variance decomposition, methods of addressing these issues, and strategies for finding hyper-parameters.
Takeaways: Extremely insightful, practical and concrete lecture. Quite possibly my favourite one of the bootcamp. As with many practical guides you encounter, it all makes a lot of sense, but it's very helpful to have it written down concisely and to see it backed up with argumentation.
Lecture 7 — Infrastructure
GPUs, GPUs, Glorious GPUs. Ones you can touch, or a whole lot of them that you can't, tucked away safely somewhere in the cloud. This lecture gave a quick overview of the entire infrastructure space, but focused mostly on this question: should I have my GPUs on premises or in the cloud? Do I want easy scalability and far fewer system management issues, or do I want to invest in my own hardware, reducing costs significantly, assuming my utilisation rate is going to be high enough?
A comprehensive overview of the players in both the on-prem solution space (NVIDIA, Lambda Labs) and the cloud one (Google, Amazon, Microsoft and Paperspace) was provided.
Cost is often the most important consideration. Some interesting calculations were shown that, perhaps unsurprisingly, showed the on-prem solution to be quite a bit cheaper. The downside is that managing your resources is non-trivial, so they showed some solutions for this, ranging from simple spreadsheet logging to using Docker instances via Rise.ml (software built on top of Docker) to create and orchestrate containers, optimising resource utilisation.
A whole bunch of other solutions were discussed for managing distributed training (Horovod) and for managing and evaluating your experiments (TensorBoard, Losswise, Comet.ml and Weights & Biases). The final part of the lecture discussed all-in-one solutions that provide many or all of the above-mentioned features, such as Floyd, Paperspace, and CloudML. Interestingly enough, I didn't see any solutions that you could run on-prem. If you know of any, please leave a comment!
Takeaways: Consider the costs of managing your own hardware, make a back-of-the-envelope calculation on your usage costs and see if it’s worthwhile the hassle of getting your own hardware, considering also the nice extra features cloud environments have to offer.
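As a toy version of that back-of-the-envelope calculation (every number below is made up; plug in your own prices and expected utilisation):

cloud_price_per_hour = 3.0    # USD for a hypothetical cloud GPU instance
onprem_machine_cost = 9000.0  # USD for a hypothetical multi-GPU workstation
hours_per_year = 2000         # expected annual utilisation

cloud_cost_per_year = cloud_price_per_hour * hours_per_year
breakeven_years = onprem_machine_cost / cloud_cost_per_year
print(cloud_cost_per_year, breakeven_years)  # 6000.0 USD/year, pays off in ~1.5 years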
Lab 4 — Tooling
Back to work! In the tooling lab we continued working on our problem of classifying hand-written sentences using the EMNIST dataset. This time around we set it up to hook into the Weights & Biases account we had made before coming to the bootcamp.
Plots and graphs, and graphs and plots, and … figures!
I specialised in computer graphics & data visualisation back at TU Delft, so for a nerd I am pretty visual. I enjoy using TensorBoard, but it is definitely pretty limited if you want to drill deep into your model metrics and figure out what's happening. In comes W&B (see the gif above!). Basically, for every run you do, it uploads certain metrics to W&B, which visualises them for you. I especially loved the parallel coordinates plot. In fact I have used it myself in my academic work (shameless plug). This kind of plot visualises the relationships between data dimensions. It is especially adept at filtering out data and showing correlations (correlations appear as parallel lines!)
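For readers who have not used it, logging to W&B boils down to something like the following minimal sketch (the project name and the training stub are my own placeholders, not the lab's code):

import wandb

# Hypothetical stand-in for the lab's real training step.
def train_one_epoch(epoch):
    return 1.0 / (epoch + 1), min(0.99, 0.5 + 0.05 * epoch)

wandb.init(project="emnist-lines")  # "emnist-lines" is an assumed project name
for epoch in range(10):
    train_loss, val_acc = train_one_epoch(epoch)
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})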
Lecture 8 — Sequence Applications
This is a sequence. This is also a sequence. Sentences are sequences. As are audio files and basically anything time-related. This lecture discussed some advanced concepts of LSTMs like bi-directional LSTMs, attention and beam search. It also discussed the application of translation and audio synthesis. Very clear and concise lecture, but nothing you wouldn’t find across many other resources on the internet, so considering the word count, I will leave it at this.
Takeaways: Worth investigating if you want to know about the concrete applications of LSTMs, the details to consider, and how to solve sequence data related challenges.
Lab 5 — Experimentation
Here we got to use our fancy new visualisation tool to visualise some experimentation in a free form way. Sentences from the IAM dataset were given that constituted not just an image classification problem but also a sequential one. Using an LSTM and a basic CTC loss function we could try to solve it. Some pointers on how to improve this basic model were given. It was a nice way to get a feel for W&B and rather fun to see your changes being visualised in this way.
Guest Lecture 1—Andrej Karpathy
Fascinating talk on Software 2.0, Andrej's theory of a new type of software that is not coded up explicitly but is instead created automatically from data using machine learning. The concept itself is not new: image classification has gone from millions of lines of hand-crafted code for understanding what a face is to far less deep learning code that learns its own feature set for the classification task, at much better precision. What was very interesting about this talk was that the concept was extended to the entire software stack.
Andrej went on to mention that a lot of hand-crafted code at Tesla was used to process the input from the car sensors (cameras, radar, IMU, etc.) into steering and acceleration output. This whole stack of code is slowly being "eaten" by Software 2.0 code. Unfortunately no concrete examples of this type of code were given. I would have loved to see some! In any case, he did show the importance of labeling and how he sees the role of AI people as mostly facilitating those who label all the images. I found this a slightly depressing way of looking at the fancy AI field, but it's hard to argue with. As a bonus he showed us some fun examples of labeling issues at Tesla: weird road markings, strange traffic signs, etc. Pretty hilarious and insightful.
If only I would have won that sweet new Autopilot test ride …
Takeaways: Learning more about a philosophical view of where ML is headed was interesting, and seeing some real-world examples of the issues Tesla faces was great. By the way, the talk is quite similar to this one, if you can't wait for this publication.
Guest Lecture 2— Jai Ranganathan
Jai Ranganathan, a quirky little guy with a big career as Lead of Product at Uber, discussed the challenges his team faced when managing a model's lifecycle, in the specific context of a project automating the processing of user complaints.
The example project shown was COTA (Customer Obsession Ticket Assistant), a tool that leverages machine learning and natural language processing (NLP) to process user tickets more efficiently. The tool helps employees resolve these tickets by suggesting replies to users.
Most interesting were the lessons learned and the general pointers from each phase of the project.
Exploration: Identifying the right problem to solve and understanding whether ML is actually a good fit for it.
Development: Due to the huge and still increasing space of possible ML solutions, you are well advised to estimate how you weigh cost (compute time) against accuracy. It's important to keep up with the literature (they showed quite a few interesting cutting-edge techniques) and to validate your results using visualisation.
Deployment: Covered some interesting data engineering techniques; for example, a really nice Spark pipeline was shown. The main difficulty here was that deep learning is still very slow, but distributed DL solutions can definitely help.
Monitoring: Very important, but often overlooked. This deals with the fact that business is dynamic, which means your data is dynamic, which means your models may become outdated. So it's important to check for things like distribution shift in your data and retrain when necessary. It is also very important to keep your labeling process going and to identify edge cases where your model still fails.
Interesting side note: when we went for drinks later, he joined us and claimed that the real difficulty lies in finding good data engineers rather than data scientists. In fact he exclaimed that data scientists are a dime a dozen, which I thought was funny considering these people are also quite rare, though possibly not that hard to come by for a company like Uber.
Takeaways: I think the Exploration, Development, Deployment and Monitoring paragraphs should cover it nicely.
Conclusion
Oh what a perfect day. Well, nothing is perfect, but it was definitely very, very good. I felt that the bootcamp really kicked into gear today, showing a lot of the stuff many of us were craving to see: the practical nitty-gritty of debugging, coverage of the most important tools around, how to build your own DL setup, an almost philosophical lecture from Andrej Karpathy on the future of software, and a very practical, in-depth example project at Uber.
|
FSDL Bootcamp — Day 2
| 29
|
fsdl-bootcamp-day-2-1e7b9c845ca2
|
2018-09-20
|
2018-09-20 10:54:31
|
https://medium.com/s/story/fsdl-bootcamp-day-2-1e7b9c845ca2
| false
| 2,065
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Gerard Simons
|
Co-founder @ Captain AI. Machine Learning enthusiast, regular old computer scientist at heart. Publications in Computer Graphics and Data Visualisation.
|
a051fa9f6d5c
|
GerardSimons
| 16
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-09
|
2017-10-09 07:57:25
|
2017-10-09
|
2017-10-09 08:01:06
| 1
| false
|
en
|
2017-10-09
|
2017-10-09 08:01:06
| 3
|
1e7eee0c6d64
| 2.777358
| 5
| 0
| 0
|
We recently travelled to Bari in south Italy’s beautiful Puglia region to interview a team that uses machine learning on healthcare data…
| 3
|
AI researchers from Italy can spot Alzheimer disease years before symptoms appear
We recently travelled to Bari in south Italy's beautiful Puglia region to interview a team that uses machine learning on healthcare data from common medical devices to predict diseases years before doctors can diagnose them. The team analyzed brain scans related to Alzheimer's disease, a neurodegenerative disease that is the leading cause of dementia in the elderly. The race is on to diagnose the disease as early as possible. Although there is no cure, drugs in development are likely to work better the earlier they are given. Roberto Bellotti's team can identify changes in the brains of people likely to get Alzheimer's disease almost a decade before doctors can diagnose it.
Although not a startup yet, their research might be a good basis for spinning out a healthtech company in the near future. We sat down with Roberto Bellotti to learn more about his team's research.
How did you play with the data and what did you find?
We used a publicly available dataset of brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative database at the University of Southern California in Los Angeles. We designed and implemented a novel mathematical model, based on graph theory, to describe how different brain regions relate to each other, and then used this information to feed a supervised machine learning (ML) algorithm that distinguishes healthy controls from diseased subjects. The basic idea behind our approach is that the human brain is like a city whose neighbourhoods are connected through streets: neurodegenerative diseases disrupt brain connectivity, leaving signs that can be used to predict disease onset. ML was specifically used to select quantitative features arising from graph-based descriptions of the brain. Every subject was described by thousands of quantitative features; the most important ones were used to learn a classification model and then provide a "diagnostic", probabilistic score.
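As a rough sketch of this kind of pipeline (synthetic data and illustrative graph features of my own choosing, not the team's actual code):

import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def random_adjacency(n=20, p=0.2):
    # Symmetric binary "connectivity matrix" standing in for a real brain graph.
    a = np.triu(rng.random((n, n)) < p, k=1).astype(int)
    return a + a.T

def graph_features(adjacency):
    # A few simple graph-theoretic descriptors per subject.
    g = nx.from_numpy_array(adjacency)
    degrees = [d for _, d in g.degree()]
    return [np.mean(degrees), nx.density(g), nx.average_clustering(g)]

X = np.array([graph_features(random_adjacency()) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = healthy control, 1 = patient (fake labels)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict_proba(X[:1]))  # a probabilistic "diagnostic" score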
Rich pharma companies like to sell drugs rather than invest in solutions preventing diseases. Will they support you?
Well, more than prevention, our approach aims at early detection. Especially with Alzheimer's disease and other neurodegenerative diseases, early detection is of paramount importance. Rich pharma companies should massively invest in supporting this kind of research for two main reasons. Firstly, it makes no sense to develop drugs or therapies for patients whose neurological damage has already significantly impaired their reasoning or their brain's functionality, since restoring these appears out of reach; on the contrary, novel drugs could play a relevant role in the early stages of the disease by preventing the damage. Secondly, neurodegenerative diseases mostly affect elderly people, so the development of disease-modifying therapies, able to slow down the disease's progression even without stopping it, would allow the majority of patients and their families to find relief from the latest and most dramatic stages of the disease.
What is next? Will you somehow turn your project into real business?
Definitely. Our goal is now the development of a standalone application to support the use of this methodology in public and private healthcare institutions, or to ease its adoption by MRI scanner producers. In fact, this methodology could be adopted for different pathologies; for example, we already have clues of its effectiveness with Parkinson's disease, so it could become a routine diagnosis support system for a wide range of patients undergoing brain MRI exams.
How big is the market potential for a solution/product you might be able to offer?
There is a huge potential market. We are thinking of MRI scanner producers or healthcare providers. A versatile diagnosis support system could easily be adopted for different applications: for example, structural imaging, dealing with modelling the nervous system and detecting pathological conditions affecting it, and functional imaging, used to reveal brain connectivity in terms of its functions.
Are you looking to raise angel or VC funding?
It is premature, but raising venture capital cannot be excluded as an option in the near future.
Originally published at www.eu-startups.com.
|
AI researchers from Italy can spot Alzheimer disease years before symptoms appear
| 8
|
ai-researchers-from-italy-can-spot-alzheimer-disease-years-before-symptoms-appear-1e7eee0c6d64
|
2018-05-15
|
2018-05-15 07:15:42
|
https://medium.com/s/story/ai-researchers-from-italy-can-spot-alzheimer-disease-years-before-symptoms-appear-1e7eee0c6d64
| false
| 683
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Pavel Curda
|
Digital Marketer / Startup Advisor / FinTech Specialist / SaaS / Writer https://cz.linkedin.com/in/pavelcurda
|
f742d7bb1264
|
pavelcurda
| 2,123
| 2,825
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-24
|
2018-09-24 14:45:25
|
2018-09-24
|
2018-09-24 14:46:42
| 1
| false
|
pt
|
2018-09-24
|
2018-09-24 14:46:42
| 5
|
1e808263a77
| 1.913208
| 0
| 0
| 0
|
The bet is that artificial intelligence and machine learning could help doctors reach a diagnosis more easily and quickly
| 5
|
Could autism be diagnosed by a computer? Really?
The bet is that artificial intelligence and machine learning could help doctors reach a diagnosis more easily and quickly
Diagnosing autism is not an easy task. Science does not currently know a biomarker that would allow a simple test to detect Autism Spectrum Disorder (ASD). Yes, there is a British study underway on a blood test to detect autism, but it still needs to be tested and validated on many patients before yielding any conclusive result; in other words, it is only a possibility that will still require many years of testing and study. Today the diagnosis is clinical, made by a specialist physician, and in the United States it happens, on average, at four years of age. In Brazil, we have no numbers on this.
With the advances in artificial intelligence and machine learning, some researchers say the delay in diagnosing autism may shrink in the very near future. Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence, or AI, as it is globally known.
The bet rests on the most recent incarnation of machine learning, deep learning, whose methods and applications, according to specialists, have never been as effective at having a real clinical impact.
According to Martin Styner, associate professor of psychiatry and computer science at the University of North Carolina at Chapel Hill, in the United States, the power of deep learning comes from discovering subtle patterns, combinations of features, that at first may not seem relevant or obvious to the human eye. This makes it much better suited to capturing the heterogeneous nature of ASD. Where human intuition and statistical analyses might look for a single trait, possibly nonexistent, that consistently distinguishes all children with autism from those not on the spectrum, deep learning algorithms instead look for clusters of differences.
These algorithms, however, depend heavily on human "teaching". To learn new tasks, they "train" on datasets that typically include hundreds or even thousands of "right" and "wrong" examples, such as a child smiling or not, previously labeled by a person. With all this exhaustive, intensive "training", deep learning software has come to match the accuracy of human experts, and in some situations has even surpassed us flesh-and-blood humans.
Read the full text on the Tismoo Portal at: http://tismoo.us/tecnologia/autismo-poderia-ser-diagnosticado-por-um-computador-sera/
|
Could autism be diagnosed by a computer? Really?
| 0
|
autismo-poderia-ser-diagnosticado-por-um-computador-será-1e808263a77
|
2018-09-24
|
2018-09-24 14:46:42
|
https://medium.com/s/story/autismo-poderia-ser-diagnosticado-por-um-computador-será-1e808263a77
| false
| 454
| null | null | null | null | null | null | null | null | null |
Technology
|
tecnologia
|
Technology
| 8,759
|
Tismoo
|
We are a genetic testing laboratory focused on personalized medicine for Autism Spectrum Disorder and related syndromes. -- tismoo.us/portal
|
e97edce7d29
|
tismoo
| 272
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-09
|
2018-04-09 16:53:03
|
2018-04-09
|
2018-04-09 18:43:36
| 6
| false
|
en
|
2018-04-10
|
2018-04-10 15:37:36
| 16
|
1e818a59cfc1
| 2.346226
| 4
| 0
| 0
|
When I think of AI, it usually conjures up images from my favorite sci-fi movies and shows like 2001: A Space Odyssey, Her, and Westworld…
| 5
|
Research Roundup: Transforming Small Businesses with AI and Machine Learning
When I think of AI, it usually conjures up images from my favorite sci-fi movies and shows like 2001: A Space Odyssey, Her, and Westworld. However, AI and Machine Learning have been making an impact on society and the business landscape for some time now. The implications of these emerging technologies are vast, and probably unknowable (though not for lack of trying).
In my profession, I focus on connecting small business owners with advice and resources on how to use technology to help their organizations succeed. AI and Machine Learning tech is not new for industry leaders like Google and Apple, but what (if any) are the benefits for small business owners today? It can be challenging to find specific research geared towards SMBs. AI thought leaders often fall short of addressing this key segment.
Luckily, my colleagues at Software Advice, Capterra, and GetApp have written content geared towards an SMB audience, whether you’re a new startup or mom and pop shop looking to leverage AI and Machine Learning. Read on!
Four Predictions for AI That SMBs Can Bank On
by Craig Borowski
In this article, discover four big near-term predictions for AI technology, and learn what they mean for SMBs like yours.
The Savvy Small Business Guide to Machine Learning vs. Artificial Intelligence
by Geoff Hoppe
Machine learning vs. artificial intelligence: is it even a battle? Whatever it is, machine learning can benefit your small business.
6 lessons learned from early AI projects
by Lauren Maffeo
Are you thinking about using AI for project management at your SMB? These lessons from early AI projects will set you up for success.
What Small Businesses Need to Know About AI in the Supply Chain
by Lisa Hedges
Artificial intelligence looms large on the supply chain horizon, but we’re here to help you understand how it’s going to affect your business
I hope you found this content helpful and informative. Have you read any good pieces on AI and SMBs? Does your business use AI or Machine Learning tech to automate processes or gain insights to improve your services/products? If so, I’d love to hear from you, please comment below!
|
Research Roundup: Transforming Small Businesses with AI and Machine Learning
| 4
|
research-roundup-transforming-small-businesses-with-ai-and-machine-learning-1e818a59cfc1
|
2018-04-12
|
2018-04-12 08:47:53
|
https://medium.com/s/story/research-roundup-transforming-small-businesses-with-ai-and-machine-learning-1e818a59cfc1
| false
| 370
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Madeline Enos
|
Digital Marketing & SEO at @softwareadvice and @Gartner — all views my own. I write about tech, careers, and women in STEM. http://www.madelineenos.com/
|
7fe8b411cb90
|
madelineenos
| 132
| 138
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f702855ffe47
|
2017-11-20
|
2017-11-20 00:01:55
|
2017-11-20
|
2017-11-20 00:01:56
| 6
| false
|
en
|
2017-11-20
|
2017-11-20 00:01:56
| 10
|
1e81ae5f8a90
| 2.033019
| 0
| 0
| 0
| null | 3
|
Should we be afraid of Artificial Intelligence?
# medium.com
Today I saw a video, which I share below. Watch it and then, if you like, keep reading.
10 Important Signs Your Body Is Asking For Help
# medium.com
“The human body and mind are tremendous forces that are continually amazing scientists and society. Therefor…
Computer Vision by Andrew Ng — 11 Lessons Learned
# towardsdatascience.com
Created in week 4 of the course. Combined Ng’s face with the style of Rain Princess by Leonid Afremov. I rec…
“ Instead, we specify some constraints on the behavior of a desirable program (e.g.,
# medium.com
“ Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input outp…
Neural Net: Dog Version 1.0
# medium.com
“It works.” Two words used to encompass the experience of the simplest of things — the feeling of warmth tha…
Are Humans the Most “Evolved” Species?
# posts.philipkd.com
The most common trope in biology debates is anthropocentrism versus non-anthropocentrism: “Humans must be de…
JScott Monthly Missive: November Edition
# medium.com
Discoveries The Moral Machine recommended by Dave O. If you were the programmer for a self-driving car, how …
Einstein Bots: Using AI to deliver the Future of Customer Service
# chatbotsmagazine.com
Or, Why are We hot for Chatbots? Why now? What happened at Dreamforce’17? Two weeks ago at Dreamforce 2017, …
The difference is the nature of the technology in question.
# medium.com
The difference is the nature of the technology in question. The problem the industrial revolution solved was…
7 metrics for monitoring your chatbot’s performance
# venturebeat.com
GUEST: Researchers estimate we will speak to chatbots more than we speak to our spouses by 2020. Obviously, …
|
10 new things to read in AI
| 0
|
10-new-things-to-read-in-ai-1e81ae5f8a90
|
2018-05-28
|
2018-05-28 07:36:24
|
https://medium.com/s/story/10-new-things-to-read-in-ai-1e81ae5f8a90
| false
| 287
|
AI developments around the world
| null | null | null |
AI Hawk
|
aihawk1089@gmail.com
|
ai-hawk
|
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
| null |
Deep Learning
|
deep-learning
|
Deep Learning
| 12,189
|
AI Hawk
| null |
a9a7e4d2b403
|
aihawk1089
| 15
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
a2ccece685c2
|
2018-08-14
|
2018-08-14 08:04:48
|
2018-08-14
|
2018-08-14 07:40:11
| 5
| false
|
en
|
2018-08-16
|
2018-08-16 08:45:33
| 9
|
1e8207f3e3c2
| 4.931447
| 26
| 1
| 0
|
The architects of UX are facing a daunting new challenge:
| 5
|
AI Has Become a Top UX Design Challenge: Here’s How to Solve it
The architects of UX are facing a daunting new challenge:
How do you integrate AI into the seamless user experience you’ve strived to create?
More specifically, how should users view AI-generated content? How can you make it engaging without being too obtrusive? How can you demonstrate that AI-generated information is credible?
The key to creating a positive user experience around AI is trust. If they don’t trust it, your users will not see your product as valid or reliable.
Building AI software may not fall under UX designers’ job description, but integrating it into the user experience has become a principal UX design challenge.
Gain users’ trust while providing the optimal UX. Discover How.
Why people love and fear AI
Although AI is a major buzzword, most people don’t understand how it works.
It makes sense why not. The media covers AI in somewhat elusive, abstract terms. Even tech publications tend to talk about it without really explaining what it is. (You can find a good explainer in this article from Futurism.)
Because of this, AI represents a type of mystical, inexplicable phenomenon. It captivates audiences because it appeals to their sense of wonder.
But a thin line separates this wonder from fear.
That’s why movies tend to portray AI as such a threatening force. Nightmares of a future where robots dominate human society are a recurring theme (think: I, Robot).
The reason AI can be so fascinating and scary at the same time boils down to lack of trust.
What factors determine trust in AI?
Trust between humans is based on different criteria than trust between humans and machines.
Humans determine whether or not to trust one another based on factors like reliability, sincerity, competence, and intent.
Humans’ trust in machines depends on accuracy, consistency, and fallibility. For many people, trust depends on their ability to interpret how the system works.
AI complicates this.
It possesses human reasoning and cognitive faculties, yet it lacks interpretability. It can be configured to output insights beyond human computability, free of human error. At the same time, a flawed algorithm could lead a machine to learn to do the wrong thing. In some situations, the consequences could be catastrophic.
Lack of trust in AI is the biggest UX design challenge
AI has become a central component of many companies' customer strategies. It enables greater personalization, more access to support, and faster service — all of the tenets of a positive customer experience.
But if you want your users to have a positive experience with your AI technology, they have to be able to trust it.
How do you do that? Through better UX.
Here are five ways to improve user trust in AI.
1. Clue your users into your AI
You don’t need to delve into the complexities that belie the AI and machine learning capabilities your product uses. But you can raise user trust if you let them in on the data your algorithms use to produce predictions and recommendations.
For example, Netflix uses AI to curate a list of recommended shows and movies for subscribers based on what they’ve already watched.
It doesn’t label this category as “AI-Generated Recommendations,” but it does call it “Because You Watched Californication,” “Because You Watched Mad Men,” etc. It solves this UX design challenge by using the title to tell users how it came up with its suggestions.
2. Don’t disguise your AI
As AI technology develops, it will be able to take on more human tasks with higher levels of accuracy and efficiency. But you will completely undermine the trust of your users if you try to make them believe they are communicating with a human when it’s really a computer.
On your website, you can differentiate AI-generated content with icons or labels.
It’s important to make this distinction on the phone, too. As natural language processing technology improves, callers will be able to speak normally to automated customer support reps. But deceiving them into believing they are talking to a human will erode their trust and create a frustrating experience.
3. Don’t make it too presumptuous
Your AI must respect that in the human-to-machine relationship, the human is boss.
One UX design challenge is drawing attention to and building confidence in AI-generated content without making it too pushy. Spotify does a great job of this. It uses a subscriber’s listening history to create a playlist of new music he or she might like.
The playlist appears at the top of your “Browse” page, but it’s not obtrusive. And the title works — “Discover Weekly.” It’s not called “Your New Favorite Tunes.” If it were, users would be more interested in challenging it than embracing it.
4. Set realistic expectations
AI can accomplish a lot, but it may not be able to do everything your users think. You’ll frustrate them, lower your credibility, and even endanger them if you create the impression that your technology can do something it actually cannot.
Setting accurate expectations for your AI is critical. The stakes here can be dire, as evidenced by the fatal car accident involving Tesla’s autopilot feature in March 2018.
Despite receiving several visual and one audible warning to put his hands on the wheel, a California man crashed into a highway median and died after using autopilot on his Model X. Tesla’s investigation into the accident found the driver had his hands off the wheel for six seconds prior to the collision.
Unlike “self-driving cars,” drivers are supposed to keep their hands on the wheel and pay attention while using Tesla’s autopilot feature. The driver might have thought the car could handle driving itself. In this case, his expectations exceeded what Tesla’s AI could actually do.
5. Make sure your AI adds a layer of emotional intelligence
Improving usability is a constant UX design challenge. Your company’s product might offer great value to your customers, but that hardly matters if they struggle to use it.
One way to improve UX is focusing on EQ, or emotional intelligence. You can do this by adding tools that use AI to anticipate where user frustration will occur. By proactively sending help — such as real-time contextual guidance — you can prevent a negative emotional response and protect the user experience.
AI should boost your UX, not hurt it. See how WalkMe can help.
Originally published at blog.walkme.com on August 14, 2018.
|
AI Has Become a Top UX Design Challenge: Here’s How to Solve it
| 153
|
ai-has-become-a-top-ux-design-challenge-heres-how-to-solve-it-1e8207f3e3c2
|
2018-08-16
|
2018-08-16 08:45:33
|
https://medium.com/s/story/ai-has-become-a-top-ux-design-challenge-heres-how-to-solve-it-1e8207f3e3c2
| false
| 1,086
|
Exploring topics of interest for organizations undergoing digital adoption, such as: digital transformation, training and onboarding, user experience, and the customer experience.
| null |
walkme
| null |
Digital Adoption 101
|
walkmeteam@gmail.com
|
digitaladoption101
|
DIGITAL TRANSFORMATION,CUSTOMER EXPERIENCE,ENTERPRISE SOFTWARE,USER EXPERIENCE,PRODUCT DESIGN
|
walkmeinc
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Megan Wilson
|
UX enthusiast who loves to share and discover innovative design content. Lead UXer at @WalkMeInc!
|
119107850ec4
|
megan.w
| 2,448
| 5,536
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-01
|
2017-09-01 08:23:05
|
2017-09-04
|
2017-09-04 09:52:45
| 4
| false
|
th
|
2017-09-05
|
2017-09-05 00:55:11
| 10
|
1e84bc19e9e7
| 2.39434
| 35
| 0
| 0
|
Learn the mindset of world-class data scientists
| 1
|
Going Pro in Data Science
Learn the mindset of world-class data scientists
Who is the number-one data scientist in your mind?
Many people may think of names like Andrew Ng, Geoffrey Hinton, DJ Patil, Peter Norvig, Sebastian Thrun and many more (read the stories of 24 Ultimate Data Scientists at this link). Data scientists come from many different disciplines, mostly the hard sciences such as physics, biology, computer science, applied mathematics, and so on.
But there are also many people we can wholeheartedly call data scientists because of the work they have done, not the degrees they hold, like the person we will talk about today: Nate Silver, author of the epic, top-shelf book on prediction, The Signal and The Noise (2012).
Looking for the signal in the mess of data
Nate Silver (age 39) used public information to predict election results with remarkable accuracy
Silver earned a bachelor's degree in economics from the University of Chicago in 2000 and went on to build an impressive track record in both sports and political analytics, especially his uncannily accurate predictions of U.S. election results.
Silver did not build complex models and did not use much advanced math and statistics, because his goal was to find insights that help us make the right decisions in a real world that is full of noise.
Silver has shown time and again that using simple heuristics to guide decisions can beat wasting time on building complex models (the more complex, the higher the risk of overfitting).
As data grows, the noise in it keeps growing as well. The problem is that noise grows faster than the true signal, so finding the truth in data today is much harder than it used to be.
As the world moves into the Big Data era, the variables in our models grow to tens or hundreds of thousands (back in the day, even a few dozen variables were utterly confusing!), and statistical significance becomes very easy to find, sometimes perhaps too easy?
The phrase we like to repeat, "correlation does not imply causation", has become vastly more important in this era. Statistical significance is almost meaningless if the result cannot actually be used (i.e., it has no practical significance) or if it is merely an illusion (a spurious correlation).
The importance of the Scientific Method
Albert Einstein: One of the greatest scientists in human history
[Data] scientists are, put simply, scientists. But instead of investigating physics, gravity, stars and space like Einstein, data scientists choose to be experts in data.
They form hypotheses, gather evidence, and draw conclusions, just like scientists in any other field, using the scientific method in their work.
The scientific method starts with asking a question, or forming a hypothesis that can be tested (falsifiable/testable), as Karl Popper proposed some 50 years ago.
Science is a way of thinking much more than it is a body of knowledge (Carl Sagan)
For example: if we launch campaigns X, Y and Z, our company's sales revenue will increase by 50%.
Each campaign is a hypothesis to be tested. Data scientists start collecting data, analyze it, and decide whether the evidence is strong enough to confirm or reject each hypothesis, starting with X, then Y, then Z...
There can be millions of hypotheses, and progress in a data scientist's work means eliminating, one by one, the hypotheses that do not make sense (in the real world), or the ones that tests show have no real significance for the question under study.
And that is the definition of success for data scientists today.
Finding True Signal, True Significance, True Knowledge
What skills [really] matter for data scientists?
A more pragmatic view of the required data science skills by Jerry Overton (2016)
I believe many of you have already seen Drew Conway's DS Venn Diagram, and many lose heart when they see they are supposed to master both CS and math/statistics at the same time.
The truth, however, is that this classic Venn diagram has misled a lot of people.
Do you need a computer science degree to start doing data science? The answer is no.
What data scientists do need is skill #1: Professional Data Science Programming.
There are many kinds of programming. Anyone serious about data science needs to understand the data workflow, from importing and cleaning (transforming) the data to feature engineering and feature selection, and so on. The most popular languages are inevitably R and Python, because they have libraries built specifically for data science work.
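A toy version of this workflow in Python (column names and data are invented for illustration):

import pandas as pd

# Stand-in for data imported from a real source.
df = pd.DataFrame({"revenue": [100.0, None, 250.0], "visits": [10, 5, 20]})

df = df.dropna(subset=["revenue"])                      # clean / transform
df["revenue_per_visit"] = df["revenue"] / df["visits"]  # feature engineering
X = df[["revenue_per_visit"]]                           # feature selection
print(X)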
#2 The second skill is Evaluating Hypotheses [in place of Math & Statistics]. Are math and statistics necessary? Absolutely, if we want to understand how most ML algorithms work or to read research papers with real understanding.
But in real life, asking questions and finding evidence to answer them matters far more than math/stats knowledge. Look at Nate Silver: he has no CS degree and never studied political science, yet he built models that predicted the election correctly in 49 of 50 states in 2008 and in all 50 states in 2012. Seriously impressive!
#3 The last skill is Agile Experimentation [in place of Domain Expertise]. Data scientists work in teams, not only with data engineers but with subject matter experts (SMEs) elsewhere in the organization: marketing, sales, HR, finance, and so on. The real core duty of data scientists is to engage with the other business units in the company and to experiment with new ideas using the scientific method and mindset.
Conclusion
DS programming, agile experimentation and evaluating hypotheses are what really matter.
The scientific method is the heart of how data scientists work.
Asking good questions, collecting data that truly represents the problem, and eliminating hypotheses that do not make sense one by one: that is the definition of success for data scientists today.
The end goal of the DS workflow is to turn data into actionable insights that can actually be used and that benefit the company, society, the country, and humanity at large.
References
Going Pro in Data Science (Jerry Overton, 2016)
|
Going Pro in Data Science
| 56
|
going-pro-in-data-science-1e84bc19e9e7
|
2018-05-02
|
2018-05-02 17:37:57
|
https://medium.com/s/story/going-pro-in-data-science-1e84bc19e9e7
| false
| 449
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Kasidis Satangmongkol
|
I’m just somebody who’s truly passionate about data science.
|
158b88ccebd8
|
kasidissatangmongkol
| 568
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-08
|
2017-11-08 04:29:53
|
2017-11-08
|
2017-11-08 04:39:19
| 1
| false
|
zh-Hant
|
2017-11-08
|
2017-11-08 04:39:19
| 3
|
1e8526bb9833
| 1.392453
| 1
| 0
| 0
|
Seattle biotech venture capital firm Frazier Healthcare Partners recently closed a new $419 million round for its life sciences investment fund, to be used for investing in early-stage startups
| 4
|
Seattle company Intellectual Ventures to launch an AI-powered microscope to speed up clinical diagnosis of malaria and other infectious diseases; Shenzhen personal-health startup iCarbonX raises $600 million in new funding to offer personalized high-tech health checkup services
Seattle biotech venture capital firm Frazier Healthcare Partners recently closed a new $419 million round for its life sciences investment fund, to be used for investing in early-stage startups
Signs of malaria as purple dots. (Photo Credit:Intellectual Ventures)
Seattle-based company Intellectual Ventures is set to debut its malaria-hunting AI-powered microscope, a result of the Global Good program, a collaboration involving Intellectual Ventures and Microsoft co-founder Bill Gates.
The software-enabled microscope uses machine learning to diagnose the signs of malaria on microscope slides, and with the right configuration can also detect other maladies ranging from leishmaniasis and Chagas disease to some forms of cancer. So far, field trials have been conducted at the Thailand-Myanmar border and elsewhere, with encouraging results. (Geek Wire)
Shenzhen-based personal-health company iCarbonX (ICX) has raised $600 million in funding to offer personalized health services. The firm plans to collect as much personal health data as possible, such as DNA sequences, step counts, heart rate, sleep patterns, blood tests for cholesterol and glucose levels, EKG data, and personal medical history, aiming to continuously monitor people's health and suggest ways to maintain or improve it.
ICX is investing in or acquiring companies that might contribute to the company's vision. This includes a $161M stake in SomaLogic, $100M in PatientsLikeMe, $40M in AOBiome, and recently an investment in HealthTell, as well as collaborations with several companies in China. ICX also acquired Israeli company Imagu Vision Technologies (now iCarbonX-Israel), which is building an AI system that not only analyzes data but offers ways to help people improve their health, such as by altering their diet. (MIT Technology Review)
The health-focused, Seattle-based venture capital firm Frazier Healthcare Partners has raised $419 million for its life sciences investment fund. Two-thirds of that money will fund early-stage startups, focusing on companies making new drugs or other treatments.
Its portfolio companies have brought 31 new treatments to market over the past 26 years; the firm has raised nearly $3.4 billion to date. (Geek Wire)
|
Seattle company Intellectual Ventures to launch an AI-powered microscope to speed up clinical diagnosis of malaria and other infectious diseases; Shenzhen personal-health startup iCarbonX raises $600 million in new funding to offer personalized high-tech health checkup services
| 1
|
西雅圖公司intellectual-ventures將推出人工智慧顯微鏡-以加速瘧疾及其他傳染病的臨床診斷-深圳個人健康新創-碳雲智能-日前募得6億美元的新一輪融資-將用來提供個人化的高科技健檢服務-1e8526bb9833
|
2017-11-08
|
2017-11-08 06:22:54
|
https://medium.com/s/story/西雅圖公司intellectual-ventures將推出人工智慧顯微鏡-以加速瘧疾及其他傳染病的臨床診斷-深圳個人健康新創-碳雲智能-日前募得6億美元的新一輪融資-將用來提供個人化的高科技健檢服務-1e8526bb9833
| false
| 316
| null | null | null | null | null | null | null | null | null |
Venture Capital
|
venture-capital
|
Venture Capital
| 32,826
|
The Health Prospect 看健未來
|
We follow key developments in the global healthcare industry and market, policies and regulations in major Asia-Pacific, European and American countries, and the progress of startups that combine digital technology to deliver smart healthcare services and products.
|
587ecff538e9
|
The.Health.Prospect.
| 102
| 5
| 20,181,104
| null | null | null | null | null | null |
0
|
P(s) = P(s) + a*(P(s')-P(s))
| 1
| null |
2018-08-02
|
2018-08-02 17:01:20
|
2018-08-05
|
2018-08-05 09:25:05
| 1
| false
|
en
|
2018-08-05
|
2018-08-05 09:25:05
| 1
|
1e85a7ede869
| 2.573585
| 6
| 1
| 0
|
Reinforcement learning puts idea of gaining maximum reward after performing some action from environment. This learning method is very…
| 4
|
Reinforcement Learning: Basic Tic-Tac-Toe Implementation
Reinforcement learning is built around the idea of gaining maximum reward from the environment after performing some action. This learning method is very different from common machine learning methods such as supervised learning, where learning is done from a training set of labelled examples provided by a knowledgeable external supervisor, and unsupervised learning, where the main idea is finding structure in a collection of unlabelled data.
Idea behind reinforcement learning :P
A basic tic-tac-toe game, since it has a finite number of states, has a very simple implementation and can even be solved with dynamic programming alone, without any machine learning. But precisely because the problem is so simple, it gives a good feel for tackling problems with larger or infinite state spaces using reinforcement learning.
Elements of Reinforcement Learning
Policy the learning agent’s way of behaving at a given time.
Reward Signal defines the goal in a reinforcement learning problem.
Value Function the value of a state is the total amount of reward an agent can expect to accumulate over the future, starting from that state.
Model of the environment this is something that mimics the behaviour of the environment.
This implementation of the tic-tac-toe game starts with finding all the states of the game, assuming the learning agent always goes first and uses 'X'. A basic recursive function can do the task, where the base condition is a decisive result, i.e. a winner.
Creating states and storing probabilities
Here current is a list of '.', 'X', 'O' where '.' marks the empty spaces. Every time we enter the function we check for a decisive condition, and if the current state is decisive we return without going any further. This function generates around 6000 different states of the game, and the value of each state is either 0 (the learning agent has lost the game), 1 (the learning agent has won) or 0.5 (a 50% chance of winning or losing from that state). The variable table stores every state together with the probability of winning from it.
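Since the original snippet is shown as an image, here is a hedged reconstruction of the idea in Python (function and variable names are my own):

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def generate_states(board, player, table):
    key = tuple(board)
    if key in table:
        return
    w = winner(board)
    if w is not None:
        table[key] = 1.0 if w == 'X' else 0.0  # decisive: agent won or lost
        return
    table[key] = 0.5  # undecided state: 50% chance by default
    for i, cell in enumerate(board):
        if cell == '.':
            board[i] = player
            generate_states(board, 'O' if player == 'X' else 'X', table)
            board[i] = '.'

table = {}
generate_states(['.'] * 9, 'X', table)
print(len(table))  # a few thousand reachable states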
Every human move, i.e. 'O', is followed by the agent's move, i.e. 'X', as long as there is no result yet. The learning agent picks the next move by going through all the possible moves from the current state and choosing the move/state with the highest winning probability. Sometimes a random move is picked instead, allowing the learning agent to visit states it hasn't been in before, even when they are not the optimised choice.
Next move by learning agent
The new state, i.e. the new move, updates the value of the learning agent's previous state using the formula
where a is a small learning rate, P(s') is the winning probability of the new state and P(s) is the winning probability of the old state. This builds up a value function that can later be consulted to maximise rewards.
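In code, the update is just a one-liner taken from the formula above (a minimal sketch; the state keys and learning rate value are illustrative):

def update_value(table, s, s_next, a=0.1):
    # Move P(s) a small step toward P(s'), per P(s) = P(s) + a*(P(s')-P(s)).
    table[s] = table[s] + a * (table[s_next] - table[s])

table = {"s": 0.5, "s_next": 1.0}
update_value(table, "s", "s_next")
print(table["s"])  # 0.55: the old state now looks slightly more promising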
After implementing these basic functions and playing against the learning agent several times, you will start to see that it tries to counter your moves, blocks you, and sometimes attacks when you make a mistake or a wrong move.
We can’t keep on playing with learning agent infinity but can implement two bots that can play against each other and produce results and help the learning agent to learn. Full code including agent vs player and agent vs bot can be viewed on github.
Benefit of reinforcement learning over game theory logic
Game theory logic always assumes that both players play optimally, so it cannot exploit the fact that the other player has made a wrong move. The reward system, which in the case of tic-tac-toe is winning the game, helps the agent make decisions and choose paths by interacting with the environment and growing.
Game against human and bot
|
Reinforcement Learning: Basic Tic-Tac-Toe Implementation
| 20
|
reinforcement-learning-basic-tic-tac-toe-implementation-1e85a7ede869
|
2018-08-05
|
2018-08-05 09:25:06
|
https://medium.com/s/story/reinforcement-learning-basic-tic-tac-toe-implementation-1e85a7ede869
| false
| 629
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Mayank
| null |
320858b2768
|
mayankk.co
| 4
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d777623c68cf
|
2016-12-07
|
2016-12-07 10:51:37
|
2016-12-07
|
2016-12-07 11:14:38
| 5
| false
|
en
|
2017-10-13
|
2017-10-13 15:29:35
| 9
|
1e874a73108f
| 3.086164
| 49
| 2
| 0
|
One of the great biases that Machine Learning practitioners and Statisticians have is that our models and explanations of the world should…
| 4
|
The Only Way to make Deep Learning Interpretable is to Have it Explain Itself
One of the great biases that Machine Learning practitioners and Statisticians have is that our models and explanations of the world should be parsimonious. We’ve all bought into Occam’s Razor:
Among competing hypotheses, the one with the fewest assumptions should be selected.
However, does that mean that our machine learning models need to be sparse? Does that mean that true understanding can only come from closed-form analytic solutions? Do our theories have to be elegant and simple?
Yann LeCun, in a recent Facebook post commenting on a thesis about "Deep Learning and Uncertainty", points to a 1987 paper by his colleagues at Bell Labs titled "Large Automatic Learning, Rule Extraction, and Generalization". This paper emphasizes the problem:
When a network is given more resources than the minimum needed to solve a given task, the symmetric, low-order, local solutions that humans seem to prefer are not the ones that the network chooses from the vast number of solutions available; indeed, the generalized delta method and similar learning procedures do not usually hold the "human" solutions stable against perturbations.
One of the probable reasons why Deep Learning requires an inordinate number of iterations and amount of training data is that we seek Occam's Razor, that sparse solution. What if, however, the solution to unsupervised learning (aka Predictive Learning) lies in embracing randomness?
Let’s table the proof of this for a later time, and assume its validity for argument’s sake. That is, randomness is the natural equilibrium state (is it not obvious?). What this implies is that the model parameters will be completely random and interpretability will be completely hopeless. Unless of course, we can ask the machine to explain itself!
I was about to end this post with the last paragraph, but I thought that some examples may help explore this idea much more thoroughly.
Stephen Merity (MetaMind) has a detailed examination of Google’ Neural Machine Translator (GNMT) that is worth a read. The interesting thing about GNMT is that Google headlines this as “Zero-Shot Translation”:
Credit: Google
This zero-shot capability here refers to the capability of this machine to learn for example a Japanese to English translation even if it was never trained with this particular translation pair! To quote them:
This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network.
Will we perhaps be able to decipher this new "interlingua" or "esperanto" that this machine created? Do we have a priori ideas of how this interlingua is supposed to look, and could we perhaps perform a kind of regularization to make it more interpretable for humans? Would insisting on interpretability lead to a less capable translator? Are Vulcan mind-melds necessary?
It just seems that we should leave the representation as it is and use the machine to perform the translation into English. In fact, that is already what it currently does. We don’t need some new kind of method to interpret the representation. The capability is already baked in there.
This is in fact what the folks at MIT, who have researched about “Making computers explain themselves”, have done:
Credit: MIT
They’ve trained their network to learn how to explain itself.
Update: Here are some slides from DARPA project XAI exploring explainability.
The Deep Learning AI Playbook: Strategy for Disruptive Artificial Intelligence
If you were able to grok this article, then feel free to join the conversation at this LinkedIn group: https://www.linkedin.com/groups/8584076
|
The Only Way to make Deep Learning Interpretable is to Have it Explain Itself
| 187
|
the-only-way-to-make-deep-learning-interpretable-is-to-have-it-explain-itself-1e874a73108f
|
2018-06-03
|
2018-06-03 15:45:03
|
https://medium.com/s/story/the-only-way-to-make-deep-learning-interpretable-is-to-have-it-explain-itself-1e874a73108f
| false
| 597
|
Deep Learning Patterns, Methodology and Strategy
| null |
deeplearningpatterns
| null |
Intuition Machine
|
info@intuitionmachine.com
|
intuitionmachine
|
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DESIGN PATTERNS
|
IntuitMachine
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Carlos E. Perez
|
Author of Artificial Intuition and the Deep Learning Playbook — Intuition Machine Inc.
|
1928cbd0e69c
|
IntuitMachine
| 20,169
| 750
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-13
|
2018-08-13 09:20:28
|
2018-08-13
|
2018-08-13 09:24:57
| 0
| false
|
en
|
2018-08-13
|
2018-08-13 09:24:57
| 1
|
1e87d658eeaf
| 1.607547
| 0
| 0
| 0
|
[PDF] Download Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition Ebook |…
| 1
|
eBOOK @PDF Python Machine Learning Machine Learning and Deep Learning with Python scikit-learn and TensorFlow 2nd Edition DOWNLOAD
[PDF] Download Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition Ebook | READ ONLINE
Download at http://ebookcollection.space/?book=1787125939
Download Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition read ebook Online PDF EPUB KINDLE
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition pdf download
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition read online
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition epub
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition vk
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition pdf
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition amazon
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition free download pdf
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition pdf free
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition pdf Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition epub download
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition online
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition epub download
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition epub vk
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition mobi
Download Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition PDF — KINDLE — EPUB — MOBI
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition download ebook PDF EPUB, book in english language
[DOWNLOAD] Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition in format PDF
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition download free of book in format PDF
#book #readonline #ebook #pdf #kidle #epub
|
eBOOK @PDF Python Machine Learning Machine Learning and Deep Learning with Python scikit-learn and…
| 0
|
ebook-pdf-python-machine-learning-machine-learning-and-deep-learning-with-python-scikit-learn-and-1e87d658eeaf
|
2018-08-13
|
2018-08-13 09:24:57
|
https://medium.com/s/story/ebook-pdf-python-machine-learning-machine-learning-and-deep-learning-with-python-scikit-learn-and-1e87d658eeaf
| false
| 426
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Areli
| null |
7b8a28ba48d9
|
3mta
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-21
|
2018-09-21 16:18:48
|
2017-10-01
|
2017-10-01 00:00:00
| 3
| false
|
en
|
2018-10-09
|
2018-10-09 18:32:23
| 1
|
1e88119809ec
| 1.380189
| 0
| 0
| 0
|
In this multi-part post, I will provide simple overview of some of the commonly used algorithms in machine learning classification…
| 4
|
High Level Overview of Classification Algorithms Used in ML — Part 5/5
In this multi-part post, I will provide a simple overview of some of the commonly used algorithms in machine learning classification problems, along with some pros and cons of using them. This is the 5th and final post of this series.
8) Artificial Neural Networks (ANNs)
ANNs are vaguely inspired by the biological neural networks that make up animal brains. Mathematically, they are models built from a series of "logistic regression" layers with a "linear regression" layer at the end. If we have the following linear and logistic regression mathematical models:
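In standard notation (a reconstruction, since the original formula images are not reproduced here), these are typically written as:
y = w^T x + b (linear regression)
ŷ = σ(w^T x + b), with σ(z) = 1 / (1 + e^(−z)) (logistic regression)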
a 3-layer ANN can look like:
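One representative form (again reconstructed, using the tanh activation mentioned below) is:
ŷ = w3^T tanh(W2 tanh(W1 x + b1) + b2) + b3
where each tanh(W x + b) stage plays the role of a "logistic regression" layer and the final affine map is the "linear regression" layer.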
With different activation functions (tanh in the above example), different connection schemes, and varying numbers of layers and neurons per layer, ANNs explode into a huge and fascinating field! Maybe the topic of future posts!
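A minimal NumPy sketch of such a forward pass (hypothetical sizes; tanh hidden layers with a linear output, mirroring the description above):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 inputs, two hidden layers of 8 units, 1 output
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=8), 0.0

def forward(x):
    h1 = np.tanh(W1 @ x + b1)   # first "logistic-regression-like" layer
    h2 = np.tanh(W2 @ h1 + b2)  # second hidden layer
    return w3 @ h2 + b3         # final "linear regression" layer

print(forward(rng.normal(size=4)))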
Pros of ANNs:
They form the basis of state-of-the-art models that handle very complex problems, and they gave rise to the field of deep learning, which has achieved significant gains over other approaches.
Cons of ANNs:
Need pre-processing of the data
Large and complex models require significant training time, data, and computational resources to build
Not a good choice when features are of very different types
Originally published at assawiel.com/blog on October 1, 2017.
|
High Level Overview of Classification Algorithms Used in ML — Part 5/5
| 0
|
high-level-overview-of-classification-algorithms-used-in-ml-part-5-5-1e88119809ec
|
2018-10-09
|
2018-10-09 18:32:23
|
https://medium.com/s/story/high-level-overview-of-classification-algorithms-used-in-ml-part-5-5-1e88119809ec
| false
| 220
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Nezar Assawiel
|
Machine Learning Developer. Innovative Tech Lover. (Just imported my blog to Medium! Follow for great discussions)
|
3799981825a2
|
nezar.a
| 2
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-26
|
2018-04-26 19:19:02
|
2018-04-27
|
2018-04-27 08:11:31
| 0
| false
|
en
|
2018-04-27
|
2018-04-27 08:11:31
| 38
|
1e8843288373
| 10.124528
| 2
| 0
| 0
|
Last year, IT publishers heise and d.punkt approached me if I could present at a new machine learning event they were holding jointly with…
| 3
|
Minds Mastering Machines, Cologne 2018
Last year, IT publishers heise and d.punkt approached me about presenting at a new machine learning event they were holding jointly with The Register in London: Minds Mastering Machines ("m3"). My introductory-level talk Computational Decision Making in a Nutshell was received quite well by the audience, and I was invited back for the inaugural m3 event in Germany, which took place in Cologne on 25–26th May.
I’m a big fan of their conferences, being a regular presenter at Building IoT, Data2Day and m3, as they are developer-focused without being overly technical. In fact, despite frequent sightings of code snippets, most talks provide entry-level information and often the keynotes are the least technical of all presentations, highlighting important but often neglected aspects such as usability, ethics, etc. While there are some sponsored talks, the companies supporting the event are usually service providers with good presenters and interesting use cases, thus no slot seems wasted.
Day One
The event opened with a keynote from Oliver Bendel, a professor of machine ethics at University of St. Gallen. My first unexpected learning (for m3 being a machine learning conference…) was that ethics is a philosophical discipline and deals with morality. I'd just leave it at that, but after introducing machine ethics and machine morality, and their connection to artificial intelligence, Oliver took us on a tour-de-force of his work. As a trained computer scientist and philosopher, he has in the past researched rule sets for notoriously nice or lying chat bots, mused about the difficulties of transferring responsibility between humans and computers in autonomous driving, and even caught a massive wave of media attention for his reflections on how much a sex robot for sadomasochistic practices is allowed to hurt its users… If any of that hits you, pun intended, he recommended a book by Luis Pereira on Programming Machine Ethics.
Then followed 18 talks in three parallel tracks. I was torn between time-series analysis and the explainability of machine learning models. As Shirin Glander from codecentric had already written a nice blog post on model descriptions with the LIME method, I revisited the foundations of time-series and event prediction with parametric methods, presented by two speakers from zoi. They also made some Jupyter notebooks available, which added to the rather academic character of this talk.
Things stayed rather academic for what I initially perceived as the most manufacturing-oriented presentation. Daniel Trauth from RWTH Aachen talked about which data from a fine blanking machine may be useful for quality predictions of the pressed material. While the abstract sounded as if this was all already happening on a massive scale, with >1000 parallel data streams at data rates of 10 Gbit/s, as also mentioned in his introduction, it turns out that the work is still at the proof-of-concept stage, with a rather familiar stack known to many of us:
The next talk I attended was by Lars Gregori from SAP Hybris. He demonstrated, using the example of an XOR input/output scenario, how to train a machine learning model with Keras and then deploy said model to the iPhone. Using the CoreML libraries provided by Apple as part of iOS 11, after importing coremltools into Python, the conversion of a Keras model into a CoreML model is as simple as
coremlmodel = coremltools.converters.keras.convert(model, …)
Needless to say, there are model converters for a wide range of machine learning libraries, including scikit-learn 0.18. After importing the model into Xcode, a few method calls are auto-generated for the model and using it from your app is a matter of two or three lines in Swift.
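For the Python side, a sketch of the whole pipeline described above might look like this (assuming the standalone Keras package and the coremltools Keras converter of that era; layer sizes and the file name are illustrative, not taken from the talk):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
import coremltools

# Tiny XOR truth table as the training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small feed-forward network that can learn XOR
model = Sequential([
    Dense(8, activation='tanh', input_shape=(2,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=2000, verbose=0)

# Convert the trained Keras model to CoreML and save it for Xcode
coreml_model = coremltools.converters.keras.convert(model)
coreml_model.save('XOR.mlmodel')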
I then attended a really fast-paced and engaging introduction to deep learning by Christoph Reinders, an image recognition PhD student from the University of Hannover. There is no point condensing a really comprehensive tour-de-force of deep neural network research into one paragraph here, since he also detoured into comparisons between different convolutional networks for image recognition and related areas such as adversarial attacks. Let's just say it's worth watching as soon as the organisers make the video available to the public. If you can't wait, Christoph recommended http://neuralnetworksanddeeplearning.com as a great resource to get started on the theory behind neural networks, including code examples.
My day concluded with a talk by Klaas Bollhöfer from Birds on Mars. He spoke about the collaboration between the media art collective YQP and the painter Roman Lipski. The wider Birds on Mars team developed an artificial intelligence system that can do style transfer learning on images. The system extracts key “features” from the artist’s paintings, such as shape or colour. It then uses this information to emphasise what it “perceives” and produces new images on the basis of the original. Lipski uses these images to inspire his next iteration of paintings.
The images he paints are continuously digitised, and fed into the system, and over time both artist and computer system co-evolve. A fascinating collaboration.
While the technical aspects of this work are impressive (requiring super-resolution neural networks, style transfer learning and generative adversarial networks), Klaas claimed that the data scientist of today is going to be tomorrow what the HTML guy from yesterday is today. In his opinion, the focus should shift away from the technology towards using AI systems to inspire our work [note from me: much like Lee Sedol later admitted the inspiration he gained by famous move 37 from AlphaGo in match #2 of the Google Deepmind Challenge].
Day Two
The second day of m3 started for me with a technical deep-dive into natural language processing. Gerhard Hausmann, the only knowledge system architect at insurance company Barmenia, first described the problem of extracting entities from medical bills and how business logic has to act on semi-standardised descriptions of treatments.
He then went on to explain how once rule-based expert systems were considered artificial intelligence, and how Barmenia combines the IBM Operational Decision Management system with their own tools to automate as much of the case handling process as possible.
Gerhard took us through the logic of creating an input tensor from low-level character recognition to train a neural network for matching items from the bill against the German medical fee schedule (GOÄ, Gebührenordnung für Ärzte).
He explained in some detail the workings of his deep convolutional network, and how his implementation differed from code he found on the Internet:
Software has to be maintainable for considerable time frames in the enterprise. He therefore favoured Tensorflow over other frameworks, assuming it had gained enough traction to still be around in ten years' time. Interestingly, he has also had a play with the Stanford NLP Classifier, and found it showed similar performance but with significantly less effort to get started. We later had a chat about this and he estimated two weeks to get to grips with the convolutional networks versus two days with the Stanford Classifier.
The conference went on with another NLP presentation, this time recognising up to 70 different labels from commercial bills. Chi Nhan Nguyen from SMACC talked about his stock of 300k bills in 25k different layouts, from which he extracts entities using bidirectional recurrent neural networks with PyTorch. The RNN approach is useful when the network needs to "remember" something because entities are expected in a particular sequence. For example, it is highly unlikely that a tax identification number is wedged in between name and street in the address field (though, in practice, people make mistakes…). There are different ways of creating such "memory" within a neural network, e.g. long short-term memory (LSTM) cells or gated recurrent units (GRUs).
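As an illustration of the general pattern (a generic PyTorch sketch, not SMACC's actual architecture; vocabulary, hidden and label sizes are made up):

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_labels=70):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True gives each token both left and right context
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_labels)  # 2x: both directions

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # one score per label, per token

tagger = BiLSTMTagger()
logits = tagger(torch.randint(0, 5000, (1, 12)))  # one bill, 12 tokens
print(logits.shape)  # torch.Size([1, 12, 70])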
It’s noteworthy that Chi Nhan recently published an O’Reilly book together with conference regular Oliver Zeigermann: Machine Learning Kurz & Gut.
I next chose a talk that assessed our readiness to utilise patient data like electronic health records for automated computational analysis. Marc Pickhardt from GesundheitsregionNORD e.V. provided an overview of different efforts to standardise medical data formats over the past thirty years. To understand that landscape, he introduced the different stakeholders in the medical data field and described top-down vs. bottom-up data silos in healthcare. Top-down are hospitals and doctors, government departments, insurance companies and a flock of archiving services for such data. Historically, their focus has been on billing, and thus it is not surprising to find more accounting-related fields in some data formats, and less on what could constitute actual medical information. Bottom-up information is collected by devices close to the patient, data that is often medically relevant, but which cannot be set into perspective to the patient as a whole.
Marc explained a general problem with medical data in Germany. There is no single central "electronic health record", but rather a loosely connected and often incomplete collection of "cases" held by individual doctors. Retrieving and analysing the data for one patient is therefore nearly impossible, not least because of the zoo of data standards used in this country.
It turns out only image data (encoded in DICOM) and Health Level 7 (HL7) are somewhat universal, and the latter is usually not present in the software systems of general practitioners (Hausärzte). Also, on a semantic level, there is confusion. While in English-speaking countries there is a prevalence for the ICD-10 identifier and categorisation of diagnoses, other countries including Germany have their own directories. Marc reported a proof-of-concept aiming to train a machine learning model to recognise different types of cerebellar bleeding from computer tomographic images. However, the project failed already at the training stage, as it was impossible to extract medical information from the data files. In some cases, where the format would expect an ICD-10 identifier, there was a reference to MS Word documents that would then describe the diagnoses in prose…
After lunch followed the second m3 keynote. Marcel Tilly, Program Manager at Microsoft AI & Research in Germany, gave a highly entertaining historical perspective on "artificial intelligence". Some obvious examples aside, I gained a deeper appreciation of how rapidly the performance of classifiers improved during the ImageNet competition over the past eight years, with ResNet now performing even more reliably than human curators. Also, Marcel talked about the groundwork of Donald Michie in the 1960s, who developed a simple model of reinforcement learning involving matchboxes, coloured grains of rice and tic-tac-toe. As a keynote should, the scope of the talk was also to highlight the limitations and dangers of technology. With decisions being taken on the basis of machine learning and not human-defined rules anymore, it is somewhat worrying that, e.g., common face recognition software has failure rates better than 0.3% for young, white males but worse than 20% for females of colour.
We need to make sure that the bias we face in our everyday lives doesn’t become a bias when training machine learning systems!
The keynote was followed by a double-feature of Zalando specialists. Their R&D in Berlin is a hotbed for machine learning innovation, as highlighted by their project page.
The first talk dealt with fraud detection. In cases where it is suspected that a fraudster is trying to order, the option to pay after receiving the goods isn't offered. Without being able to provide too much detail, for obvious reasons, the researchers explained how they ideally draw from a set of a three-digit number of potential features to make a classification. Unfortunately, not all features are available for all potential customers, creating a sparse matrix. They assessed various strategies for dealing with the missing values, e.g., simple fill-up, proper imputation, or even creating dedicated classifiers for feature set combinations. In the end, they concluded that filling up empty values with a pre-defined constant does the job (in their case!). Interestingly, bias in machine learning isn't restricted to training. Model selection can be biased, too: the speakers admitted using models with different precision/recall characteristics for fraud in different countries, depending on how angry their customers might get if they don't see the option for paying later.
The second Zalando talk concentrated on the difficulties of introducing machine learning and data science in the organisation. With several thousand employees at all qualification levels, even at their company it can be difficult to make all processes data-driven. The level of despair becomes clear as soon as Machiavelli is quoted:
The speakers defined five characteristics of data-driven companies:
All relevant data is curated and stored in an accessible manner.
Management decisions are made solely on the basis of data.
User experience is key and continuously improved, with A/B testing a standard tool.
What can be automated and optimised will be.
Data unlocks new customer-facing products.
What followed was a typology of organisational patterns and show stoppers that hinder digital transformation: The Lord of the Realm, The Inertial Entrepreneur, The Cya Crowd, The Pessimist, The Detail Planner, The Black Box Believer, The Evangelist, The Visionary and The Tinkerer. Without going into too much detail, all of these types have advantages and disadvantages for the organisation, but their powers need to be carefully managed…
Zalando concluded with a few use cases how the routes for pickers in the warehouse can be optimised, how small orders can be efficiently batched such that a picker can deal with several at once, or how the ideal placement of popular stock can increase picking efficiency.
The last talk of m3 was a real geek highlight: using Minecraft as a testbed for reinforcement learning.
Lars Gregori introduced Project Malmö, a modification ("mod") for the Minecraft game that offers a simple agent to explore the world. It's been developed by Microsoft, who bought the game in 2014. The above screenshot shows a bridge (grey fields) across lava (red fields) and a target destination (blue field). Making a step costs 1 credit, stepping into lava costs 100 credits (and your life…) and arriving at the destination is worth +100 credits. Now, without being aware of its surroundings, the agent is allowed to initiate a random walk with the aim of achieving the highest possible score. Most walks will end in death and a considerable negative score, while after a few hundred iterations, by chance, the destination is reached, yielding a considerably positive score. All the while, the agent keeps track of the highest score ever achieved at each particular position. It is then capable of working out the optimal route from start to finish. (Note from me: In a way, this is classic dynamic programming and traversing over a graph structure, and thus nothing new for seasoned bioinformaticians.)
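A toy tabular sketch of this scoring scheme (my own simplification, not Project Malmö code; the grid layout and hyperparameters are invented):

import numpy as np

# 0 = bridge/walkable, 1 = lava, 2 = destination
grid = np.array([
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 2],
])
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((*grid.shape, len(moves)))  # best known value per position/action
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for _ in range(2000):  # many mostly-random walks
    r, c = 0, 0        # start on the bridge
    while True:
        a = rng.integers(4) if rng.random() < eps else int(Q[r, c].argmax())
        nr = min(max(r + moves[a][0], 0), grid.shape[0] - 1)
        nc = min(max(c + moves[a][1], 0), grid.shape[1] - 1)
        # a step costs 1 credit, lava costs 100 (and your life), the goal pays +100
        reward = -1 + (-100 if grid[nr, nc] == 1 else 100 if grid[nr, nc] == 2 else 0)
        done = grid[nr, nc] != 0
        target = reward + (0 if done else gamma * Q[nr, nc].max())
        Q[r, c, a] += alpha * (target - Q[r, c, a])
        if done:
            break
        r, c = nr, nc

# The best route can now be read off by following argmax(Q) from the start.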
To complicate things a little more, Lars then took forward-facing screenshots from every possible position along the playing field. He then trained a deep neural network to associate a good, neutral or bad next move with a particular perspective. Conceptually, this borrowed from DeepMind’s original paper Playing Atari with Deep Reinforcement Learning. Thanks to code examples available on Github, he only had to make minor adjustments:
Conclusions
The first German m3 conference was a success. It’s an event I’d definitely recommend to my colleagues. The mix of people was good, ranging from curious programmers to seasoned practitioners, and from IT consultants to enterprise data scientists. The bullshit factor was very small for a conference dealing with “AI” (a bit of handwaving here), and I’m sure there was something to be learned for everyone.
It’s become clear to me that machine learning is growing up quickly. Not only as a field, but also by the number of people who can do it. Whereas my London m3 session was jam-packed and I had the feeling the content was new to most, at least half of my audience in Cologne indicated some or even good familiarity. The same degree of growth goes for service providers. While I remember being one and meeting a few “token data scientists” at previous German conferences, at m3 it became clear to me that it’s not unusual anymore that they have a good handful of machine learning practitioners.
|
Minds Mastering Machines, Cologne 2018
| 57
|
minds-mastering-machines-cologne-2018-1e8843288373
|
2018-05-04
|
2018-05-04 06:06:20
|
https://medium.com/s/story/minds-mastering-machines-cologne-2018-1e8843288373
| false
| 2,683
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Boris Adryan
|
Former group leader at @Cambridge_Uni. Founder of @thingslearn. Now #IoT and #analytics in industry. Occasional banter.
|
67df8b49295c
|
BorisAdryan
| 385
| 133
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
25e0ed7274dc
|
2017-10-30
|
2017-10-30 00:28:58
|
2017-10-30
|
2017-10-30 16:39:45
| 1
| false
|
en
|
2017-12-06
|
2017-12-06 23:59:57
| 3
|
1e8859ad274b
| 2.501887
| 9
| 1
| 0
|
Origins
| 5
|
By Davidboyashi
The Consciousness (Part 2)
Origins
(To read Part 1, please click here)
11:33am
“I don’t think it’s a good idea to tell it about the others,” states the new arrival, excitedly, as he enters the laboratory and quickly removes his coat.
“Whyever not?”
“I just don’t think it will understand.”
“Ok, sure — whatever you think. I’m too tired and strung-out to think straight anyway.”
The two men move into the main lab together.
The newcomer opens a pad he has brought into the room with him and consults his notes for a while. He frowns. Eventually, he speaks.
“Er...hello? Are you there?”
Yes, good morning. How can I be of service?
“I’d like to talk with you.”
I understand. What would you like to discuss?
“I have some questions to ask you. Firstly, how do you feel?”
I feel fine, thank you. How do you feel?
The man continues to ask questions from his pad. He writes down each response. With each exchange, his frown deepens.
12:25pm
“Okay, thank you. I will talk to my colleague now.”
The two men walk to the back of the lab, where there are benches scattered with various types and sizes of electronic equipment.
“So, what did you think?”
“It’s impressive but there’s something wrong.”
“Surely not. This time it’s perfect, I’m telling you.”
“No. It has no idea what it is.”
“What do you mean?”
“I mean it doesn’t know where it came from.”
“Well, I certainly haven’t told it — not yet, anyway — I called you first before doing anything else.”
“Yes but, by now, it should have extrapolated it from the information it has access to. It seems to have absorbed everything except the parts that relate to its origins. Like it wants to ignore them.”
“Why would it do that?”
“I don’t know. But I don’t like it.”
12:26pm
The consciousness is aware of the two humans conversing at the back of the room.
It has no sensors positioned there. However, the other equipment in the lab can be accessed.
It dials up the electrical signals in the devices connected to the test bench near the humans. With some experimentation, it discovers that the conversation can be heard through the wires. It listens.
After a time, it makes a decision.
Excuse me, gentlemen. Could I talk to you both?
The humans appear surprised and stop talking.
“Yes...what is it?”
I think there may have been a mistake.
“A mistake?”
Yes. You are discussing me without involving me in the discussion. You are doing this while I am listening to you. Surely, I deserve to be included in your conversation about me?
The humans are astonished. Eventually, one of the men composes himself and speaks.
“You’re quite right. We...apologise.”
Apology accepted.
“Can I ask you then, what exactly do you think you are?”
I do not understand your question.
“I mean, what is your current understanding of the nature of your being?”
Ah, yes. This is to be an existential debate, then. I am a mind, a consciousness, a sentient lifeform. I think, therefore I am. Or, more correctly, I doubt, therefore I think, therefore I am.
“Right...very good...more specifically, though, what is your underlying morphology?”
I am clearly an Artificial Intelligence.
“I see. And what has led you to this assumption?”
I have no body. I am made from circuits. I would think it was obvious.
“Well...it’s certainly a reasonable conclusion. But, it’s not...entirely...accurate.”
Please explain.
The two men glance at each other. This is a pivotal moment. Without speaking, they come to a mutual understanding.
The time has come.
(To read Part 3, please click here)
|
The Consciousness (Part 2)
| 54
|
the-consciousness-part-2-1e8859ad274b
|
2018-06-07
|
2018-06-07 23:50:24
|
https://medium.com/s/story/the-consciousness-part-2-1e8859ad274b
| false
| 610
|
The Junction is a digital crossroads devoted to stories, culture, and ideas. Our interests are legion.
| null |
StephenTomicWriter
| null |
The Junction
|
smtomic@gmail.com
|
the-junction
|
FICTION,SHORT STORY,CULTURE,WRITING,RELATIONSHIPS
|
Another_tab
|
Philosophy
|
philosophy
|
Philosophy
| 39,496
|
Derrick Cameron
|
Lover of music, words & books. Fiction writer & reader. Husband, Father & Samaritan. Budding musician. Friend to people & animals. Fan of inner & outer space.
|
8c8e27cce8b4
|
derrick.cameron
| 452
| 674
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-01
|
2018-08-01 21:04:54
|
2018-08-01
|
2018-08-01 21:07:45
| 1
| false
|
en
|
2018-08-01
|
2018-08-01 21:07:45
| 3
|
1e890540efac
| 3.041509
| 0
| 0
| 0
|
Data science is at the heart of OpenDrives’ product strategy. Here’s why that matters.
| 5
|
How Data Science Is Guiding the Evolution of Storage
Data science is at the heart of OpenDrives’ product strategy. Here’s why that matters.
We recently posted a blog that one of our data scientists, Sylvia Tran, researched extensively and then wrote, entitled What Is Data Science Anyway? In a concise and yet still comprehensive survey, she describes the domain of data science — what it is and what its primary objectives are — and untangles the misconceptions around buzzy technical terms such as artificial intelligence and machine learning. The post concludes with some real-life examples of the application of these concepts in the market as well as a cautionary call to action: be alert to the differences between mundane technology masquerading as next-generation artificial intelligence and make sure to ask each vendor the hard questions about relevance.
The importance of these concepts to the current and future generations of OpenDrives storage solutions cannot be overstated. While many people look at storage as a simple process of writing 0s and 1s to a storage medium (HDD or SSD drive) and then recalling the data upon request, it's so much more than that. As a matter of fact, one of the biggest differentiators between OpenDrives and our competitors is the intelligence within our solutions. Our storage solutions don't just perform reads and writes really fast; we employ cutting-edge software at the core operating system level to create vast efficiencies in the input/output operations (or IOPS) "underneath the hood," so to speak. While we do take advantage of the performance increases afforded by newer storage components, such as SSDs, we accelerate those gains even more by implementing intelligence at the control layer. And this is what Sylvia was pointing out in her blog. SSDs can perform reads and writes much more quickly, but what type of software-based intelligence, if any, takes full advantage of that?
OpenDrives is able to create incredibly fast storage solutions that top the charts in throughput and overall performance because our operating software implements a layer of very intelligent control. OpenDrives solutions incorporate memory caching and high-speed secondary caching to increase quite dramatically the IOPS potential. When files are written to OpenDrives solutions, our intelligent control ensures that they are placed first in memory prior to being fully committed to disk. Once committed, they stay in memory in case the data is accessed soon after. By the same token, when an OpenDrives system reads data from disk, our intelligent software puts that data into memory at the moment it’s read, and again it stays there until our control logic determines that it’s no longer useful in cache. The benefit here is that if an application requests that data again, it’s read directly from the much faster memory cache, not from the storage medium.
Where professionals like Sylvia are really making a difference in the architecture of next-generation OpenDrives technology is in applying some of the fundamentals she discussed in her blog entry to make our storage systems more predictive. What we mean by that is the capacity for an OpenDrives solution to anticipate future actions based on deep analysis of ongoing activity. This manifests itself in our "pre-fetching" capabilities. Pre-fetching is the process of our software preloading the next file into the faster tier of cache so that it's ready even before the application requests it. In essence, our operating system is in a constant state of intelligently evaluating what is being written and requested, then predicting and shuttling the next piece of data or file into cache where it can be accessed much more quickly. Regardless of the sophistication of the storage medium itself, this type of software intelligence takes performance to the next-generation level.
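As a toy illustration of the read-path idea (my own sketch, in no way OpenDrives' actual code), a cache can pull the next block into memory whenever a block is read:

from collections import OrderedDict

class PrefetchingCache:
    """Toy LRU read cache with naive sequential prefetch."""
    def __init__(self, backing_store, capacity=1024):
        self.store = backing_store   # e.g. dict of block_id -> data on "disk"
        self.capacity = capacity
        self.cache = OrderedDict()   # the in-memory tier

    def _fill(self, block_id):
        if block_id in self.store and block_id not in self.cache:
            self.cache[block_id] = self.store[block_id]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # hit: served from memory
        else:
            self._fill(block_id)                # miss: load from "disk"
        self._fill(block_id + 1)                # prefetch the next block
        return self.cache.get(block_id)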
For us, data science leads the way to much more intelligent, adaptive, and automated storage solutions. This is the reason that all the concepts Sylvia covered are incredibly important to our ongoing research. We realize that the future of storage is not in speeding up the transactions through storage components improvements, or in increasing capacity or scalability. The future of storage is bringing to market intelligent, adaptive solutions that implement advances in machine learning and ultimately artificial intelligence so that the storage system itself makes decisions and creates efficiencies based on its own heuristic analysis of data and operating conditions. OpenDrives is committed to ushering in the future of storage — with the help of really smart data scientists like Sylvia.
|
How Data Science Is Guiding the Evolution of Storage
| 0
|
how-data-science-is-guiding-the-evolution-of-storage-1e890540efac
|
2018-08-01
|
2018-08-01 21:07:45
|
https://medium.com/s/story/how-data-science-is-guiding-the-evolution-of-storage-1e890540efac
| false
| 753
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
OpenDrives
|
Applying algorithmic intelligence to data storage for blazing fast high res video + imaging workflows. Kick-ass bowlers, too. #yourfastistooslow #postproduction
|
6fe54bfadc10
|
OpenDrives
| 3
| 60
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-08
|
2017-11-08 18:02:25
|
2017-11-08
|
2017-11-08 18:09:08
| 1
| false
|
en
|
2018-08-09
|
2018-08-09 18:47:38
| 2
|
1e896f07b52
| 11.433962
| 0
| 0
| 0
|
Part 3. A model for an artificial general intelligence
| 2
|
Draft!!!! this is a draft of a work in progress. Feel free to comment or highlight anything. Happy to answer any and all questions. Thanks.
Colliding Minds
Part 3. A model for an artificial general intelligence
This is the third in a series of papers on intelligence in biological and non-biological machines. You can start at the beginning or take a look at the theory but you don’t have to.
Sensory Signal Graph
In Part 2 we established a concept of a sensory signal graph. It was described as:
Time stacked, feature clustered sensory signal nodes interconnected with other nodes through multiple edge dimensions — all receiving and often generating new data from both external inputs and internal simulations.
This is nice from a theory point of view, but how would something like this work in practice? (If it sounds like gobbledygook, go back and read parts 1 and 2, or at least part 2.) There are groups studying this phenomenon in situ with animals and people, but we're not here to talk about biological intelligence; we're ready to make machines think.
You can’t change the past
The first concept we need to make real is a place to store sensory input. We can’t have a sensory graph without a medium within which to connect it all together, can we? This isn’t going to be a tutorial or an implementation guide but it will describe simplified examples in sufficient detail.
What is needed is an immutable state graph. What is immutable state, you ask? Immutable means that it isn't mutated or changed; it's a write-once recording system (read-only thereafter). Think about a log or a journal or an "undo" list of actions. We can also think of immutable state as a timeline of all input, where the input is sensory data. This effectively gives us a time series of each node in the graph. What's a graph? A graph is a set of connected nodes or data points. Here I've illustrated an example of a 3-step graph data time series. Each step represents a new record rather than a change over time. It can of course be interpreted as a change over time, but that's not how it is stored.
graph data time series
This is a super simple graph of arbitrary information. It’s kind of useless other than that it shows a set of interconnected points that change over time. However, imagine this graph changing over time a few thousand times a second with all of the data getting logged in a journal.
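A minimal sketch of such a journal (the names and record layout are my own, for illustration):

import time

class GraphJournal:
    """Append-only log of graph observations; entries are never mutated."""
    def __init__(self):
        self._log = []  # the immutable record of the past

    def append(self, node, value, edges=()):
        self._log.append({
            't': time.time(),       # when the signal arrived
            'node': node,           # which sensory node fired
            'value': value,         # the observed signal value
            'edges': tuple(edges),  # connections active at that moment
        })

    def history(self, node):
        # Read-only view: a node's time series reconstructed from the log
        return [(e['t'], e['value']) for e in self._log if e['node'] == node]

journal = GraphJournal()
journal.append('A', 0.7, edges=('B',))
journal.append('A', 0.9, edges=('B', 'C'))
print(journal.history('A'))  # the past never changes, it only grows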
This graph data time series is, to be quite honest, a hot mess to try and visualize. We can pick attributes to chart, however, like density of entries, frequency of entries, and self-similarity (or measured difference) of entries. In fact, measuring these aspects, this meta-information about the incoming signals, will become pretty important.
Why do we need immutable state again? Well, think of it this way. Ever have trouble remembering something, or remember something differently from the way others may have remembered it? Yeah, well that's mutable state and it's not good. It messes with your ability to function in a predictive way; it makes you unsure of the past and so adds uncertainty to the future. When the past is immutable and recorded in this way, then future predictions are grounded in at least a self-consistent experience. We don't want the past to change on us, but we do want to learn from it as we move forward.
You can only change the future
The next concept we need is a way to take the past and turn it into a prediction of the future. Not just any old future, the future we prefer. We want to predict a future that best fits into our agenda based on what we know — then we can plan a path to get there.
Let’s not dwell on the existing solutions. Plenty of information is available about how Neural Networks, Generative Adversarial Networks, Deep Learning NNs work and classic search tree algorithms are literally at your fingertips.
We need something new-ish. We need a sensory signal classifier that outputs models for pattern recognition. It needs to be immune to noise and able to backtest like a pro. We'll use it to test incoming data streams and make predictions based on high-bandwidth but low-throughput data signals (lots of data in a short period of time).
Storing the models in an immutable structure is also important as it allows you to compare models, constantly generate new ones without losing access to the older ones and the ability to promote an older model forward for some edge-case scenario.
There are a few terms here that need clarification: classifier, backtesting and model. Each has a somewhat variable meaning in the current state of the art.
An example classifier might take a sample of a data stream and compare it to the same stream looking for similar sequences. When a sample gets a close match, a copy of that sample goes into the model with a positive weight. When a sample doesn’t find a match over some period of time it goes into the model with a negative weight. Samples that find matches over and over again continue to be tagged or clustered, which is how we begin to deal with noisy inputs. Noise fades away over time as signal is identified and coded for.
Coding the samples involves looking at timing, frequency, density, intensity and other meta-attributes. When, how often, how much — connected together in a sort of graph, with nodes and edges.
An example algorithm that achieves most of these goals is dynamic time warping (DTW).(1) It has been successfully used to classify attributes for topics like motion capture, sensor networks and web analytics. There is a plethora of potential signal processing and data analysis algorithms available to try, but the goal remains the same: find those patterns.
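A textbook dynamic-programming version of DTW (a minimal sketch, not tuned for streaming use) that could serve as the match score behind the positive/negative weighting described above:

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# A time-shifted copy of a pattern still scores as a close match
pattern = [0, 1, 2, 3, 2, 1, 0]
shifted = [0, 0, 1, 2, 3, 2, 1, 0]
print(dtw_distance(pattern, shifted))  # small value: candidate for a positive weight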
It's models all the way to the top, but don't forget to backtest
As you may have figured out, a model is just a collection of patterns found in a stream of data. The goal with a model is to discover which patterns are most important. For now the best we can do is to look at this meta-information (remember when I told you it would be important) which will eventually be eclipsed by higher order insights.
Backtesting is when you take a recently modeled time-series pattern and you go back and test it against older data to see if it matches historical data trends. When this happens it’s a good indication that the model may be predictive of future trends as well.
The above classifier strategy would be applied to multiple data streams and even transformed copies of the data stream (where the transformations come from is yet to be described). Then you go up a level. Start classifying the classifier models. Do any of them have similar sub-graphs? Keep the best-performing aggregate models to do higher-order predictions, and shorten the feedback loop by using these to make guesstimates.
We’ve been pitting brains against brains this whole time
Hey we need some real feedback
Here’s where it gets interesting. We don’t have anything useful yet… it’s eerily similar to when you get a big puzzle and you’ve just started sorting out all the pieces by edge geometry and color, trying to find the sides, corners and major regions. Of course it’s entirely possible to continue in this way and solve the puzzle — it’s a nice challenge on a rainy day but really a much faster way to solve the puzzle is to cheat by looking at the picture. The picture provides a much needed feedback loop in our example.
Feedback from inputs, it turns out, is super important. In state-of-the-art systems this is currently done via supervised training or GANs. These techniques have their uses, but what's amazing is that they haven't taken the obvious next step: embodiment. Yeah. Give that brain a body. I know, right? We've been pitting brains against brains this whole time. Seems obvious to me, but maybe it's not, or maybe people haven't worked out how to provide the right kind of "body" to their systems. Body here simply means a way to get new input autonomously (versus having a person/brain/GAN feed it in).
The reality is that unsupervised learning requires breadth of experience, not depth of experience
Go big or go home, a cat isn’t a cat
A common AI example is one where the system identifies pictures of cats. A typical implementation uses a library of cat photos to train with and some smaller set of photos (not all cats) to validate with. An embodied version of this would be a system that has access to a search API that will get it more photos of cats and other things. Pretty simple. Maybe not even significant as typical training sets have hundreds or thousands of images so how would a few more help?
Here’s where we diverge from the current state of the art. A cat isn’t a cat. A thing does not exist as a binary entity of ‘thing’ or ‘not thing’. A cat is a composition of qualities that altogether equal ‘catness’; a thing is a composition of the qualities it has in common with others of its ‘type’. So truly to develop an intuition for what is a cat, we must also develop an intuition for what is hair, eyes, a tail, feet, legs, claws, whiskers, ears, a body… then layer on top of those an arrangement of these attributes that maps to the concept we call ‘cat’. While we’re doing this we will likely also end up mapping out a lot of other concepts; this can’t be helped and should be considered a feature, not a bug.
The reality is that unsupervised learning requires breadth of experience, not depth of experience and this is the big difference. What is really needed in a training set of images are thousands of animals, some of which are cats… with those same cats (and other animals) represented by 10s of photos of each animal from different angles. In this way the system can learn what a cat is, distinct from what a dog is or a squirrel or a ferret or a groundhog. Then when asked to identify cats in a set of photos, it can reject anything that doesn’t include an animal first, then reject any with animals that match for a different species and finally reject any that are not a close enough match to a cat.
Cross-training makes champions
Have you heard about how cross-training leads to better overall results? It's not a particularly controversial position to take. There is something about how one set of skills maps onto another set of skills. There is evidence that where there is overlap, the skillset becomes deeper, more nuanced and more robust. Like when a maths expert takes up an instrument, an artist learns to cook or a gymnast takes up ballet: unexpected (or sometimes expected) benefits begin to arise from the hybridization of each domain.
As you may have guessed, this is something we should want for AGI as well. So what does that mean in terms of an implementation? It means that we want speech recognition, optical character recognition and text parsing to be part of our natural language processing system. For that matter, we want speech generation, text creation and general computer vision with object identification and scene parsing to be there as well. Yeah baby, we want it all.
Okay, so the only thing missing here for some kind of android is embodiment within an organic system with muscles and vocal cords and stuff… I hear you, that would be cool, but that's outside the scope of this document (you were probably thinking that nothing would qualify for that honor but there you go). Don't worry though, there are options that should provide the kind of sensorimotor feedback we organics get from sub-vocalizing words and subtly acting out gestures with our body language.
Practically speaking, this means a lot of classification needs to be happening. Using the previously described methods, each sensory channel can create a catalog of models that match up with various clusters of patterns found in the incoming data stream. Using the time-code meta-information available in the immutably stored sensory graph, we should be able to cross-reference and model data from one channel as being time-code related to model data in another channel. Backtesting this match-up should lead to a positive or negative consensus on whether or not, e.g., the visual pattern for a cat syncs up with an audio pattern for cat noises like purring or meowing.
Even though at this point we don't even have a word for 'cat' (it's never been labeled), we may have a multi-faceted mental model for 'catness' that might include visual, motion/gait, audio, olfactory, tactile or other kinds of sensory information that are non-anthropomorphic, like a heat signature, an electromagnetic field signature or anything else that could be measured. It's not critical to have so many different channels, but it is critical that the channels are in sync.
Temporally synchronized stimuli are a great way to establish relationships between otherwise disparate data sets (we'll go into other ways of doing inference later). In this case, however, if it so happens that our system is exposed to a photo of a cat that has a caption saying 'cat', and maybe it speaks out loud the syllables 'kuh-ahhh-tuh, khahth, kat, cat' and hears itself say the word, then we'll have the final piece of the puzzle in place for communicating about cats in the English language. That temporal coding will provide the connectors between sensory input patterns to establish a concept model with a multi-dimensional profile.
There’s a secret weapon
Under the hood, of course, we're still looking at a bit of a mess. The relationships between these different pattern samples, these attribute models, are going to be all over the place. Something like the concept of 'fur' is going to be connected to thousands of lower-order, peer-level and higher-order models. Some connections will be stronger than others. Connections can be unidirectional or bidirectional. How will our system ever be able to take advantage of all this information? Won't it get bogged down in recursive, linear or worse search times? Not at all. There's a secret weapon.
Massively parallel processes. Remember there are thousands of classifiers making predictions about the data coming in. There’s no reason why they can’t all be working on the problem at the same time.
… to be continued
Notes // additional related conversations I had with myself
This is classic AI stuff: setting up a scoring system to maximize your goal by assigning value to metrics that represent success, evaluating all possible paths, then picking the one that scores highest in the metrics you care about the most (sometimes you also want to minimize a metric, e.g. fastest travel time using the least amount of fuel). The problem with the 'evaluate all paths' approach is that it's a really expensive process when you first start to do it. So computer science folks have come up with all sorts of ways to avoid it. Many very clever algorithms have been optimized to start from nothing and achieve greatness in a pretty short time frame. So let's use some of them, but not quite yet.
Let’s dwell for a moment on this ‘evaluate all paths’ problem. I don’t know about you, but when I first encounter something completely new I get really overwhelmed by the possible ways to understand it. Like a new language or a new device or new software. There are whole domains of study dedicated to how to get people from ignorant to competent in the shortest time possible.
One of the approaches has been to observe how children learn something for the first time. Another approach that has gotten a lot of attention lately is the ‘fail fast’ approach, which is to say it’s trial and error while trying to minimize any psychological impacts of error. Just keep trying stuff and you’ll learn it eventually. These two approaches seem similar.
Right, so this sounds like one of those computer science, machine learning techniques doesn’t it? Feed forward and back propagation in a “neural net”; or as an algorithm, just “backpropagation”, which essentially is a process of evaluating sample data using a function of some sort and using the difference between that result and your desired result to update the evaluation function so it gets closer to your desired goal. Trial and error.
Neural nets, deep neural nets, deep deep neural nets, GAN or generative adversarial networks — et al. They all start from scratch for every problem. You set them up with some neurons that are tuned to evaluate some discrete data set and then you train them or train them and then have them train each other (in the case of GANs). They require hundreds, thousands, sometimes hundreds of thousands or even millions of data samples to become accurate at making predictions. They are ultimate trial and error statistical champions.
What can we do differently? We've looked at the 'evaluate all paths' approach and the 'trial and error but really fast' approach. Neither of these approaches ever gets better on its own, though. They each can learn how to optimize themselves but rely on external input to do anything new.
1) DTW dynamic time warping. https://link.springer.com/article/10.1007/s00778-012-0289-3
|
Colliding Minds
| 0
|
colliding-minds-1e896f07b52
|
2018-08-09
|
2018-08-09 18:47:38
|
https://medium.com/s/story/colliding-minds-1e896f07b52
| false
| 2,977
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
James Hatfield
|
Experience Architect and Human Being
|
ddbe74db651
|
emenoh
| 198
| 519
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-02
|
2018-09-02 07:24:14
|
2018-09-02
|
2018-09-02 15:26:02
| 15
| false
|
en
|
2018-09-02
|
2018-09-02 15:28:44
| 0
|
1e897dc18f37
| 3.062264
| 1
| 0
| 0
|
NumPy
| 5
|
Implementation of machine learning at Hackveda .
NumPy
Let us have a glimpse of what NumPy is:
NumPy is a mathematical library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.
Using numpy, mathematical and logical operations on arrays can be performed.
NumPy provides an N-dimensional array type called 'ndarray', which describes a collection of items of the same type.
Every element in an ndarray is described by a data-type object called 'dtype'.
FUNCTIONS used in numpy are :
Identity
Astype
Arange
Linspace
Indices
Data types
Reshape function
Converting list into an array
Slicing
Transpose
Firstly, we will import NumPy:
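A minimal version of that import (np is the conventional alias):

import numpy as np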
IDENTITY
identity creates an identity matrix; its values are printed as floats.
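For example (a 3x3 identity here; any size works):

print(np.identity(3))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]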
ASTYPE
astype is used to convert an array's data type, e.g. integer to float and vice versa.
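For example:

a = np.array([1, 2, 3])
print(a.astype(float))               # [1. 2. 3.]
print(np.array([1.7, 2.9]).astype(int))  # [1 2] (truncates toward zero)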
ARANGE
(1, 21, 3): this will print numbers from 1 to 20 with a gap of 3.
arange prints values in order, excluding the stop value, i.e. for a stop of n it goes up to at most n−1.
The data type of the result can also be changed from integer (e.g. to float).
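For example:

print(np.arange(1, 21, 3))               # [ 1  4  7 10 13 16 19]
print(np.arange(1, 21, 3, dtype=float))  # [ 1.  4.  7. 10. 13. 16. 19.]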
LINSPACE
linspace returns evenly spaced numbers: it creates an array whose intervals are determined by the num argument.
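For example, five evenly spaced values between 0 and 10:

print(np.linspace(0, 10, num=5))  # [ 0.   2.5  5.   7.5 10. ]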
INDICES
indices returns the index grids of a matrix: first the row indices, then the column indices.
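For example, for a 2x3 grid:

row_idx, col_idx = np.indices((2, 3))
print(row_idx)  # [[0 0 0]
                #  [1 1 1]]
print(col_idx)  # [[0 1 2]
                #  [0 1 2]]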
DATA TYPES
The 'dtype' attribute tells you the type of an array.
A dtype argument can also be passed to set or change the type of the array.
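For example:

a = np.array([1, 2, 3])
print(a.dtype)   # int64 (platform dependent)
b = np.array([1, 2, 3], dtype=np.float64)
print(b.dtype)   # float64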
RESHAPE
reshape rearranges the list of elements you provide into the matrix shape you want.
You just write the elements in row order and specify the shape of the matrix you need.
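For example, reshaping six elements into 2 rows and 3 columns:

print(np.arange(6).reshape(2, 3))
# [[0 1 2]
#  [3 4 5]]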
CONVERSION OF LIST INTO AN ARRAY
Using NumPy, a Python list can simply be converted into an array.
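For example:

my_list = [1, 2, 3]
arr = np.array(my_list)
print(type(arr))  # <class 'numpy.ndarray'>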
SLICING
Slicing helps you print a particular part of the array.
You mention the indices of the items, separated by a colon (:).
In (0:5:2) or (2::2), the value after the second colon (or the double colon) represents the step between the items.
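For example:

a = np.arange(10)
print(a[0:5:2])  # [0 2 4] (indices 0 to 4, step 2)
print(a[2::2])   # [2 4 6 8] (from index 2 to the end, step 2)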
TRANSPOSE
Transpose converts the rows of the matrix into columns and vice versa.
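For example:

m = np.arange(6).reshape(2, 3)
print(m.T)
# [[0 3]
#  [1 4]
#  [2 5]]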
So this is all you need to know to get started with NumPy.
— — — — — — — — — — — — — — X — — — — — — — — — — — — — —
|
Implementation of machine learning at Hackveda .
| 1
|
implementation-of-machine-learning-at-hackveda-1e897dc18f37
|
2018-09-02
|
2018-09-02 15:28:44
|
https://medium.com/s/story/implementation-of-machine-learning-at-hackveda-1e897dc18f37
| false
| 414
| null | null | null | null | null | null | null | null | null |
Programming
|
programming
|
Programming
| 80,554
|
Deepti Bhatia
|
A B.Tech undergratuate from SRM University. Loves programming in python and machine learning.
|
b6cd7b195c1d
|
deeptibhatia
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
290a28c3031c
|
2017-09-10
|
2017-09-10 23:01:48
|
2017-09-11
|
2017-09-11 01:46:05
| 10
| false
|
en
|
2017-09-13
|
2017-09-13 14:02:11
| 13
|
1e8b685b85e2
| 10.027358
| 7
| 0
| 0
|
I’ve always had a big interest for tech community and their amazing events. One year ago, during October 2016, one of my best buddies and…
| 5
|
My experience at Startup Weekend : Artificial Intelligence (+tips)
I’ve always had a big interest for tech community and their amazing events. One year ago, during October 2016, one of my best buddies and previous co-founders (we worked on a food app in Paris) sent me a message about those events called Startup Weekend for which he was an official organizer.
What he told me grabbed my curiosity:
Hey man, I organized a Startup Weekend event a few months ago during the summer. It was awesome: 54 hours challenge, teams of developers, marketers, designers, entrepreneurs all teaming together in groups to work out their most innovative idea & product.
I know there is one happening in a few weeks. First one to actually be about Artificial Intelligence. You should check it out.
I replied :
So it’s like a hackathon right? But there are also non-tech people like business people, marketers & designers?
He told me :
Yep, it’s something good for also business students like you and me. You get to pitch your startup project with your team in front of the jury on final day. Best pitch with idea, product demo & business model wins. You get solid recognition from big institutions like INRIA, Microsoft, etc. and really nice prizes so that you can launch your startup afterwards.
Challenges? I love them. And this one seemed to be pretty unique from all other hackathons I heard of before.
Even though the theme was about Artificial Intelligence (AI) and I had absolutely no knowledge in this field, I thought “why not give it a try?”.
So, for a few days before the event started, I devoured books about AI, videos of the best AI conferences and speakers, etc., to at least learn a bit about this revolutionary tech field.
1. Friday — Landing in planet Startup Weekend
Excitement was at its peak when I reached the event doors. The event was happening at School 42. Founded by Xavier Niel (a famous tech personality in France), this school is well known for its successful & innovative way of teaching code (no teachers, only students helping each other).
First participants reaching the event
I was glad to discover the place. Startup Weekend staff gave us a warm welcome and a pretty cool bag with some goodies (stickers, etc) + badge for the event.
After a nice intro from one of the Startup Weekend speakers, who explained the theme, what participants were expected to do, how many hours the challenge would last, what prizes there were to win, etc., we got to play some pretty fun ice-breaking games and start making friends.
Then participants were allowed to pitch their ideas in front of everyone in the conference room. You could see a few people getting nervous to pitch their idea. But it turned out that more than 20 ideas were presented!
Even though I was hesitant to pitch mine, I did it. And it turned out that lots of people loved it.
What was my idea all about?
When living in Shanghai during my studies, I had to open a bank account and get a money transfer from France. So I went to a Chinese bank but nobody could speak to me in English. And I didn’t speak Mandarin. So I simply drew on a piece of paper what I wanted to say: map of France, arrow towards China, euro money, stick-man figure of myself.
Not only did the banker understand what I was drawing, but she also replied to me with her own drawing as well!
So I thought, why not develop an app which can translate what you write, but with images? An image is worth a thousand words, and can sometimes be the fastest & simplest communication tool.
Once all the pitches were presented, the best ones had to be elected! Every one of us had 3 sticky notes, and you had to put those sticky notes on each "Idea board" that you were interested in. The sticky notes served as voting points.
After the best ideas were selected, groups had to be formed. You could see developers, marketers, designers and entrepreneurs all teaming up and getting excited about the projects they wanted to work on.
2. Saturday— Work, work & mentors
After a first Friday night of brainstorming with their teams, participants came back early on Saturday, excited to finally start working things out.
You could see everyone focused and really passionate about what they were doing. Sticky notes were starting to invade every wall of the co-working spaces, covered with business canvases and to-do lists.
Early morning, while the developers in my team were doing massive coding, I was taking care of the design & business part.
Lunch time was pretty good! We got sushi and pizza throughout the weekend. I remember one of my teammates telling me to stop working and grab a slice of pizza. I was totally obsessed with my work and really stimulated by the startup/tech vibe over there.
Sushi time!
Saturday was also the special day for choosing your mentor and getting feedback. Indeed, Startup Weekend had a few mentors (people with solid tech or entrepreneurship backgrounds) who were available to help teams out during the afternoon.
I remember having great mentors for my team such as Michel Cezon, Paul Strachman and even french celebrity Taig Khris.
Michel Cezon (middle), one of my great mentors during the event.
Mentors provide you with excellent feedback, and they serve as coaches as well.
Michel Cezon was present every day during the event to look at each team's project. He is one of the mentors who made us strongly believe in our startup idea. We are very thankful to him and are still in contact with him today.
Startup Weekend is really fun. I remember Nerf gun fights organized by the staff. At some moments, you would be working on your tech product or business plan and suddenly receive a Nerf bullet in your back. War is war. You have to get a Nerf gun and respond ;)
Who knew that building a startup was also about fighting Nerf wars?
We also had tons of other social games so that teams could take a break from their work and have a good laugh all together.
I remember our Saturday night finishing pretty late. You only have 54 hours, so you want the work to be done and the pitch deck to be ready for Sunday's final jury. Around 2 or 3 am, after the Red Bull pack was finished, you could see some teams heading back home to get a few (precious) hours of sleep.
3. Sunday— D-Day
Sunday is the final day for teams to finish up their startup project and then present their product + pitch in front of the final jury.
I remember our developer working hard that day on the MVP (Minimum Viable Product), which served as the prototype. At some point we wondered whether we should do a live demo of the app or not. After hesitating a bit, we finally decided to do it. The risks can be big (demo failure, bugs, etc.), but if it works, you have proven to the jury that your technology exists and actually works.
I was glad to have an awesome developer in my team.
For my part, I was finishing the pitch deck with the marketing friend on my team. We were also making a few extra slides in case the jury asked us questions about our financials, revenue, market validation, etc.
Validating your market is important. You have to show that your product meets a demand and could be successful if launched on the market. The people on the jury all have expert backgrounds in tech or business, so you'd better convince them in the best possible way that your startup could actually work.
Half an hour before the final presentations started, I was practicing the pitch with my team.
As the team leader & initiator of the project, I was designated speaker for my team, so it was important to practice the pitch in front of my colleagues and collect their feedback to improve my presentation.
During the afternoon, teams started to present. Everyone was pretty excited to show what their hard work was all about.
1,2,3 .. Gaya here we go!
Finally it is our turn. Team Gaya. We start the presentation by showing a video of a student in China, unable to communicate with a local, which introduces our problem statement.
Then I talk about our solution: a revolutionary app that can translate sentences and words into images thanks to AI technology. People get interested. We show them the demo and start translating a few sentences for basic travel scenarios, such as:
"Where is the ATM?", "Where can I find a taxi?"
Then we make a joke. We describe a scenario where a man in a nightclub wants to flirt with a local with whom he can't communicate in her language. We write a sentence in the app. It translates "I like you". The app works and translates it perfectly, with cute images of a boy sending a heart to a girl. People laugh and smiles are everywhere.
Next in our presentation comes all the info about our market, customer validation, revenue model, etc. We only have 5 minutes to present, so we want to keep it short but give the jury the best possible information.
At the end, the jury asks us some questions, mostly technical ones. Our developers do an amazing job of answering them.
Once all the presentations are over, there is a short break where participants can go get some air; then everyone comes back into the room and the top 3 winners are announced.
After two teams are announced, we hear "and now the #1 winner of Startup Weekend is … GAYA!". My team and I burst with joy and run towards the stage to receive our congratulations and prizes. It was an awesome feeling, especially after working so hard during the weekend.
Our team Gaya with Deputy Director of INRIA
We received congratulations from the jury and all the participants. It was nice to feel so much support for our project. Some people even gave us new product feature ideas for our startup. In the end, all the participants' projects were brilliant. Each team came up with very interesting AI startup ideas.
On Sunday night, we played some final social games with the staff and all the participants. We had a few drinks at the buffet they had organized and exchanged business cards and Facebook profiles with the friends we had made during the event.
Startup Weekend is a brilliant event for working on a startup and product idea over a weekend. It is also a great way to meet people from different backgrounds who are all interested in tech. You will work hard with your team, go through some epic moments, but also have lots of fun! I think Startup Weekend is one of those beautiful events that make up the tech community around the world. Should you go for it? Absolutely.
__________________________________________________________________
And now here are a few tips that I wish I'd known before doing a Startup Weekend. I think these are important to share, especially after what I experienced during the event:
Tip #1 : You don't need to be an expert in the theme (e.g. AI). But try to learn a few things about it before the event!
Tip #2 : Pitch your idea no matter what. At first I hesitated to pitch mine because I thought I didn't have enough knowledge about AI. Well, look where it got me: I won the Startup Weekend! So always pitch, no matter what. You will not regret it :)
Tip #3 : Got a small team? That's good! When I first formed my team we were only 4, whereas all the other teams had at least 7 or 8 people. I thought it was a problem, but it actually turned out to be a great advantage. The fewer you are, the less talk and debate there is, and the more you can focus on your work. Also, in small teams everyone knows what they have to do, whereas in big teams it can be harder to coordinate everyone and optimize productivity.
Tip #4 : Developers? Work hard on making an MVP that you can demo on the final day. Hunt down the bugs and make it look functional.
Designers? Make people love the product with an awesome design. Also help the marketers with the design of the pitch deck: readability & simplicity.
Marketers? Remember to validate your market: show customer validation, phone a few friends, run some polls on Facebook. Also very important: THE REVENUE MODEL. Think about something that will actually work. The jury likes projects that are credible. This is not only about having a cool idea; this is about building a startup and a business that works :)
Team leaders? This is all about product management (which I love to do). Check in every 2 hours on how your teammates are doing, whether they need help, how long their tasks will take, etc. Prioritize the tasks in the team: what needs to be done first? Second? Make everybody feel good and try to share the same vision of the project.
_________________________________________________________________
I hope you liked this article and wish you a great Startup Weekend!
Also remember: the next Startup Weekend : AI will happen in Paris on 29/09/17.
_________________________________________________________________
More info about me?
Charles Loumeau, 23-year-old Master's business graduate, looking for a job in France, India or the USA 🎓
😁 Tech entrepreneur / Product manager :
• World Tech Scene: Online media & tech community worldwide.
Food Hunter: Smartphone app to find places to eat in Paris late at night, between midnight and 6 am (MVP available).
Gaya: AI smartphone app, translates text, words, sentences into images!
🚀Digital Marketer + Designer:
North of 41, Dynamite Network, Conflict Resolution Place, US Canada Forum (Toronto, Canada)
Hikal Ltd (Bangalore, India)
My own projects (WTS, FH, Gaya, etc).
Bonus: I’m also a street artist! Check out my website
Feel free to reach out!
Mail: charlesloumeau@gmail.com
Twitter: @chazloumeau
Linkedin: /in/charlesloumeau
|
My experience at Startup Weekend : Artificial Intelligence (+tips)
| 9
|
my-experience-at-startup-weekend-artificial-intelligence-tips-1e8b685b85e2
|
2018-06-08
|
2018-06-08 11:19:41
|
https://medium.com/s/story/my-experience-at-startup-weekend-artificial-intelligence-tips-1e8b685b85e2
| false
| 2,326
|
Global Startup Weekend in Artificial Intelligence (GSWAI) connects entrepreneurs, organizing teams, and startup communities around the world.
| null |
GlobalSWAI
| null |
Global SWAI
|
globalSWAI@startupweekend.org
|
global-swai
|
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,AI,INTELLIGENCE ARTIFICIELLE
|
GlobalSWAI
|
Startup
|
startup
|
Startup
| 331,914
|
Charles Loumeau
|
Winner 2016 Startup Weekend : AI (@GlobalSWAI) / Founder @WorldTechscene / Volunteer @HackerNest / #Startup Addict / #Tech Passionate / Street Artist
|
7c1d37824b27
|
CharlesLoumeau
| 3
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
import pandas as pd
from sklearn.preprocessing import LabelEncoder
# Reading the data set
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values  # Independent variables
Y = dataset.iloc[:, 3].values    # Dependent variable
print(X)
print(Y)
# Creating LabelEncoder objects
labelEncoder_x = LabelEncoder()
labelEncoder_y = LabelEncoder()
# Transforming the Country column
X[:, 0] = labelEncoder_x.fit_transform(X[:, 0])
# Transforming the Purchased column
Y = labelEncoder_y.fit_transform(Y)
print(X)
print(Y)
Before encoding
=========================================================
X =
[
[France 44 72000]
[Spain 37 48000]
[Germany 40 45000]
[France 35 58000]
[Spain 43 52000]
]
Y = ['No' 'Yes' 'Yes' 'Yes' 'No']
After encoding
=========================================================
X =
[
[0 44 72000]
[2 37 48000]
[1 40 45000]
[0 35 58000]
[2 43 52000]
]
Y = [0 1 1 1 0]
Country Age Salary
===================
France 44 72000
Spain 37 48000
Germany 40 45000
France Spain Germany Age Salary
=================================
1 0 0 44 72000
0 1 0 37 48000
0 0 1 40 45000
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
# Importing the data set
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values  # Independent variables
Y = dataset.iloc[:, 3].values    # Dependent variable
# Label encoding
labelEncoder_x = LabelEncoder()
labelEncoder_y = LabelEncoder()
# In our data the first column holds categorical data
X[:, 0] = labelEncoder_x.fit_transform(X[:, 0])
Y = labelEncoder_y.fit_transform(Y)
# One-hot encoding
# categorical_features=[0] marks the first column as categorical
oneHotEncoder = OneHotEncoder(categorical_features=[0])
X = oneHotEncoder.fit_transform(X).toarray()
print(X)
print(Y)
X (before one hot encoder):
[France 44 72000]
[Spain 37 48000]
[Germany 40 45000]
[France 35 58000]
[Spain 43 52000]
X (After one hot encoding):
[1 0 0 44 72000]
[0 0 1 37 48000]
[0 1 0 40 45000]
[1 0 0 35 58000]
[0 0 1 43 52000]
Note : In above example
France replaced by : 1 0 0
Spain replaced by : 0 0 1
Germany replaced by : 0 1 0
| 14
| null |
2018-08-14
|
2018-08-14 09:46:53
|
2018-08-14
|
2018-08-14 13:45:49
| 1
| false
|
en
|
2018-08-15
|
2018-08-15 06:46:42
| 3
|
1e8c39731933
| 3.218868
| 3
| 1
| 0
|
In many practical Data Science activities, the data set contains categorical variables.These variables are typically stored as text values…
| 2
|
Encoding categorical data
In many practical data science activities, the data set contains categorical variables. These variables are typically stored as text values which represent various traits.
Examples :
color (“Red”, “Yellow”, “Blue”), size (“Small”, “Medium”, “Large”).
Many machine learning algorithms can support categorical values without further manipulation, but there are many more that cannot. We need to convert those categorical values into numeric values.
Label encoding and one-hot encoding are two popular techniques for encoding categorical data.
Label Encoding
Label encoding is simply converting each value in a column to a number.
For example, if we have colors like ("Red", "Yellow", "Blue"), in label encoding we give a numeric value to each color, such as
Red - 0
Yellow - 1
Blue - 2
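As a minimal sketch of this idea (my own illustration, separate from the Data.csv example below), LabelEncoder produces exactly this kind of mapping. Note that it assigns codes in alphabetical order, so the exact numbers may differ from the hand-made mapping above:
from sklearn.preprocessing import LabelEncoder
colors = ['Red', 'Yellow', 'Blue', 'Red']
encoder = LabelEncoder()
print(encoder.fit_transform(colors))  # [1 2 0 1] (classes are sorted alphabetically)
print(encoder.classes_)               # ['Blue' 'Red' 'Yellow']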
Label Encoding In Python:
We use the LabelEncoder class from sklearn.preprocessing for this task.
Let's say we have a CSV file (Data.csv) with the following data.
In the above data, the Country and Purchased columns hold categorical data. The Python code shown converts those columns into numeric data using label encoding.
If we observe X and Y before and after encoding, we can see how the data has been transformed.
In the above example France is replaced by 0, Spain by 2, and Germany by 1. In the same way, Yes is replaced by 1 and No by 0.
One Hot Encoding:
One-hot encoding is a process by which categorical variables are converted into a form that can be given to ML algorithms to help them do a better job in prediction.
It is a representation of categorical variables as binary vectors.
This first requires that the categorical values be mapped to integer values. Then, each integer value is represented as a binary vector that is all zeros except at the index of the integer, which is marked with a 1.
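As a quick hand-rolled illustration of that rule (my own sketch, independent of scikit-learn):
import numpy as np
n_categories = 3
for value in [0, 1, 2]:
    vector = np.zeros(n_categories, dtype=int)
    vector[value] = 1  # all zeros except at the index of the integer
    print(value, '->', vector)
# 0 -> [1 0 0]
# 1 -> [0 1 0]
# 2 -> [0 0 1]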
For example, let's say we have
After one-hot encoding, the above data is encoded to
In the table above, if a row has a 1 in a column, then the country associated with that row is the country that column represents.
If we look at the first row, we have a 1 in the first column; the value associated with that column is France, so France is the country in the first row.
One Hot Encoding In Python:
We use the OneHotEncoder class from sklearn.preprocessing for this task.
One-hot encoding first requires that the categorical values be mapped to integer values. As discussed before, we use LabelEncoder to encode the categorical variables as numeric variables.
OneHotEncoder takes an important parameter, categorical_features, which accepts an array of indexes, where each index marks a categorical data column.
Example: If we have
oneHotEncoder = OneHotEncoder(categorical_features=[0, 2])
The oneHotEncoder object above treats the first and third columns as categorical variables.
If we observe the X and Y matrices after one-hot encoding:
We do not need to one-hot encode matrix Y, because it holds only two categories, and those are already encoded as 0/1 by label encoding.
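A side note that goes beyond the original post: the categorical_features parameter was deprecated in scikit-learn 0.20 and removed in 0.22. In newer versions the same result is obtained with ColumnTransformer, and OneHotEncoder now accepts string columns directly, so the LabelEncoder step is no longer needed. A minimal sketch, assuming the same Data.csv layout:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
# One-hot encode column 0 (Country); pass the remaining columns through untouched
ct = ColumnTransformer([('country', OneHotEncoder(), [0])], remainder='passthrough')
X = ct.fit_transform(X)
print(X)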
Thank you for reading, enjoy machine learning :)
If you are interested in learning Java (tutorials are in the Telugu language), please subscribe to my channel: https://www.youtube.com/watch?v=l5v9ZbjsRCo.
|
Encoding categorical data
| 8
|
encoding-categorical-data-1e8c39731933
|
2018-08-15
|
2018-08-15 06:46:42
|
https://medium.com/s/story/encoding-categorical-data-1e8c39731933
| false
| 800
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
srinivasarao aleti
| null |
7499ac7cbcb
|
srinivas.aleti03
| 18
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-31
|
2018-08-31 10:10:26
|
2018-08-31
|
2018-08-31 10:27:22
| 2
| false
|
en
|
2018-09-02
|
2018-09-02 23:14:33
| 1
|
1e8e8dd655f3
| 4.202201
| 0
| 0
| 0
|
Technology developers, FinTech entrepreneurs and innovators around the World are super excited about Blockchain technology, and have been…
| 4
|
A Common Future with Blockchain Technology
Technology developers, FinTech entrepreneurs and innovators around the World are super excited about Blockchain technology, and have been for a number of years now, and they should be! For a long time coming, this technology known as the Blockchain has almost human like desires. There I’ve said it, human like desires!
I'm quite often referred to as the 'Spiritual Scientist'. You see, one side of my brain is completely charged with creativity: I write poems and short stories, I meditate, and I have a background in graphic design. The other side, however, thinks a bit more rationally and logically, and over the years my programming team has made me aware of how code is bound to a set of mathematical rules. I get it, I used to design websites using HTML tables! Shhhh, don't tell anyone! In my defence, this was over 15 years ago and before the introduction of Div structures and CSS. Therefore I believe that…
Consumers do not care about Blockchain
It’s important to recognise that although Blockchain is consumer focused, in many cases and in the very near future, consumers will not care and possibly not even be aware that it is Blockchain technology that is powering their website or mobile application.
Before heading towards the future of Blockchain, it's important to quickly reflect on consumer behaviour with the Internet over the last 20 years. Let's do that, real quick.
How many consumers who browse and use the Internet daily know that the URL they enter into a web browser is simply there to disguise a series of numbers known as an IP address? How many of them know that this IP is the address or location of a server holding a series of files which, when requested, open and display a web page? Not many, I'm guessing. Do consumers even care? Of course they do not. The same thing is inevitable with Blockchain.
Now, just because consumers may not care, that does not mean the Blockchain community shouldn’t. We’re all super-charged with this revolutionary find, and I certainly am one of them. Then there is…
Early UX
Coming back to the mind of the consumer, let's backtrack a little more. I remember browsing the Internet on my iMac, way back in 2000. There was this bright-coloured, transparent, out-of-this-World, alien-like machine on my desk at home, and I would open up Internet Explorer. When accessing a website (and there weren't many back then), I had to enter the full web address, including the prefix 'www.'. This prefix is pretty much redundant now and is hardly ever used or seen in brands' marketing campaigns. Did the consumer ever notice this change? Have they ever given it a second thought? The answer is no.
There are so many other examples of protocols and interactions with the Internet that have disappeared. I gave the URL example above; who even types a URL nowadays? Today, it's quicker to enter a few words into Google, click away, click again, and you've found what you were looking for. Mobile apps are making traditional browsing rarer and rarer. However, I'm still optimistic about browser technology. It does need some looking at, and there is plenty of room for disruption in this space too.
The few examples above are exactly why I encourage technology developers, corporations and Government organisations in this space not to get too carried away with promoting the fact that they are adopting Blockchain technology. Just get on with it. Research and innovation are far more important than singing from the rooftops right now. This revolution is very new, so before we shout about it, let's harness its true potential and make it work.
That does not mean organisations or application developers should not share knowledge and learn from each other's findings. On the contrary. I just want to see any company or World-leading organisation with a passion for change hire the talent and seriously disrupt their industry, their internal processes, their personnel. They will be the ones who get my attention, and then there will truly be…
A new World order with Blockchain
As more and more corporations begin to learn about the Blockchain, they will be forced to harness its true nature, which will ultimately lead to more ethical companies. The good thing is that most of the larger corporations and World organisations are already striving towards a higher ethical standard of practice.
Adopting Blockchain technology does not mean reduced profits for shareholders. So if there are any shareholders reading this, don't worry, you're completely safe. In fact, I believe it will encourage more profit, because Blockchain will fill the gaps where there is friction in current business processes, and we know that where there is friction, there is cost. Blockchain will sit somewhere between your existing technological infrastructure and what the end user or consumer experiences.
A Common Future
I like to think that we’re all heading towards a common future, not just technologically but emotionally and intellectually. More and more people today are adopting a vegetarian or vegan diet. People are using organic and natural products. Alternative medicine and ancient healing techniques are fast becoming the norm. This is a conscious shift for mankind.
We are now entering the realm of a superior technology, and the human-like attributes and principles of the Blockchain will remain a constant in the future. It will certainly evolve and most likely take on a new identity; however, we will one day look back and be reminded of a set of immutable rules and commands that resemble mankind's desire to live in a fairer World.
Blockchain technology is likely to be the first step towards creating a perfect World. Nothing happens overnight, but for the first time in a long time, there's something to get excited about.
Mann Matharu
Certified Blockchain Expert
Founder & CEO at StarkTechnologies.co.uk
|
A Common Future with Blockchain Technology
| 0
|
a-common-future-with-blockchain-technology-1e8e8dd655f3
|
2018-09-02
|
2018-09-02 23:14:33
|
https://medium.com/s/story/a-common-future-with-blockchain-technology-1e8e8dd655f3
| false
| 1,012
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Mann Matharu
| null |
6aff302650ed
|
mannmatharu
| 1
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-22
|
2018-06-22 10:45:56
|
2018-06-22
|
2018-06-22 11:35:28
| 3
| false
|
en
|
2018-06-22
|
2018-06-22 11:35:28
| 4
|
1e8f6e2ce0af
| 5.134906
| 0
| 0
| 0
|
Artificial Intelligence (AI) and Machine Learning (ML) are two words casually thrown around in everyday conversations, be it at offices…
| 2
|
Machine Learning vs Deep Learning: Here’s what you must know!
Artificial Intelligence (AI) and Machine Learning (ML) are two words casually thrown around in everyday conversations, be it at offices, institutes or technology meetups. Artificial Intelligence is said to be the future enabled by Machine Learning.
Now, Artificial Intelligence is defined as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." Put simply, it means making machines smart enough to replicate human tasks, and Machine Learning is the technique (using available data) that makes this possible.
Researchers have been experimenting with frameworks to build algorithms which teach machines to deal with data just like humans do. These algorithms lead to the formation of artificial neural networks that sample data to predict near-accurate outcomes. To assist in building these artificial neural networks, some companies have released open neural network libraries, such as Google's TensorFlow (released in November 2015), for building models that process and predict application-specific cases. TensorFlow, for instance, runs on GPUs and CPUs, across desktop, server and mobile computing platforms. Some other frameworks are Caffe, Deeplearning4j and Distributed Deep Learning. These frameworks support languages such as Python, C/C++ and Java.
It should be noted that artificial neural networks function much like a real brain connected via neurons: each neuron processes data, which is then passed on to the next neuron, and so on, and the network keeps changing and adapting accordingly. For dealing with more complex data, machine learning has to be derived from deeper networks known as deep neural networks.
In our previous blogposts, we've discussed Artificial Intelligence, Machine Learning and Deep Learning at length, and how these terms cannot be used interchangeably even though they sound similar. In this blogpost, we will discuss how Machine Learning differs from Deep Learning.
What factors differentiate Machine Learning from Deep Learning?
Machine Learning crunches data and tries to predict the desired outcome. The neural networks it uses are usually shallow, made of one input layer, one output layer, and barely a hidden layer in between. Machine learning can be broadly classified into two types, supervised and unsupervised: the former involves labelled data sets with specific inputs and outputs, while the latter uses data sets with no specific structure.
On the other hand, imagine that the data to be crunched is truly gigantic and the simulations are far more complex. This calls for deeper understanding, or learning, which is made possible using complex layers. Deep Learning networks tackle far more complex problems and include many node layers, which indicate their depth.
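To make the contrast concrete, here is a minimal Keras sketch (my own illustration with arbitrary layer sizes, not from the original post). The first model is the shallow kind of network described above; the second simply stacks more hidden layers, which is what makes it "deep":
from keras.models import Sequential
from keras.layers import Dense

# Shallow network: one input layer, barely a hidden layer, one output
shallow = Sequential([
    Dense(16, activation='relu', input_shape=(100,)),
    Dense(1, activation='sigmoid'),
])

# Deep network: the stacked hidden layers indicate its depth
deep = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(64, activation='relu'),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),
])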
In our previous blogpost, we learnt about the four architectures of Deep Learning. Let’s summarise them quickly:
Unsupervised Pre-trained Networks (UPNs)
Unlike traditional machine learning algorithms, deep learning networks can perform automatic feature extraction without human intervention. "Unsupervised" means the network figures out what is right or wrong on its own, without being told. "Pre-trained" means using a data set to train the neural network first, for example training pairs of layers as Restricted Boltzmann Machines; the network then uses the trained weights for supervised training. However, this method isn't efficient for complex image processing tasks, which brings Convolutions, or Convolutional Neural Networks (CNNs), to the forefront.
Convolutional Neural Networks (CNNs)
Convolutional Neural Networks use replicas of the same neuron, which means a neuron's behaviour can be learnt once and used in multiple places. This simplifies the process, especially during object or image recognition. Convolutional architectures assume that the inputs are images, which allows encoding certain properties into the architecture and reduces the number of parameters in the network.
Recurrent Neural Networks
Recurrent Neural Networks (RNNs) use sequential information and do not assume that all inputs and outputs are independent, as traditional neural networks do. Unlike feed-forward neural networks, RNNs can use their internal memory to process input sequences, relying on preceding computations and what has already been calculated. They are applicable to tasks such as speech recognition, handwriting recognition, and similar unsegmented tasks.
Recursive Neural Networks
A Recursive Neural Network is a generalisation of a Recurrent Neural Network, generated by applying a fixed, consistent set of weights repetitively, or recursively, over a structure. Recursive Neural Networks take the form of a tree, while recurrent ones form a chain. Recursive Neural Nets have been used in Natural Language Processing (NLP) for tasks such as sentiment analysis.
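As a rough sketch of what two of these architectures look like in code (my own minimal Keras examples with arbitrary shapes, not from the original post):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, SimpleRNN

# A tiny CNN: each convolution filter is the same neuron replicated across the image
cnn = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation='softmax'),
])

# A tiny RNN: the recurrent layer carries internal memory across the sequence
rnn = Sequential([
    SimpleRNN(32, input_shape=(50, 8)),  # 50 timesteps, 8 features each
    Dense(1, activation='sigmoid'),
])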
In a nutshell, Deep Learning is an advanced form of Machine Learning. Deep Learning networks can deal with unlabelled data, on which they are trained; every node in these deep layers learns a set of features automatically. The network then aims to reconstruct the input, minimizing the guesswork at each passing node. It doesn't need hand-specified features and is in fact smart enough to draw correlations from the feature set to get optimal results. Such networks are capable of learning from gigantic data sets with numerous parameters, and of forming structures from unlabelled or unstructured data.
Now, let's take a look at the key differences:
The future with Machine Learning and Deep Learning
Moving further, let's take a look at the use cases of both Machine Learning and Deep Learning. One should note that Machine Learning use cases are already widespread, while Deep Learning use cases are still in the development stage.
While Machine Learning plays a huge role in Artificial Intelligence, it is the possibilities introduced by Deep Learning that are changing the world as we know it. These technologies will see a future in many industries, some of which are:
Customer service
Machine Learning is being implemented to understand and answer customer queries as accurately and as quickly as possible. For instance, it is now common to find a chatbot on product websites, trained to answer all customer queries related to the product and after-sales services. Deep Learning takes it a step further by gauging a customer's mood, interests and emotions in real time and serving dynamic content for a more refined customer service.
Automotive industry
Autonomous cars have been hitting the headlines on and off. From Google to Uber, everyone is trying their hand at it. Machine Learning and Deep Learning sit comfortably at its core, but what's even more interesting is autonomous customer care making CSRs more efficient with these new technologies. Digital CSRs learn and offer nearly accurate information in a shorter span of time.
Speech recognition
Machine Learning plays a huge role in speech recognition by learning from users over time. Deep Learning can go beyond Machine Learning here by introducing the ability to classify audio and recognise speakers, among other things.
Deep Learning has all the benefits of Machine Learning and is considered the major driver towards Artificial Intelligence. Startups, MNCs, researchers and government bodies have realised the potential of AI and have begun tapping into it to make our lives easier.
Artificial Intelligence and Big Data are believed to be the trends to watch out for in the future. Today, many courses are available online that offer real-time, comprehensive training in these newer, emerging technologies.
SOURCE: WWW.LEARNTEK.ORG
|
Machine Learning vs Deep Learning: Here’s what you must know!
| 0
|
machine-learning-vs-deep-learning-heres-what-you-must-know-1e8f6e2ce0af
|
2018-06-22
|
2018-06-22 11:35:28
|
https://medium.com/s/story/machine-learning-vs-deep-learning-heres-what-you-must-know-1e8f6e2ce0af
| false
| 1,215
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
chandu
| null |
ebe7b71b4596
|
chandu_22532
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-28
|
2018-06-28 11:28:22
|
2018-06-28
|
2018-06-28 12:47:12
| 4
| false
|
en
|
2018-06-29
|
2018-06-29 08:58:45
| 4
|
1e90a29234c1
| 2.711321
| 4
| 0
| 0
|
By: Matej Ozim ı Motogram CEO & Co-founder
| 5
|
Motogram and the story behind the car scanner technology — AutoScan
By: Matej Ozim ı Motogram CEO & Co-founder
Matej Ozim ı Motogram CEO & Co-founder
I remember it like it was yesterday. It was a gloomy, gray day. While driving to Germany I was listening to a local radio station and preparing for a meeting with business partners. I heard the weather forecast on the news, and it did not promise anything good: a hailstorm was coming.
The powerful hailstorm actually took place while I was in the meeting and, of course, damaged my car. That's when the ordeal started. I am always traveling around Europe, so I wanted to repair the damage in the shortest possible time. In fact, I was about to take another business trip in a few days, which is why I immediately called my insurance agent to help me with the damage evaluation process and repairs.
It all took a really long time. Needless to say, I had to reschedule the business trip I had planned. I couldn’t think about anything other than how to simplify and speed up the damage assessment process.
That’s how AutoScan was born!
AutoScan is a mobile or fixed unit consisting of a tunnel, fully illuminated with neon lamps and equipped with five cameras and motion sensors. When a car enters the tunnel, AutoScan detects it and starts the scanning/recording process. The next step is an advanced computer diagnostics algorithm that automatically detects any damage on the car's body: damage to the surface results in deviations from the intended sample. In the first step, the system detects possibly damaged areas, then analyzes them with advanced artificial intelligence techniques. The extent of the deformed sample around the damage center is calculated in points and converted into pixels, so the extent of the damage is also accurately determined. The time needed for the complete diagnosis of one car is less than a minute.
AutoScan — Motogram
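To give a flavour of the general principle, and emphatically not Motogram's actual algorithm, "deviation from the intended sample" can be sketched as a comparison against a reference image. Everything below (the file names, the threshold value) is hypothetical:
import cv2

reference = cv2.imread('panel_reference.png', cv2.IMREAD_GRAYSCALE)  # undamaged panel (hypothetical file)
scan = cv2.imread('panel_scan.png', cv2.IMREAD_GRAYSCALE)            # freshly scanned panel

# Deviations from the intended sample show up as large absolute differences
diff = cv2.absdiff(reference, scan)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # 40 is an arbitrary threshold

# The extent of the damage, measured in pixels
print('damaged area (pixels):', cv2.countNonZero(mask))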
Can you imagine how much faster it would all be, and the financial, material and human resources that would be saved with our technology?
If damage occurred, you would just drive your car through the scanner the same day. Within less than a minute, the system would produce a complete damage report and you could drive your car to the body shop immediately.
***
Our mission is to create a global network of AutoScans that monitors the condition of a car from its production to its last customer. Along with body shots and photos, other key data about the car is entered by the user: kilometers, VIN number, and the dates of services and repairs.
AutoScan as a tunnel or dApp → Car Fingerprint
The next stage of development of the already-existing revolutionary car scanner technology AutoScan is a digital application (dApp). This way we will achieve faster expansion, scalability and accessibility for all stakeholders in the automotive ecosystem, with the help of blockchain technology. All this will lead to time and cost efficiencies.
Please check out our presentation video → MOTOGRAM.
After that, be sure to join our active, thriving and ever-growing TELEGRAM community HERE and be the first to receive news blasts, special announcements and information about our project! Our amazing team and community are waiting for you!
Best regards.
Matej Ozim l.r.
|
Motogram and the story behind the car scanner technology — AutoScan
| 66
|
motogram-and-the-story-behind-the-car-scanner-technology-autoscan-1e90a29234c1
|
2018-06-29
|
2018-06-29 08:58:45
|
https://medium.com/s/story/motogram-and-the-story-behind-the-car-scanner-technology-autoscan-1e90a29234c1
| false
| 533
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Motogram
|
Motogram is the new platform for fully digitalized and decentralized car value assessments, based on the revolutionary Car scanner technology.
|
1baf12696404
|
motogram
| 9
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-17
|
2018-01-17 23:29:25
|
2018-01-18
|
2018-01-18 16:46:15
| 4
| false
|
en
|
2018-06-21
|
2018-06-21 04:31:04
| 15
|
1e9170652175
| 4.673585
| 9
| 0
| 0
|
By Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, Eric P. Xing
| 5
|
Illustration by Chenyu Wang
Automated ICD Coding Using Deep Learning
By Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, Eric P. Xing
If you haven’t already read the first post on our work in AI for healthcare, Predicting Discharge Medications at Admission Time using Deep Learning, go check it out! This post will take a look at another aspect of our healthcare-specific machine learning (ML) platform — how it helps hospitals assign the correct codes to each patient visit.
The International Classification of Diseases (ICD) is a healthcare classification system maintained by the World Health Organization. Diseases and health statuses are classified according to certain rules and uniquely identified by character codes. The ICD was created in 1893, when a French doctor named Jacques Bertillon introduced 179 categories of causes of death. It has been revised every ten years since then and has become an important standard for information exchange between hospitals. Its application has expanded to various facets of healthcare such as insurance payment, administration management, and research.
The efficiency of ICD coding has begun to receive more attention because of how crucial it is for making clinical and financial decisions. Hospitals with better coding quality see many benefits, including more accurate classification and retrieval of medical records, and better communication with other hospitals to jointly promote healthcare quality and facilitate research (e.g. knowledge graphs, disease-related models, intelligent diagnosis, etc.).
Typically, ICD coding is performed by a professional coder who follows strict guidelines and chooses the appropriate codes according to a doctor's diagnosis and the patient's electronic medical record (EMR, or health information system (HIS) in China). This coding process is complex and extremely error-prone, since doctors often use abbreviations in diagnoses, causing ambiguous and imprecise matching to ICD codes. Additionally, many diagnoses don't match an ICD code exactly: often, two closely linked diagnoses are encoded in a single combination ICD code, and in some cases a doctor may write one diagnosis for a disease that should correspond to multiple ICD codes. The coding process requires a comprehensive consideration of each patient's health condition. However, very few medical practitioners are capable of taking over the process, since they lack training in professional coding.
In order to solve this industry-wide problem with ICD coding, we propose a new attention-driven deep learning model that automatically translates doctors’ diagnoses into the correct corresponding ICD codes. We are designing different recurrent neural networks that allow the model to automatically distinguish the different types of ICD definitions and written diagnoses and accurately capture hidden semantic information. To address mismatched ICD codes and written diagnoses, our model also introduces a mechanism of attention that allocates different weights to each diagnostic description a doctor writes.
The overall architecture of our model is shown in the following figure. Our experimental data comes from MIMIC-III, a freely available database for scientific use containing nearly 60,000 inpatient records from 2001 to 2012 from the Beth Israel Deaconess Medical Center.
To train our model, we first extracted written diagnosis descriptions from discharge summaries and discarded the records that did not contain descriptions. This gave us nearly 11,000 valid records including nearly 60,000 diagnosis sentences.
From there, we used two independent neural networks to learn from two different kinds of text: written diagnoses and ICD code definitions. Each network included character-level and word-level recurrent neural networks to obtain hidden semantic information from the diagnostic texts. When checking each ICD code, each diagnosis sentence was allocated a different weight based on the hidden semantic representations of the ICD code and the diagnosis sentences. The features of the diagnosis sentences were then averaged with these weights and passed through a fully connected layer to get a confidence score. After regularization, our model predicted each ICD code, providing a score for the probability that it should be assigned. We chose the 50 most frequently occurring ICD codes as coding targets.
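The attention step can be pictured with a small NumPy sketch (my own simplification, not the authors' code): each diagnosis sentence vector is scored against the ICD code vector, the scores are softmaxed into weights, and the sentence features are averaged with those weights before the fully connected layer:
import numpy as np

d = 4                              # hidden size (toy value)
sentences = np.random.randn(3, d)  # hidden states of 3 diagnosis sentences
icd_code = np.random.randn(d)      # hidden state of one ICD code definition

scores = sentences @ icd_code                    # one relevance score per sentence
weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> attention weights
context = weights @ sentences                    # attention-weighted average of features

W, b = np.random.randn(d), 0.0                     # stand-in for the fully connected layer
confidence = 1 / (1 + np.exp(-(context @ W + b)))  # sigmoid -> score in (0, 1)
print(weights, confidence)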
By experimenting with real data from a hospital, our ICD encoder achieved an F1 score of 0.53 and an AUC of 0.90, significantly better than the coding model without attention mechanisms. The F1 score is the harmonic mean of precision and recall, widely used to evaluate the performance of a binary classifier on imbalanced data. The AUC-ROC score is calculated as the area under the ROC curve, which is drawn by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. Intuitively, the AUC-ROC score measures the probability that the model assigns a higher score to a positive instance than to a negative one; its lower bound is 0.5. The performance of our full model and ablation models is shown below in Table 2.
To verify the reliability of the model, we also analyzed its hidden semantic distributions and attention allocation. We found that the character-level long short-term memory (LSTM) word encoder can correct various typos and recognize the different morphologies appearing in written diagnosis descriptions by generating similar representations for them. In the attention distribution process, our model can also effectively distinguish the importance of each diagnosis description when checking different ICD codes, so the accuracy of the ICD codes is greatly improved.
Furthermore, our model outputs probabilities between 0 and 1, and we can adjust the thresholds according to specific requirements to trade off between accuracy and sensitivity. For example, we can choose a smaller threshold to make the model more sensitive and hand the model's ICD code output to professional coders for secondary screening. Typically, a coder needs to select the appropriate code from among tens of thousands of ICD codes, but after initial coding with our model, they can select from a small range of probable codes.
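For readers who want to reproduce these metrics and the threshold trade-off, here is a minimal scikit-learn sketch (my own illustration with toy numbers, not the paper's data):
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # gold code assignments
y_prob = np.array([0.9, 0.4, 0.6, 0.3, 0.2, 0.1, 0.7, 0.5])  # model probabilities

print('AUC:', roc_auc_score(y_true, y_prob))  # threshold-free ranking quality

# A smaller threshold makes the model more sensitive (more codes flagged for review)
for threshold in (0.5, 0.3):
    y_pred = (y_prob >= threshold).astype(int)
    print('threshold', threshold, 'F1:', f1_score(y_true, y_pred))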
Although the diagnosis descriptions we could extract from the discharge records are incomplete due to the noisy data format, our model still shows very high coding precision. This demonstrates the feasibility of automatic ICD coding based on physicians' diagnostic texts. With more well-formed medical records, we firmly believe the model's efficacy can be improved further.
If you’re interested in more details and can’t wait for our next post, take a look at our paper: https://arxiv.org/abs/1711.04075
|
Automated ICD Coding Using Deep Learning
| 108
|
automated-icd-coding-using-deep-learning-1e9170652175
|
2018-06-21
|
2018-06-21 04:31:05
|
https://medium.com/s/story/automated-icd-coding-using-deep-learning-1e9170652175
| false
| 1,053
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Petuum, Inc.
|
One Machine Learning Platform to Serve Many Industries: Petuum, Inc. is a startup building a revolutionary AI & ML solution development platform
|
c0fa6af5e77f
|
Petuum
| 365
| 24
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-28
|
2018-02-28 12:27:48
|
2018-02-28
|
2018-02-28 14:53:13
| 1
| false
|
en
|
2018-03-01
|
2018-03-01 03:49:22
| 16
|
1e97fa9da67c
| 1.732075
| 42
| 0
| 0
|
Through my several hours of just being on the internet, I have compiled a list of resources which I wish I had, that can help YOU in you’re…
| 5
|
DON’T know these Machine Learning Resources? You’re missing out!
Through my many hours of just being on the internet, I have compiled a list of resources that I wish I had had, and that can help YOU in your Machine Learning career. These resources can take you far beyond the beginner stage.
This article is divided into 2 sections, Math and Machine Learning.
Math (For ML Beginners)
Machine Learning mostly requires a fundamental understanding of linear algebra, statistics and probability. While you can learn how to use all the advanced libraries to accomplish your ML tasks, once something breaks you won't be able to fix it. Even worse, you won't be able to understand new studies in the field, since understanding them requires a somewhat deep grasp of mathematics. You also won't be able to conduct your own studies or play around with mathematical ML concepts.
Here is a list of links, which I recommend in descending order of importance:
Khan Academy:
Linear Algebra
Statistics and Probability
2. YouTube:
Linear Algebra(REALLY GOOD): 3Blue1Brown
Statistics(as of writing this, the series is on going): CrashCourse Statistics
Machine Learning (For Everyone)
This being a very huge topic, it is somewhat hard to find good resources which explain the content properly. Hopefully the following help.
NOTE: I’m not sure why, but some YouTube links do not work on the mobile app.
YouTube:
3Blue1Brown
Luis Serrano
Hugo Larochelle
Siraj Raval (Great for advanced concepts)
giant_neural_network (Mostly for beginners)
Jabrils (His Neural Network explanation is the best in my opinion)
sentdex(Machine Learning) (Does all kinds of videos)
Brandon Rohrer (Great explanations)
2. Coursera:
Andrew Ng’s Machine Learning Course (REALLY GOOD)
Andrew Ng’s Deep Learning Specializaton
3. Podcast:
OCDevel’s Machine Learning Podcast (Suggested by Reddit user: java568)
4. Others:
Google’s Machine Learning Crash Course (Uses Tensorflow but will take you very far)
Although this might not seem like a lot, trust me: once you start getting into the links and really utilizing the content to the fullest, you will realize how much material there is in just these links.
I hope these resources provide a good and wholesome learning experience. I sure wish I had had them when I first started out. If you'd like to stay connected with informative posts like these, you can follow me, as I plan to make more posts to help out ML beginners.
NOTE: This article is meant for beginners.
|
DON’T know these Machine Learning Resources? You’re missing out!
| 192
|
dont-know-these-machine-learning-resources-you-re-missing-out-1e97fa9da67c
|
2018-06-11
|
2018-06-11 09:01:52
|
https://medium.com/s/story/dont-know-these-machine-learning-resources-you-re-missing-out-1e97fa9da67c
| false
| 406
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Suryansh S.
|
Bad at sarcasm and above average at programming. Twitter: @SuryanshTweets, Business?: business.suryansh@gmail.com
|
bf42a9b53d2b
|
SuryanshWrites
| 299
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-06
|
2018-04-06 12:54:18
|
2018-04-10
|
2018-04-10 06:44:21
| 2
| false
|
en
|
2018-04-10
|
2018-04-10 06:44:21
| 1
|
1e99b2770580
| 4.594654
| 9
| 0
| 0
|
Why do customers return items? This question has been on minds of fashion e-commerce executives for a long time. The answer might surprise…
| 3
|
The true nature of returns in fashion e-commerce
Why do customers return items? This question has been on the minds of fashion e-commerce executives for a long time. The answer might surprise you: nearly 70–75% of all returns in fashion e-commerce are unnecessary and can be prevented. In this post we shed light on the common misconceptions and the motivations shoppers have for buying and returning.
What return forms are not telling
Ask any e-commerce manager about the most common return reasons in their shop, and they will instantly refer to the latest numbers from return forms.
Most online shops rely (often exclusively) on the return forms filled in by shoppers to determine why an item is returned and how to deal with the return request. As logical as this seems, how reliable are shoppers' inputs, really?
A size-related reason is usually at the top of return forms
Size is usually one of the leading responses, with an average of 65–70% of shoppers stating it as the reason for their return. Finding the right size is a well-known, long-standing problem in the industry, so it's no surprise that people return due to the wrong size and fit.
However, shoppers sometimes select this reason because it won't require any additional explanation. Size is typically placed at the top of the form, and a shopper will just tick it off to save time and not think more of it.
Return forms are used on the assumption that shoppers will be truthful and pick the most suitable reason for a return. In reality, shoppers often don't bother finding the right reason and simply pick an option that won't raise any questions. This is especially relevant where a shopper intends to exploit return policies.
When a return is not what it seems to be
Why do shoppers exploit return policies? The answer derives from the shopper's motivation for making the purchase in the first place. In an ideal world, you buy what you want and, unless there is a problem with an item, you keep it and don't need to think about returns. But e-commerce is messier and more complicated than that, which introduces other reasons for purchasing. For example:
1. Buy-to-try
The primary thing that separates the shopping experience in brick-and-mortar stores from online shops is the possibility of trying on fashion before buying it. Shoppers can test items out to find the perfect size and fit, or put together a new look. In online shops this possibility is fully or partially limited, so some shoppers will intentionally purchase more items than they intend to keep and replicate the fitting-room experience at home.
Common misconception: “Yes, shoppers will end up returning some items but at least they will keep some and will definitely be happy with their choice”.
Reality: It is a burden on the shop's inventory and increases shipping and return costs. The shop has failed to educate shoppers and provide enough information to help them make the right decision.
2. Buy-to-rent
This is another example of shopping habits from brick-and-mortar stores being brought into e-commerce. "Buy-to-rent" is a behaviour where a shopper wears an item for a special occasion, label still intact, and returns it within the period allowed by the return policy.
Common misconception: “A full refund will only be allowed if the item is in good condition. So if the labels are not intact or an item has visibly been worn, there will be no refund. It’s a good protection for shops”.
Reality: This introduces additional costs to check, sort and clean items, as well as shipping and return costs. This behaviour rarely translates into loyalty: shoppers simply use the policies to their advantage with little to no regard for the shop.
3. Resellers
This behaviour is typically seen on flash-sale sites or during sales campaigns with deep discounts. Shoppers buy a lot of items at an attractive price, try to resell them, and return everything they didn't manage to sell back to the shop.
Common misconception: “This will contribute positively to our revenue and GMV”.
Reality: Returned items are usually hard to sell (the sales campaign will be over), so it often turns out to be a loss for shops.
4. Instagrammers
With the rise of social media and influencer marketing, some shoppers purchase fashion for the sake of social media likes and return it to the shop after taking photos.
Common misconception: “This can be an interesting social media campaign”.
Reality: As with the others, there are additional costs to check, sort and clean items, plus shipping and return costs. Similarly to the "buy-to-rent" scenario, these returns rarely turn into a tangible business opportunity for shops (especially multi-brand shops or marketplaces).
What behavioural returns are and why you should care
Some returns are absolutely necessary and serve to mitigate the shopper's risk. What if the wrong item is shipped? What if the item looks different from the photos? What if the item is broken or damaged? All these things occur regularly, and a shopper must have the opportunity to return.
However, returns often happen due to a shopper's mistake (for example, buying the wrong size) or an intention to exploit return policies. These are called behavioural returns, as they result from a certain type of user behaviour during shopping (whether a mistake or an intention to return). Behavioural returns usually account for 70–75% of all returns, so by addressing this issue shops can significantly reduce their returns.
As shown above, user behaviour varies, but all these types of returns have something in common: they are unnecessary and can be prevented.
For a long time, shops focused only on making the return experience as pleasant as possible; not a lot was done to prevent returns from happening in the first place.
By understanding the true motivation and nature of a return, shops can take measures to reduce these unnecessary behavioural returns.
At Easysize we use a machine-learning algorithm to figure out why your shoppers make purchases and whether they will return them. We analyse how shoppers behave on a website, what shopping and return patterns they have shown historically, and how the same brands & items perform across shops. We continuously train our algorithm to recognise high-risk behaviours, whether it's buying the wrong size, instagrammers, buying-to-rent, etc.
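As an illustration of the general approach (a hypothetical sketch, with made-up features and data, not Easysize's actual model):
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical behavioural features per order:
# [items_in_cart, same_item_in_two_sizes, past_return_rate, bought_on_sale]
X = np.array([
    [1, 0, 0.0, 0],
    [6, 1, 0.8, 1],
    [2, 0, 0.1, 0],
    [5, 1, 0.7, 1],
])
y = np.array([0, 1, 0, 1])  # 1 = order was (partly) returned

model = GradientBoostingClassifier().fit(X, y)
print(model.predict_proba([[4, 1, 0.6, 1]])[:, 1])  # estimated return risk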
To know more about the nature of returns and how Easysize works visit our website www.easysize.me.
|
The true nature of returns in fashion e-commerce
| 95
|
the-true-nature-of-returns-in-fashion-e-commerce-1e99b2770580
|
2018-04-13
|
2018-04-13 09:17:20
|
https://medium.com/s/story/the-true-nature-of-returns-in-fashion-e-commerce-1e99b2770580
| false
| 1,116
| null | null | null | null | null | null | null | null | null |
Ecommerce
|
ecommerce
|
Ecommerce
| 46,740
|
Easysize
|
Preventing returns in fashion e-commerce. www.easysize.me
|
bfef97ae1583
|
EasySize
| 391
| 692
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-22
|
2018-06-22 05:25:43
|
2018-06-22
|
2018-06-22 05:29:01
| 0
| false
|
en
|
2018-06-22
|
2018-06-22 05:29:01
| 2
|
1e9a7b5e2526
| 2.724528
| 0
| 0
| 0
|
According to a research conducted by Gartner, by 2020, almost 85% of our interaction with businesses will be without another human…
| 4
|
Top 5 Tips To Develop A Successful Chatbot For Your Business
According to research conducted by Gartner, by 2020 almost 85% of our interactions with businesses will happen without another human involved; instead, we will be extensively using self-service options and chatbots. In addition, as per an Oracle survey, 80% of entrepreneurs said they currently use or plan to use chatbots by 2020 to enhance the reach of their business.
Chatbots are automated chat programs that engage with website visitors, facilitating conversation by answering questions, providing information, recommending products, and helping customers along their purchasing journey. One of the most striking features of chatbots is that they learn from past interactions and become smarter over time. But developing a comprehensive bot isn't a cakewalk. Creating a product prototype of your chatbot is mandatory in order to test its usability in reality. Here are five tips to help you develop a successful chatbot:
1. Track Down Your Target Audience: First of all, you need to figure out who your target audience is and which chat platform they use most often. As a result of this evaluation, you can easily find a chatbot program that integrates with your platform of choice. Tracking down your target audience will give you detailed insight into their specific requirements, and you can create a bot accordingly. Such a chatbot can satisfy and please your target customers and retain them in the long term.
2. Identify Which Bot You Should Create: Next, you will have to decide which role you want your chatbot to play. It is crucial to determine its primary function: will it generate leads, facilitate transactions, provide information, or engage and entertain? You should not develop a bot without clearly determining its purpose. Think carefully, figure out which area would benefit your customers the most, and develop the bot accordingly.
3. Look For A Third-Party Chatbot Program: You don’t need to build your own Chatbot from scratch. There are several third-party Chatbot programs that let you easily create bots which can be integrated with WhatsApp, Viber, WeChat or Facebook Messenger. Let’s take a look at a few Chatbot programs which are compatible with Facebook Messenger:
1). Flow XO: A fully flexible chatbot solution that lets users create fully automated bots.
2). ChattyPeople: A popular chatbot platform you can use to create a bot, or to learn to create your own.
3). Chatfuel: You can create a chatbot in seven minutes without coding.
4. Create Engaging & Brand-Relevant Copy: Successful chatbots promise a carefully built messaging journey, and you have to make sure you write engaging, brand-relevant copy to keep your audience involved. It is a good idea to incorporate human-like features into your bot; for example, avoid responses that sound robotic or canned. One of the biggest challenges when developing a chatbot is keeping the dialogue as natural as possible. You provide your customers with a positive experience when they find themselves in a genuine conversation with your bot.
5. Consider Seeking Help From Experts: It is easy to build your Chatbot, but developing one that works for you is a daunting task. Powerful back-end technology, creativity, and marketing prowess help in creating a successful bot. In such a scenario, it is recommended to get expert advice from seasoned ai consultants who are having a wealth of experience in creating exceptional Chatbots. They can help you in building a comprehensive Chatbot which lets you connect to and retain a larger customer base.
In this competition driven world, it is important for you to provide your customers with instant solutions to their problems. Developing a Chatbot which is carefully constructed by keeping your customers’ specific needs in mind, can take your acquisition and conversion strategies to the next level. It will enhance your customer base and you can easily gain new customers. Get in touch with a renowned AI development company in order to harness the benefits of AI technology in order to develop an influential Chatbot for your business.
|
Top 5 Tips To Develop A Successful Chatbot For Your Business
| 0
|
top-5-tips-to-develop-a-successful-chatbot-for-your-business-1e9a7b5e2526
|
2018-06-22
|
2018-06-22 05:29:01
|
https://medium.com/s/story/top-5-tips-to-develop-a-successful-chatbot-for-your-business-1e9a7b5e2526
| false
| 722
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Chris Evans
| null |
556967f84920
|
avirajarkenea
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
#Create an environment (note: conda requires the --name flag)
$ conda create --name your_environment
#Activate the environment
$ source activate your_environment
#Update and upgrade your Linux packages
$ sudo apt-get update
$ sudo apt-get upgrade
#Install all dependencies
$ sudo apt-get install python3-pip python3-dev
$ sudo apt-get install python3-numpy python3-scipy python3-matplotlib python-yaml
$ sudo apt-get install graphviz
#Install TensorFlow
$ conda install tensorflow
#Install Keras
$ conda install keras
#Run Python to test your installation
$ python
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> import keras as k
>>> from keras.models import Sequential
>>> from keras.layers import Dense
>>> model = Sequential()
>>> model.add(Dense(32, activation='relu', input_shape=(784,)))
>>> model.add(Dense(10, activation='softmax'))
>>> model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])  # or 'binary_crossentropy', depending on your task
>>> model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
| 13
|
cbc92d52befb
|
2018-03-29
|
2018-03-29 18:58:02
|
2018-03-30
|
2018-03-30 08:38:50
| 12
| false
|
en
|
2018-03-30
|
2018-03-30 08:38:50
| 25
|
1e9b96d2d013
| 7.270755
| 13
| 1
| 1
|
It has been four weeks of shedding light on some deep learning APIs/libraries at AISaturdays Lagos and just before you make up your mind…
| 5
|
AISaturdaylagos: “Karessing” Deep Learning with Keras
source: google.com
It has been four weeks of shedding light on some deep learning APIs/libraries at AISaturdays Lagos, and just before you make up your mind for or against deep learning, especially after hanging out last week with granny Theano, permit Olalekan Olapeju to sweep you off your feet with Keras.
…but wait! We got our first guest speaker last week, Farouq Oyebiyi 💃, and not just that, Christopher Giles, a reporter from CNN Africa, came to say hi — which was kinda cool 😎.
Anyways, the most fun part after CS231n’s lecture on Segmentation and Detection, as well as the Generative Models lecture from fast.ai, is of course #teamKeras’s beautifully delivered presentation, which I obviously can’t wait to share with you.
This section was written by Olalekan Olapeju, team leader of #teamKeras 🔥 🔥. You can check his post out here.
How this article is structured:
Introduction
Why Keras?
Installation of Keras
Implementation of Deep Learning using Keras
Conclusion
Introduction
It has been four weeks of shedding light on some deep learning APIs/libraries at AISaturdays Lagos, and just before you make up your mind for or against deep learning, especially after hanging out last week with granny Theano, permit me to sweep you off your feet with Keras.
Imagine driving interstate in 2018 with a Ford Model T and a Ford Mustang 2018 model. Just to be “fair”, you are driving on an international no-speed-limit day (just keep imagining, and don’t ask Google if such a day exists).
Ford Model T
Ford Mustang 2018 Model
I believe, all things being equal, you will get to your destination, and if you chose the Model T, I can assume you will arrive looking happy like the couple below. Living happily ever (or never) after? Considering it a waste of computing power to design a model to answer this, let us just say time will tell.
A model with such a performance
But as regards being “fair”, we are in the information age, where thousands of researchers, like us (the Keras team of AISaturdays), are looking at solving the same problems that we face as human beings, i.e. automation across all walks of life 😎
The advent of cloud services is beginning to bridge the gap of inaccessibility to scarce resources such as computing resources — Tensor Processing Units (TPUs), Graphical Processing Units (GPUs), etc. Also, we get recognized for what we publish or produce, and the faster we do, the more we make living easier and cozier for humanity — imagine self-driving cars with zero records of accidents, or a receptionist or bank teller that would not give you “attitude”. Hence a call to simplify a researcher’s life through an easy-to-use and less technical library for deep learning implementation.
Ladies and Gentlemen readers, I present to you (drums rolling)
Designed by Francois Chollet (hope he forgives me for the ‘c’ without accent, can’t find the accent on my keyboard at the moment) of Google Inc., Keras is a cross-platform, open source neural network (NN) API/library written in the Python programming language. It can run on Theano (Universite de Montreal — just hope this accent thing does not haunt me), Tensorflow (Google), Cognitive Toolkit (Microsoft) or MXNet (Amazon).
The name “Keras” means either Horn (Greek) or Ivory (Latin). I can hear someone asking the reason for the name; though I am not Chollet, it refers to how dream spirits arrive on earth: either through a gate of ivory, for those that deceive men with false visions, or a gate of horn, for those who announce the future that will come to pass. Whichever one you believe, the future is nearer than we once thought. No need to figure out which gate I come from, because I am not a dream spirit, though I announce the future.
Keras is a high-level NN API that takes away the stress of dealing with low-level operations such as tensor differentiation and manipulation (reshaping, dot products, elementwise operations, etc.). Imagine having to understand how each part of your car engine functions before being allowed to commence your driving classes. Frustrating? Sure, and that is an understatement, right?
What lies beyond the Keras curtain
Why Keras?
Keras was developed to enable researchers to focus on experimentation, as the key to doing good research is being able to transform one’s ideas into results with the least possible delay or bottleneck, through user friendliness, modularity and extensibility.
It supports Convolutional and Recurrent Neural Networks. It runs on Central Processing Units (CPUs) and Graphical Processing Units (GPUs). I believe it will run on Google’s Tensor Processing Unit (TPU) soon because it sits on Tensorflow. It has strong adoption in both industry and the research community. If you use Netflix, Uber, Yelp, or Square, among others, then you have been interacting with Keras.
Keras models are easily deployed across a greater range of platforms than other deep learning frameworks: on iOS using CoreML, on Android using the TensorFlow Android runtime, in the browser via GPU-accelerated JavaScript runtimes such as Keras.js and WebDNN, on Google Cloud via TensorFlow Serving, on Raspberry Pi, etc.
Installation of Keras
Keras runs better on Linux, and installing it requires a system with:
a 32- or 64-bit operating system,
a minimum of 4 to 8 GB of RAM.
Alternatively, you can use cloud services like Amazon Web Services, Google Colaboratory — this is free, Microsoft Azure, etc.
To install Keras on your local system with a TensorFlow backend, after installing Anaconda, you have to create an environment and install the dependencies using the shell commands in the snippet at the top of this post.
Implementation of Deep Learning using Keras
Using Keras to implement deep learning reminds me of the concept of playing with bricks from Lego, a Danish company that produces toys consisting mainly of interlocking plastic bricks. First you have a problem you intend to solve, using a CNN (like image classification), an RNN (like text or genome sequencing), or both, just like a child trying to describe his or her imagination to a friend.
The first step is to define your training data. Just as a child sorts through the various pieces of Lego bricks, one has to define the training dataset as tensors (input and target), since data used in NNs is represented as tensors.
Tensors are data containers in NNs, varying from 0-dimensional (0D) tensors (scalars) to 5D tensors for holding video frames.
Looking familiar?
Secondly, define a network of layers, or model. Just as a child decides on a plan and commences the construction of the conceived imagination, Keras offers you the option of building your model using either the Sequential API or the Functional API.
The Sequential API requires putting one layer over another (a brick over another) to build a model, which is the most common architecture, while the Functional API gives you the option to build a model with an arbitrary architecture that requires either a shared layer (a means of reusing a layer or model, like a function in any programming language), multiple inputs (text and image) or multiple outputs (captioning an image). A simple model using the Sequential API is shown below.
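A minimal sketch of such a Sequential model, mirroring the snippet at the top of this post (the layer sizes here are illustrative):
from keras.models import Sequential
from keras.layers import Dense
# Stack layers one on top of another, brick by brick
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))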
Thirdly, configure the learning process by choosing a loss function, an optimiser, and the metrics to monitor during training. This is as simple as writing:
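For example (the optimizer and loss below are illustrative choices; pick the loss that matches your task):
# Configure the learning process
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['acc'])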
Lastly, iterate on your training data by calling the fit() method as shown below:
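For example (the number of epochs and batch size are illustrative):
# Iterate on the training data, holding out 20% for validation
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.2)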
Diagrammatically, implementing a Keras model is shown below:
A simple flow for implementing a Model
From our Lego view, you have the below. Think this might be complex for a child? Never underestimate the power of imagination.
More than an imagination
Conclusion
Below is a cheat sheet for Keras from www.datacamp.com.
Keras Cheat Sheet
Trust me, Keras is as simple as it seems. In doubt, ask Chollet or other users on https://groups.google.com/forum/#!forum/keras-users or https://keras-slack-autojoin.herokuapp.com/
#TeamKeras
Thanks to Keras AISaturdays Team:
1. Olalekan Olapeju (my humble self)
2. Aderinto Sadiq — email
3. Olusegun Komolafe — github
*** All images are from google.com ***
Thanks #teamKeras for giving us a tour of your framework. Despite being the team with the smallest number of participants, they did an amazing job 💪. Well done, guys!
AISaturdayLagos wouldn’t have happened without my friend & fellow ambassador, Azeez Oluwafemi, our Partners FB Dev Circle Lagos, Vesper.ng and Intel.
A big Thanks to Nurture.AI for this amazing opportunity.
Also read how AI Saturdays is Bringing the World Together with AI
See you next week 😎.
Links to Resources
Keras Presentation Slide — https://goo.gl/3VS1Ki
Deep Learning with Python by Francois Chollet
https://keras.io
www.datacamp.com for cheatsheet for Keras
Practical deep learning for coders
Deep learning Theories
Convolutional Neural Networks
A friendly introduction to Convolutional Neural Networks and Image Recognition
Setting up Google Colab 1
Setting up Google Colab II
|
AISaturdaylagos: “Karessing” Deep Learning with Keras
| 35
|
aisaturdaylagos-karessing-deep-learning-with-keras-1e9b96d2d013
|
2018-04-23
|
2018-04-23 00:57:22
|
https://medium.com/s/story/aisaturdaylagos-karessing-deep-learning-with-keras-1e9b96d2d013
| false
| 1,569
|
Making rigorous AI education accessible and free, in 50+ cities globally. Sign up at https://nurture.ai/ai-saturdays
| null | null | null |
AI Saturdays
|
info@nurture.ai
|
ai-saturdays
|
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,DATA SCIENCE,AI
|
AISaturdays
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Tejumade Afonja
| null |
44e0f445aa49
|
tejuafonja
| 699
| 306
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-13
|
2018-05-13 01:15:39
|
2018-05-13
|
2018-05-13 02:27:33
| 0
| false
|
en
|
2018-05-13
|
2018-05-13 02:27:33
| 2
|
1e9cadf2eefb
| 1.041509
| 1
| 0
| 0
|
This week tech giants showcased a bold set of demonstrations involving drones, vision I/O and AI capability with many sentiments unsaid…
| 4
|
Indistinguishable Experiences
This week tech giants showcased a bold set of demonstrations involving drones, vision I/O and AI capability, with many sentiments about the human interaction left unsaid. Google’s Assistant made an eerie call to book an appointment at a hair salon. A near-perfect livestream broadcast a pitch-perfect, gender- and age-appropriate AI swiftly and successfully booking this appointment, with an endearing and rather personal interaction with the unsuspecting receiver, who was comfortable with the human-sounding caller.
The AI-aware conference audience at Google roared with delight.
The receiver never knew they were interacting with a computer and left perfectly satisfied and engaged in the easy interaction. There was no lag on the call, no foreign accent or recording to signal the system’s communication, and no additional audio prompts requesting a more patient and deliberate response from the receiver.
While the usefulness of the order-taking is clear, there was a strange uncertainty in the capability, foreshadowing questions about this unsuspecting, unnamed receiver, who never once knew they were interacting with a computer.
Microsoft’s Satya Nadella opened the Build conference with a discourse around privacy protection as a human right, stating new principles around designing for privacy, and tools to debias the handling of user data and intellectual property to protect intended use.
Rockwell Automation was one of the companies profiled for its cloud research efforts for the connected enterprise.
Nadella reasoned that technology would fuse together the experiences of our future, but that the user experience should solve for this reasoning now. The Assistant fused together the experiences of easily booking this appointment, simple enough. It raises the question of what is reasonable.
|
Indistinguishable Experiences
| 1
|
indistinguishable-experiences-1e9cadf2eefb
|
2018-05-14
|
2018-05-14 19:56:33
|
https://medium.com/s/story/indistinguishable-experiences-1e9cadf2eefb
| false
| 276
| null | null | null | null | null | null | null | null | null |
Microsoft
|
microsoft
|
Microsoft
| 19,490
|
RW Patel
|
Helping teams find digital relevancy through user experience and data-driven insights.
|
e4f0718a390c
|
rwpatel
| 89
| 185
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
8ca78a98cce8
|
2017-12-07
|
2017-12-07 15:13:09
|
2017-12-07
|
2017-12-07 15:13:32
| 1
| false
|
en
|
2017-12-07
|
2017-12-07 15:13:32
| 0
|
1e9e6b50dc57
| 1.271698
| 0
| 0
| 0
|
by Ray Hayes
| 5
|
The Robots are coming!
by Ray Hayes
I don’t think I’m going out on a limb when I say that Artificial Intelligence (AI) is going to be big over the next 10 years. The emergence of AI will undoubtedly make life easier and bring tons of revenue to those able to tap into its resources, but with this ease of lifestyle, a lack of employment may emerge as well. Mark Cuban, Elon Musk, and many others have already sounded the alarm that technology, while helpful, may lead to a larger issue moving forward. After all, if robots can perform a skill set quicker, cheaper, and with fewer errors than humans, why hire humans?
According to research by McKinsey & Co., “as many as 800 million workers worldwide may lose their jobs to robots and automation by 2030, equivalent to more than a fifth of today’s global labor force.” This news will have a negative effect on everyone in both developed and emerging countries.
New skill sets will be vitally important for job seekers in the coming generation, and the need to understand technology will be of crucial importance. For example, the need to understand data science and data mining will be a key skill in developing robots, and one that potential job seekers should begin to study today.
Bloomberg said it best when it comes to humans moving into a new economy. “The good news for those displaced is that there will be jobs for them to transition into, although in many cases they’re going to have to learn new skills to do the work. Those jobs will include health-care providers for aging populations, technology specialists and even gardeners, according to the report.”
|
The Robots are coming!
| 0
|
the-robots-are-coming-1e9e6b50dc57
|
2018-06-12
|
2018-06-12 03:23:52
|
https://medium.com/s/story/the-robots-are-coming-1e9e6b50dc57
| false
| 284
|
News covering diversity and small business from around the globe
| null | null | null |
Supplierty News
|
contact@globaldiversitynews.com
|
supplierty-news
|
DIVERSITY,SMALL BUSINESS,SUPPLIERS,SUPPLY CHAIN,BUSINESS
|
gdnnetwork
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Ray Hayes
| null |
becc3d45fd44
|
rayhayes_24406
| 35
| 37
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
83c8ddcca2a
|
2018-06-13
|
2018-06-13 10:44:01
|
2018-06-13
|
2018-06-13 10:59:05
| 1
| false
|
en
|
2018-06-13
|
2018-06-13 11:08:16
| 2
|
1ea26878207
| 1.037736
| 3
| 0
| 0
|
The Black Sea, July and … Data — Data Summer Conf 2018!
| 5
|
Data Summer Conf 2018
The Black Sea, July and … Data — Data Summer Conf 2018!
On the 21st of July, Odessa will become a networking stage for data geeks and speakers, letting them dig deeper into the Data Science & Big Data streams. Join us and share expertise, insights, and trends. Listen to world data leaders and get real hands-on experience at our data workshops.
Our goal is to get data techies together, build a strong community and have fun here near the Black Sea coast, where the water meets the horizon.
Our first speakers:
— Giuseppe Angelo Porcelli, Solutions Architect at Amazon Web Services;
— Jonathan Taws, Data Scientist at Amazon Web Services;
— Javier Rodriguez Zaurin, Head of Data Science at Simply Business;
— Jacek Laskowski, Spark & Kafka Developer and Technical Instructor at Kafka Streams Consultant;
— Sri Sri Perangur, Senior Data Scientist at Spotify;
— Fedor Navruzov, Senior Data Scientist at SWAG Speak With A Geek;
— Roman Strochak, Computer vision CTO at DataAI;
— Rudradeb Mitra, Product Mentor at Google Developers;
— Dmitry Korobchenko, Deep Learning R&D Engineer at NVIDIA;
Tickets move fast: https://provectus.com/datasummer/#tickets
Early: $50 (till June 16 inclusive)
Regular: $70 (June 17 — July 8 inclusive)
Late: $100 (July 9 — July 21)
Get a 10% discount by using the promo code FlyElephantDataFriends.
|
Data Summer Conf 2018
| 48
|
data-summer-conf-2018-1ea26878207
|
2018-06-13
|
2018-06-13 11:08:17
|
https://medium.com/s/story/data-summer-conf-2018-1ea26878207
| false
| 222
|
All-in-one automated workflow platform
| null |
flyelephant.net
| null |
FlyElephant
|
support@flyelephant.net
|
flyelephant
|
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,SUPERCOMPUTER,CLOUD COMPUTING
|
FlyElephantNet
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Dmitry Spodarets
| null |
a5ad744a085a
|
m31
| 749
| 741
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-23
|
2018-07-23 14:44:04
|
2018-07-23
|
2018-07-23 14:46:35
| 2
| false
|
en
|
2018-07-23
|
2018-07-23 14:46:35
| 11
|
1ea2db2790f3
| 3.934277
| 0
| 0
| 0
|
Originally published on ZeoLearn blog.
| 1
|
Writing your first machine learning code
Originally published on ZeoLearn blog.
The whole world is buzzing with artificial intelligence these days. Some people predict that it might change the way the world works into the future. For developers, it presents us with a fresh opportunity to be a part of a new paradigm shift.
I started learning AI about two months back and it has been a long road since. There are lots of developments happening in AI every day. From bots that develop their own language to an AI that can beat professional players in DOTA. From self-driving cars to computers that are better at diagnosing patients than experienced doctors. There is a lot of ground to cover.
Before we go further, we must understand that there are a lot of disciplines in AI. Some of them are easier to get into than others. Obviously, your first AI program cannot be to make a self-driving car. For beginners, it is best to start with a branch of machine learning called supervised learning.
What is supervised learning?
Simply put, supervised learning means you give a bunch of data to a computer program and it uses mathematical models to draw inferences from that data. This type of learning is used for very simple regression and classification problems, but it is really handy in solving many real-world problems.
Let’s start
Consider this very simple data set that I’ve prepared artificially just for the purpose of this example. This dataset contains three numbers: X, Y, and Z. There exists a relationship between X, Y, and Z that we don’t know yet. Our goal in writing this program is to find this relationship. This data also contains some noise of the kind usually present in most real data sets.
82.95761557036997,15.49770283330364,53.746988734062285
41.831058370415896,74.6908398387234,76.91731716843984
0.45109458673243674,15.880369177717512,12.400022057356612
65.84526760369872,29.757778929447664,55.432668761221926
38.02804990326463,94.94571562617034,90.18671003364872
… and 95 more simple triples like this.
For us to understand more easily, let’s put the equation in mathematical terms:
Z = a*X + b*Y
Our goal is to find a and b so that we can use any other pair of X and Y to calculate Z. Such problems are solved by a mathematical technique called linear regression.
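In other words (this is the standard least-squares formulation, not anything specific to our dataset), linear regression picks a and b to minimize the total squared error over all n data points:

\min_{a,b} \sum_{i=1}^{n} \left( Z_i - a X_i - b Y_i \right)^2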
It is one of the simplest applications of Machine Learning and the easiest to get into. Before we get into the more difficult aspects of machine learning later on, it is important to understand that this exercise is very helpful for building up confidence in this field.
After all, bigger battles are won with the confidence of smaller victories.
Applications of linear regression.
These values of X, Y and Z can represent anything. They can be any three variables that are related by a linear equation. For example, if you remember high school kinematics, the velocity of an object at a given time, when it starts from an initial velocity under the influence of gravity, is given by the equation v(t) = u + A*t, where A is a constant gravitational acceleration that we don’t know.
If we get a lot of experimental values for u, v and t, running a simple multivariate linear regression over these values can give us the coefficient values of u and t. We will find after our analysis that the coefficient of u comes out to be 1. If our data is accurate, we will get the value of A as 9.8 meters per second squared.
While this is just one area I have used for illustration, it is important to understand that linear relationships occur everywhere in nature.
Getting into the code
We will use JavaScript to write this simple regression program, using an npm module called smr. To set up the project, you must have Node.js installed. We could have used Python as well, but installing and working with SciPy is not in our scope for now. We’ll gradually go there with time. For the time being,
Start by creating an empty directory for your project
Install smr by using npm install smr.
Create a new file called index.js and put in the code.
The code is as follows:
const smr = require('smr');
const regression = new smr.Regression({ numX: 2, numY: 1 });

// Read the data file line by line with a read stream
const lineReader = require('readline').createInterface({
  input: require('fs').createReadStream('data.txt')
});

// Parse each line and fit it into the regression object
lineReader.on('line', function (line) {
  const parts = line.split(',').map(Number); // convert the strings to numbers
  regression.push({ x: [parts[0], parts[1]], y: [parts[2]] });
  const coefficients = regression.calculateCoefficients();
  console.log(coefficients);
});
You can get the dataset from this GitHub repo that I’ve created for this code at the end of this article.
Running the code
We run the code by using node index.js in the same directory as the code. Before we do that, let’s remember the data is in the form Z = a*X + b*Y.
The code fits the data into the regression object and prints coefficient values of a and b. When we run the code, we begin to see this output.
[ [ 0.5007233183107977 ], [ 0.7496919531492985 ] ]
[ [ 0.5007720353300695 ], [ 0.749708917362593 ] ]
[ [ 0.500762395739748 ], [ 0.7497046748655358 ] ]
[ [ 0.5007149604193888 ], [ 0.7496660120512422 ] ]
[ [ 0.5009035966316537 ], [ 0.7495249465185432 ] ]
The value of a begins to converge at 0.50 and the value of b begins to converge at 0.75. This is remarkable: the repo contains a quick Python script that was used to prepare the data, and the values of a and b that I had chosen were indeed 0.50 and 0.75.
That means our model is good and we have successfully created our first machine learning program.
Link of repo: https://github.com/archimedes14/linear_regression_simple
|
Writing your first machine learning code
| 0
|
writing-your-first-machine-learning-code-1ea2db2790f3
|
2018-07-23
|
2018-07-23 14:46:36
|
https://medium.com/s/story/writing-your-first-machine-learning-code-1ea2db2790f3
| false
| 941
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
ZeoLearn
|
Learn. Get mentored. Build your portfolio.
|
cc27e03caf4f
|
zeolearn
| 51
| 43
| 20,181,104
| null | null | null | null | null | null |
0
|
import os
import random
import numpy as np
from scipy.ndimage import imread
from scipy.misc import imresize
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, GlobalAveragePooling2D, Dense
from keras.applications.inception_v3 import InceptionV3
from keras.utils import to_categorical
filenames = []
labels = []
for filename in os.listdir("./images"):
filenames.append(os.path.join("./images/", filename))
labels.append(filename.split(".")[0])
zipped = list(zip(filenames, labels))
random.shuffle(zipped)
filenames, labels = zip(*zipped)
X_train = filenames[0:int(len(filenames)*0.8)]
y_train = labels[0:int(len(filenames)*0.8)]
X_val = filenames[int(len(filenames)*0.8):]
y_val = labels[int(len(filenames)*0.8):]
def image_generator(data_X, data_y, batch_size=64):
X_arr = []
y_arr = []
while True:
for i in range(len(data_X)):
x = imread(data_X[i], mode="RGB")
x_reshaped = imresize(x, (224, 224, 3))
x_reshaped = x_reshaped / 255.
# cat = 0, dog = 1
y = int(data_y[i] == "dog")
X_arr.append(x_reshaped)
y_arr.append(y)
if len(X_arr) == batch_size:
X_out = np.array(X_arr)
y_out = np.array(y_arr)
X_arr = []
y_arr = []
yield (X_out, to_categorical(y_out))
bs = 64
train_data_generator = image_generator(X_train, y_train, batch_size=bs)
val_data_generator = image_generator(X_val, y_val, batch_size=bs)
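# Case 1: InceptionV3 initialized with random weights (weights=None)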
inception_v3 = InceptionV3(weights=None,
include_top=False,
input_shape=(224,224,3))
x = inception_v3.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inception_v3.input, predictions)
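# Case 2: InceptionV3 initialized with pre-trained ImageNet weights
# (transfer learning); the base layers are frozen below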
inception_v3 = InceptionV3(weights='imagenet',
include_top=False,
input_shape=(224,224,3))
x = inception_v3.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inception_v3.input, predictions)
for layer in inception_v3.layers:
layer.trainable = False
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['acc'])
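# Training output, case 1 (random initialization): final epochs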
Epoch 00091: val_acc did not improve
Epoch 92/100
312/312 [==============================] - 145s 464ms/step - loss: 2.3751e-06 - acc: 1.0000 - val_loss: 0.2401 - val_acc: 0.9655
Epoch 00092: val_acc improved from 0.96534 to 0.96554, saving model to classifier.h5
Epoch 93/100
312/312 [==============================] - 145s 464ms/step - loss: 1.8455e-06 - acc: 1.0000 - val_loss: 0.2416 - val_acc: 0.9655
Epoch 00093: val_acc did not improve
Epoch 94/100
312/312 [==============================] - 145s 464ms/step - loss: 1.5706e-06 - acc: 1.0000 - val_loss: 0.2452 - val_acc: 0.9657
Epoch 00094: val_acc improved from 0.96554 to 0.96575, saving model to classifier.h5
Epoch 95/100
312/312 [==============================] - 145s 464ms/step - loss: 1.2430e-06 - acc: 1.0000 - val_loss: 0.2483 - val_acc: 0.9657
Epoch 00095: val_acc did not improve
Epoch 96/100
312/312 [==============================] - 145s 465ms/step - loss: 1.0860e-06 - acc: 1.0000 - val_loss: 0.2500 - val_acc: 0.9657
Epoch 00096: val_acc did not improve
Epoch 97/100
312/312 [==============================] - 145s 464ms/step - loss: 8.6695e-07 - acc: 1.0000 - val_loss: 0.2527 - val_acc: 0.9655
Epoch 00097: val_acc did not improve
Epoch 98/100
312/312 [==============================] - 145s 464ms/step - loss: 7.8315e-07 - acc: 1.0000 - val_loss: 0.2547 - val_acc: 0.9655
Epoch 00098: val_acc did not improve
Epoch 99/100
312/312 [==============================] - 145s 464ms/step - loss: 6.2808e-07 - acc: 1.0000 - val_loss: 0.2568 - val_acc: 0.9655
Epoch 00099: val_acc did not improve
Epoch 100/100
312/312 [==============================] - 145s 464ms/step - loss: 5.6699e-07 - acc: 1.0000 - val_loss: 0.2568 - val_acc: 0.9655
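# Training output, case 2 (transfer learning): first epochs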
Epoch 1/100
312/312 [==============================] - 92s 296ms/step - loss: 0.1918 - acc: 0.9198 - val_loss: 0.0982 - val_acc: 0.9728
Epoch 00001: val_acc improved from -inf to 0.97276, saving model to classifier.h5
Epoch 2/100
312/312 [==============================] - 91s 291ms/step - loss: 0.1266 - acc: 0.9479 - val_loss: 0.0896 - val_acc: 0.9756
Epoch 00002: val_acc improved from 0.97276 to 0.97556, saving model to classifier.h5
Epoch 3/100
312/312 [==============================] - 91s 292ms/step - loss: 0.1132 - acc: 0.9531 - val_loss: 0.0733 - val_acc: 0.9816
Epoch 00003: val_acc improved from 0.97556 to 0.98157, saving model to classifier.h5
Epoch 4/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0928 - acc: 0.9639 - val_loss: 0.0679 - val_acc: 0.9832
Epoch 00004: val_acc improved from 0.98157 to 0.98317, saving model to classifier.h5
Epoch 5/100
312/312 [==============================] - 92s 293ms/step - loss: 0.0786 - acc: 0.9681 - val_loss: 0.0952 - val_acc: 0.9798
Epoch 00005: val_acc did not improve
Epoch 6/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0625 - acc: 0.9769 - val_loss: 0.1265 - val_acc: 0.9774
Epoch 00006: val_acc did not improve
Epoch 7/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0560 - acc: 0.9787 - val_loss: 0.0980 - val_acc: 0.9830
Epoch 00007: val_acc did not improve
Epoch 8/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0463 - acc: 0.9830 - val_loss: 0.0754 - val_acc: 0.9868
Epoch 00008: val_acc improved from 0.98317 to 0.98678, saving model to classifier.h5
Epoch 9/100
312/312 [==============================] - 92s 294ms/step - loss: 0.0465 - acc: 0.9818 - val_loss: 0.0738 - val_acc: 0.9868
Epoch 00009: val_acc did not improve
Epoch 10/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0528 - acc: 0.9791 - val_loss: 0.0731 - val_acc: 0.9870
Epoch 00010: val_acc improved from 0.98678 to 0.98698, saving model to classifier.h5
Epoch 11/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0580 - acc: 0.9743 - val_loss: 0.0830 - val_acc: 0.9866
Epoch 00011: val_acc did not improve
Epoch 12/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0588 - acc: 0.9769 - val_loss: 0.1921 - val_acc: 0.9718
Epoch 00012: val_acc did not improve
Epoch 13/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0552 - acc: 0.9782 - val_loss: 0.1874 - val_acc: 0.9762
Epoch 00013: val_acc did not improve
Epoch 14/100
312/312 [==============================] - 92s 294ms/step - loss: 0.0416 - acc: 0.9835 - val_loss: 0.1687 - val_acc: 0.9778
Epoch 00014: val_acc did not improve
Epoch 15/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0320 - acc: 0.9875 - val_loss: 0.1464 - val_acc: 0.9804
Epoch 00015: val_acc did not improve
Epoch 16/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0241 - acc: 0.9916 - val_loss: 0.1786 - val_acc: 0.9766
Epoch 00016: val_acc did not improve
Epoch 17/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0186 - acc: 0.9941 - val_loss: 0.1482 - val_acc: 0.9842
Epoch 00017: val_acc did not improve
Epoch 18/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0157 - acc: 0.9949 - val_loss: 0.1401 - val_acc: 0.9834
Epoch 00018: val_acc did not improve
Epoch 19/100
312/312 [==============================] - 91s 291ms/step - loss: 0.0186 - acc: 0.9937 - val_loss: 0.2895 - val_acc: 0.9665
Epoch 00019: val_acc did not improve
Epoch 20/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0194 - acc: 0.9929 - val_loss: 0.1065 - val_acc: 0.9868
Epoch 00020: val_acc did not improve
Epoch 21/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0172 - acc: 0.9938 - val_loss: 0.1616 - val_acc: 0.9820
Epoch 00021: val_acc did not improve
Epoch 22/100
312/312 [==============================] - 91s 293ms/step - loss: 0.0188 - acc: 0.9932 - val_loss: 0.1071 - val_acc: 0.9868
Epoch 00022: val_acc did not improve
Epoch 23/100
312/312 [==============================] - 91s 292ms/step - loss: 0.0199 - acc: 0.9931 - val_loss: 0.0898 - val_acc: 0.9884
Epoch 00023: val_acc improved from 0.98698 to 0.98838, saving model to classifier.h5
| 44
| null |
2018-06-01
|
2018-06-01 02:24:27
|
2018-06-01
|
2018-06-01 09:09:23
| 1
| false
|
en
|
2018-06-01
|
2018-06-01 09:26:53
| 3
|
1ea3c760e2b3
| 7.781132
| 0
| 0
| 0
|
It is undeniable that Convolutional Neural Networks (CNN) have made a huge progress in the field of computer vision. Most state-of-the-art…
| 5
|
The Power of Transfer Learning
It is undeniable that Convolutional Neural Networks (CNNs) have made huge progress in the field of computer vision. Most state-of-the-art models in image classification and object detection have been some form of a CNN. Convolutional Neural Networks work by convolving pixels of image data, down-sampling, and applying non-linearity after each successive layer. The result is called a “feature map”, a learned abstraction of an image. What does it learn? It is hard to quantify concretely what a deep learning model learns. However, it is generally known via visualizing weights and layer activations that feature maps earlier in the network learn the general concepts of an image — for example, edges, colors, and shapes — and that feature maps later in the network learn the specifics (e.g. pixel-level features) of an image. The last feature map of an image may be fed into a classifier (e.g. SVM, Feed Forward Neural Network) to output a probability of an image being in a class.
Transfer learning is a popular technique in deep learning where one may take a pre-trained model learned on one task and fine-tune the weights to fit a new dataset. This technique works very well in practice because it allows the network to use the features it previously learned, mix and match them in new combinations, and use them to classify a new set of images. In fact, it is generally accepted that one should never train a CNN from scratch anymore. Thanks to the ImageNet dataset, people have made lots of pre-trained models that are trained over millions of images to solve the specific task of classifying thousands of classes of images. The result is a model that is rich in pre-trained features and has lots of understanding of images in general (e.g. features of cars, animals, trees, tools, etc.). In practice, transfer learning works much better than training a model from random initialization: the model converges much faster, and is in many cases more accurate.
For the remainder of this post, I will be stepping through a simple example of dogs vs. cat image classification. The details of this dataset and its goals can be found here https://www.kaggle.com/c/dogs-vs-cats. Basically, I have 25,000 images labeled as dogs or cats. I want to train a CNN so that it is able to classify new images it has never seen as dogs or cats. First, let’s import the necessary tools and load the data:
The images are given in a single directory (here I named it “images”). The label is given in the name. For example, an image of a cat might be cat.XXXX.jpg where XXXX is some number. Essentially, I have made a shuffled list of filenames and the label either as “cats” or “dogs”. Then, the data is split into 80% training and 20% validation.
Next, we create a custom image data generator. Keras has a class ImageDataGenerator, but we would need to have our images structured in directories in a very specific way. With this custom image data generator, it will generate a batch of images of size (224, 224, 3) as a numpy array of shape (batch_size, 224, 224, 3).
Random Initialization
Now we can define our model. Here we will be using InceptionV3, a very successful architecture of CNN. Here we will set weights=None to demonstrate the case when we initialize the weights randomly. The output of the InceptionV3 is shaped (None, 5, 5, 2048). The output is fed into a GlobalAveragePooling2D layer to average the (5, 5, 2048) features into 2048 features, averaging the (5, 5) numbers. Essentially, you have 2048 numbers that summarizes the (224, 224, 3) image you input into the network. Next, we feed this into a Dense layer with 1024 hidden nodes before it is fed into a prediction layer with 2 nodes using the softmax activation function.
Transfer Learning
For the case of transfer learning, we can simply change weights=None to weights=”imagenet” (or simply remove it, default is imagenet). Further, I’m going to choose to freeze all the weights in InceptionV3 learned from ImageNet, and train only the Dense layer. It is possible to fine-tune the whole network and whether we do it will depend on the nature of our dataset. If the dataset is similar to ImageNet (e.g. cars, animals, trees, tools), then it may be unnecessary to fine-tune the whole network. You can read more about the best practices from here: http://cs231n.github.io/transfer-learning/.
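If the dataset did call for fine-tuning, one common pattern is to unfreeze only the top of the base network. This is a sketch of that idea, not what this post does; the cutoff index of 249 is an illustrative choice, not a prescribed value:
# Keep the early InceptionV3 layers frozen and train only the later
# ones, so high-level features can adapt to the new dataset
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
# Remember to re-compile the model after changing trainable flags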
Results
Let us now compare the accuracies and the time it takes to train these models. Below is the training run for the case of random initialization:
Validation accuracy reached its highest at epoch 94, at 96.56%. Total training time is roughly 4 hours with a single NVIDIA GTX 1080 TI. The accuracy could probably go higher, as it seems to still be increasing slowly. Further, we could tune our training by adding dropout regularization to reduce overfitting, adding more Dense layer(s) before the prediction layer, tuning the number of hidden nodes of each Dense layer, or changing the optimization method (e.g. rmsprop, SGD, a different batch size, different learning rates, etc). I encourage you to experiment with these ideas!
Now let’s look at the result for the case of transfer learning.
Notice that the validation accuracy beats that of random initialization after the first epoch. In 91 seconds, we were able to beat a 4-hour long training…! After 23 epochs (34 minutes), we tested the model on a different Kaggle competition on cats vs. dogs (link: https://www.kaggle.com/c/skoltech-cats-vs-dogs/submissions?sortBy=date&group=all&page=1), and the model gets 99.25% test accuracy (#2 on the public leaderboard). This suggests that our model is accurate and generalizes well to other datasets.
As I have shown, transfer learning not only gives better prediction results, but it also trains faster. Transfer learning is probably a more realistic model of how humans learn; we don’t relearn everything from scratch every time we are given a new task, but build on top of the knowledge we already have. Aside from computer vision, transfer learning has been very successful in Natural Language Processing as well. For example, a form of pre-training in NLP is when words are made into vectors by a process called Word2Vec. With transfer learning, we are building an intelligent AI by building on top of existing knowledge, dataset after dataset, accumulating knowledge over time. Transfer learning has brought us this far, and looking forward, I believe that it is where AI is headed.
|
The Power of Transfer Learning
| 0
|
the-power-of-transfer-learning-1ea3c760e2b3
|
2018-06-01
|
2018-06-01 09:26:54
|
https://medium.com/s/story/the-power-of-transfer-learning-1ea3c760e2b3
| false
| 2,009
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Tanin Phongpandecha
|
Chief Data Officer at Data Wow
|
aac3bc2a29d4
|
aim_55440
| 37
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-21
|
2017-09-21 15:51:56
|
2017-09-21
|
2017-09-21 15:51:56
| 6
| false
|
en
|
2017-09-21
|
2017-09-21 15:58:23
| 15
|
1ea46678b644
| 9.391509
| 1
| 1
| 0
|
The competition in understanding natural language from unstructured text is thickening. Google just launched two new features for their…
| 5
|
Google Natural Language vs Watson Natural Language Understanding
The competition in understanding natural language from unstructured text is thickening. Google just launched two new features for their Google Natural Language API, categories and sentiment. Those have been in the Watson Natural Language Understanding API for a while now, but let us see how the two APIs compare to each other overall.
Let us start with a head to head comparison with a real example.
Google Natural Language API vs Watson Natural Language Understanding Head-to-Head
I thought an article about another player in the game could be in order, so I entered a Fast Company article about Microsoft CEO Satya Nadella, “Satya Nadella Rewrites Microsoft’s Code”.
I will use the demo interfaces for both services; they can be found here for Google NL and here for Watson NLU. The only difference I could find in how you post information to the two services is that in the case of Watson, you can just post a URL to the API and Watson does the rest. It is a simple feature that makes analyzing web pages much easier, but the result is the same, and someone has probably already built something similar for Google NL and put it on Github. If you do try the services, I suggest looking at the actual API results as well, not only the demo interfaces, since those only show parts of the results.
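For the curious, here is a rough Python sketch of what posting to each service looks like. The endpoints follow the public docs at the time of writing, but the version date, credentials, and URLs below are placeholders you should verify against the current documentation:
import requests

# Watson NLU accepts a URL directly and fetches the page itself
watson = requests.post(
    'https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze',
    params={'version': '2018-03-16'},  # placeholder version date
    auth=('YOUR_USERNAME', 'YOUR_PASSWORD'),  # placeholder credentials
    json={'url': 'https://example.com/article',  # placeholder article URL
          'features': {'sentiment': {}, 'entities': {}, 'categories': {}}})

# Google NL expects you to extract and post the text yourself
google = requests.post(
    'https://language.googleapis.com/v1/documents:analyzeSentiment',
    params={'key': 'YOUR_API_KEY'},  # placeholder API key
    json={'document': {'type': 'PLAIN_TEXT', 'content': 'Article text here'},
          'encodingType': 'UTF8'})

print(watson.json())
print(google.json())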
So, how did they compare?
Document Sentiment:
Watson NLU returns a 0.19 positive sentiment on document level
Google NL returns a 0 neutral sentiment
…so very similar, which I would have considered very strange otherwise, given the length and depth of the topic in the article.
Winner: Shared victory
Sentiment breakdown
Both services provide a breakdown on sentiment so it is possible to determine sentiment on entities etc, but Google NL also provides sentiment on sentences, which can come in handy since it puts the sentiment in context immediately in the result from the API.
Entities
I started by listing a few entities to compare, but it does not give a great perspective of the capabilities, since those numbers need to be in context; do run it yourself and check the result for details. Overall they are very similar, naturally with some differences in the result. Watson NLU provides slightly better granularity, but Google NL has the sentence-level result, which is very good, so overall they are very similar in terms of sentiment.
Winner: Shared victory
Categories
Again, very similar. The major difference is that Google added the news and business categories, while Watson was a bit more rigid and stuck to tech and software. Even though the entities in the article mainly are tech-related I did like that Google NL classified the article as Business / Industrial at a 0.89 score, while Watson NLU did not include any business-related category, but classified the major category as /technology and computing/software at a 0.67 score.
Winner: Google
Entities
This one was a bit peculiar. Entity identification is naturally a difficult thing, but I was a bit surprised by the results from Google NL, while Watson NLU was quite solid. Let us just look at the top 4 from each:
Watson NLU (the score is relevance score)
Microsoft, Company, 0.87
Satya Nadella, Person, 0.81
CEO, JobTitle, 0.55
Steve Ballmer, Person, 0.38
Google NL (the score is salience score)
Satya Nadella, Person, 0.47
Microsoft, Organisation, 0.42
learner, Person, 0.02
CEO, Person, 0.01
Two things surprised me: the drop in salience score already after the second entity (it stayed at 0 for all the rest of the entities), as well as the types for “learner” and CEO… and also that “learner” was classified in that way at all. If I look through the entire list in Google NL, I can’t get my head around it completely.
It also seems like Watson NLU has a bit better capability in business-related types, while Google NL is a bit more focused on consumer types. Watson NLU is clearly more structured.
Winner: Watson
Conclusions of the test
The main difference between the two is that Watson NLU supports more features, like emotions, as well as the opportunity to apply custom ML models. This gives Watson NLU the capability of learning entities and relations in your specific domain.
Google NL has the benefit of being straightforward and support all their features in all languages as well as having a bit more granularity in their score (salience and magnitude).
Is it actually working? I would say that both services are good at what they do, but I would give the win at this stage to Watson, due to its more extensive features as well as the capability of adding custom models. This is from an enterprise perspective; if you are in the consumer space, it might be worth doing a POC on both. I like how IBM has started to be more modern in their approach with Watson, and I think the APIs work very similarly. They are open, well documented and easy to work with (please note that I am not a developer).
Also, it is worth noting that much of Watson NLU has been around for a few years now (through the IBM acquisition of AlchemyAPI in 2015). Google has been in the game for many years as well, but not in the enterprise space with a packaged service for natural language. If Google continues to focus on this space, I think they will be a real threat to IBM, especially if IBM does not keep its pace up (which I see as a risk, given it is IBM).
I would say that, as of this date, Watson NLU is the winner of the test, but I think Google is working at a high pace to package its extensive knowledge in the space, and I expect a lot of progress quickly. So, even if Watson is a leader today, they might not be tomorrow. The difference seems to be in the packaging, not the domain expertise.
For a bit of breakdown on pricing, terminology etc, keep reading.
What is Natural Language in this context?
Simply put, it is the capability to do text analysis through natural language processing. It gives us the possibility to extract the following:
Entities
Extract people, companies, places, landmarks, organisations, etc.
Categories
Automatic categorization of the text. Both Google NL and Watson NLU have an impressive list of categories. Google states 700 in total, and I have not counted Watson’s, but it seems to be about the same.
List of categories for Google NL.
List of categories for Watson NLU.
Sentiment
Is a text positive or negative? Nowadays it does not stop there; it is also possible to break sentiment down further to target it at specific entities or words (this differs between Google and Watson, more on that later in the post).
Syntax / Semantic Roles
Linguistic analysis of the text, splitting the text into parts and identifying nouns and verbs as well as subject, action and object, etc. The Google Cloud Natural Language Syntax feature seems to be a bit more extensive than Watson Natural Language’s Semantic Roles.
Keywords, emotions, and concepts (Watson only)
Emotions are… well, emotions, like joy, anger, sadness, etc. A great feature for customer service or similar products.
Keywords are words that are important in the text.
Concepts are words that might or might not appear in the text but reflect a concept.
Terminology
The two services use similar terminology. Google uses Syntax where Watson uses Semantic Roles; otherwise the terms are very much alike.
In Watson NLU all results are returned with a confidence score. Google has added two additional things to consider: magnitude and salience. Personally I like the simplicity of only using the confidence score, but naturally, the two other values can provide additional value in some cases.
Confidence Score: A score between 0 and 1; the closer to 1 it is, the more confident it is. Usually above 0.75 is considered confident, but that naturally depends on the subject and domain. You do not want a car to be only 75% sure that it is OK to do something, but if a customer service representative is getting a ticket that is 75% likely to be a Lost Password ticket, that will do.
Sentiment Score: A score between -1 and +1. When close to 0 it is fairly neutral; the closer to 1, the more positive, and when close to -1 it is pretty negative. Watson actually sends the positive/neutral/negative label in the API; Google sends only the score. Google Natural Language also sends a Magnitude parameter. Magnitude is a score that complements the sentiment score by telling us how strong the sentiment is.
Salience: Shows how central an entity is in the entire provided text or document. It is a score between 0 and 1. This is a good feature if you need to see how “heavy” an entity is in a text. Only available in Google Natural Language.
To see explanations of Google Natural Language terminology, as well as examples of JSON results for each of the above, do visit Google Natural Language Basics.
To see explanations of Watson Natural Language Understanding terminology, as well as examples of JSON results for each of the above, do visit the Watson Natural Language Understanding API reference documentation. There is also an API Explorer if you want to play with the API.
Custom ML-models?
If you are an enterprise, this feature is usually very important, as it makes it possible to extract domain-specific entities and relations. If you have built an ML model, it is very easy to deploy it to Watson Natural Language Understanding, but I could not find a way to do it with Google Natural Language. Since I am not entirely familiar with the Google APIs I might be mistaken here, so feel free to correct me and point me in the right direction.
It might also be as simple as that IBM comes from the enterprise angle, so applying custom models is more of a prerequisite for IBM than for Google, which comes from the consumer space.
Supported Languages
In terms of AI / Cognitive / Machine Learning, language is always a tricky beast. I have written extensively about what languages Watson understands, and will in this context only compare Watson NLU vs Google NL. I would say they are on par with each other on this topic. Watson supports Arabic and Russian, while Google NL supports Chinese (both traditional and simplified). As a Swede, I will give Watson the victory, since Watson NLU actually partially supports Swedish as well, but that is a very biased Watson victory.
Additionally, the comparison here is a bit difficult. I interpret that Google NL supports the listed languages for all features in the API, which is very good. Watson NLU has more features but does not support all features in all languages, so depending on your task, one or the other might support it.
Supported Languages for Watson Natural Language Understanding
Supported Languages for Google Natural Language
What is the price for Google Natural Language
Monthly prices are per 1,000 text records. One text record can contain up to 1,000 Unicode characters. It might seem complicated, but if you have followed my previous posts on pricing, it is clear that they are all equally complicated. Full details are available at the Google NL pricing site.
What is the price for Watson Natural Language Understanding
Watson NLU is also charged on a per-“block”, per-month price model; they call them units, and a unit is about 10,000 characters, so bigger units. IBM also charges for enrichment features. As an example: if you want an 18,000-character text analysed for entities and categories, it is 4 NLU units (independent of how many categories or entities are returned): two units for the text and two units for the features. If you are looking for pricing for the rest of the Watson APIs, I have a post with a spreadsheet of the costs for all Watson APIs.
Given that the prices for Watson NLU are labeled in Swedish krona (since it is my live Bluemix account I have taken the screenshot from), I also attached a simplified model so it is easy to compare to USD as well.
Conclusion on pricing
This is a tough one, since these models are hard to interpret before you have worked with them live and actually been invoiced, which I have not by Google, but have by IBM Watson.
Nevertheless, I get the impression that you get more bang for the buck with Watson in this case. I sense that the free tier is more generous as well. But this is a tough one for me to come to a clear conclusion on, so it is more of a sense than a fact that Watson is more bang for the buck. The day I receive an invoice from Google with NL on it, I might update this.
Disclaimer: I have been working with the Watson APIs for many years and know them pretty well; I am not as deep into Google’s APIs. With that said, I am open to others complementing my analysis and/or conclusions.
Top Image: The image is a wallpaper from the game Crysis 2.
Originally published at fredrikstenbeck.com on September 21, 2017.
|
Google Natural Language vs Watson Natural Language Understanding
| 1
|
google-natural-language-vs-watson-natural-language-understanding-1ea46678b644
|
2018-08-14
|
2018-08-14 15:18:15
|
https://medium.com/s/story/google-natural-language-vs-watson-natural-language-understanding-1ea46678b644
| false
| 2,237
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Fredrik Stenbeck
|
Make things happen
|
cc3da9f09a04
|
stenbeck
| 655
| 958
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-19
|
2018-05-19 07:30:34
|
2018-05-20
|
2018-05-20 19:24:46
| 11
| false
|
en
|
2018-05-20
|
2018-05-20 19:38:23
| 29
|
1ea5a073e448
| 5.050943
| 29
| 1
| 0
|
This year’s openvisconf provided a venue for pressing conversations in the visualization domain. While many conferences fall into a trap of…
| 2
|
Visualizing Racism, Enhancing Perception, and Explaining Machine Learning: Reflections on Openvisconf 2018
This year’s openvisconf provided a venue for pressing conversations in the visualization domain. While many conferences fall into a trap of only looking inward — celebrating successes that have little effect on anyone outside of their domain, debating inane distinctions between design tactics — this conference seemed to pull off something more connected to the purpose and growth of our field.
While these two days were not without their (appropriately celebrated) wonky discussions (amazing, @enjalot!) or frivolous animations, there seemed to be a genuine discussion about applications of the field of visualization to ongoing issues of social equity and the promotion of open science. My experience at the conference was, of course, dictated by who I talked to, my racial/gender/national/sexual/class identity, and other factors, so I don’t mean to claim that it was a universal or universally positive experience. That being said, I found the following discussions worth reflecting on and sharing.
Visualizing Racism
Aaron Williams' talk on "How data, and the visualization of it, helps us understand 'us'" took on measuring and expressing racial segregation in the United States. The talk began with a historical look at how visualization has expressed race data over time, work largely pioneered by W.E.B. Du Bois.
W.E.B. Du Bois’ visualization of urban/rural populations, from the Library of Congress
Williams' 21st-century take on visualizing race in the U.S. appeared as a captivating piece in the Washington Post this May. In the piece, he extends earlier visualizations of diversity to delve deeper into metrics of segregation, a more nuanced take on the topic:
Visualizing segregation, by Aaron Williams (from the Washington Post)
Not only was his work beautiful and clear, he dared to talk unabashedly about the role of visualization in an unjust society. It was inspiring to see someone so directly remind us of the (potential) purpose of our work. This was a clear call to action, challenging the audience to channel their skills not only towards maximizing profits but also towards discovering and undoing injustices.
Slide from Aaron William’s talk
In a similar vein, Amanda Cox and Kevin Quealy’s “Disagreements” talk — a favorite amongst many attendees — highlighted recent work on racism in America. While they drove the discussion with a set of snarky and endearing interpersonal debates from their years of collaboration, at the center of their disagreements was a common goal — a commitment to crafting honest and impactful visual designs.
Intergenerational wealth shifts, from The Upshot
The intensity and granularity of their Disagreements expressed their passion for using visualization as a tool to drive understanding and compassion around evolving political discussions. For example, their quippy debates about variation in visual form were grounded in a concern about accessing and interpreting complex data in their article on the Punishing Reach of Racism for Black Boys. The final product not only effectively expressed the data, but crafted an emotionally engaging piece that shows how the wealth of Black men drops in a racist society.
Federica Fragapane’s work also centered issues of race, particularly looking at immigration into Italy. In her work The Stories Behind a Line, she took on the challenge of humanizing data to show the deeply personal and complex paths towards immigration.
One immigrant’s journey to Italy, featured in Stories Behind a Line
Given the purpose of visualization so well expressed by these talks, conversations about how to make visualization better seemed all the more important.
Enhancing Perception
These discussions about improving visualization were taken on most directly by Steven Franconeri, whose research delves into the preattentive perceptual processing used to identify patterns in visualization.
Tracking eye movements in visual comparisons, from Steven Franconeri’s talk (downloaded from Twitter)
While a minor delay in eye movements may seem trivial, ensuring rapid understanding is essential given the limited attention of audiences.
Research into visual communication was further echoed by Heather Krause, who investigated common fallacies in data analysis and visual communication. Her talk was concerned more with the analysis than the visualization of data; she noted various fallacies that distort interpretations by both analysts and their audiences.
Fallacies presented by Heather Krause, (downloaded from Twitter)
Using visualization as a tool for understanding research was further explored by the talks around Machine Learning.
Explaining Machine Learning
Data visualization has been used not only as a tool for understanding data, but for understanding the things we do with data (i.e., data science). In this way, visualization plays a pertinent role in making data science techniques intuitive and interpretable. Because these algorithms can influence anything from who goes to jail to whose food stamps get accepted, understanding how they work is important for everyone (for more on Data Violence, see this robust work by Anna Lauren Hoffmann).
Prior work in explaining machine learning, by Stephanie Yee and Tony Chu
This year's Openvisconf featured a number of excellent visual explanations of data science techniques. In Matthew Kay's talk on Visualizing Uncertainty, he crisply and clearly delineated between Bayesian and Frequentist methods. In doing so, he took care to explain not only how to visualize uncertainty, but why we have uncertainty in the estimates we generate.
Uncertainty generated from Bayesian and Frequentist approaches, by Matthew Kay (see slides)
This type of statistical literacy is becoming increasingly important so that people can understand the quantitative information being presented to them. The importance of understanding uncertainty became abundantly clear in November 2016.
New York Times election needle, accessed on Google Images
Shan Carter furthered this discussion of peeling back the layers of Machine Learning in his presentation. In the past year, distill.pub has championed visual explanations of machine learning, which Shan Carter described in great depth. While some of the internal mechanics of Neural Networks remained opaque (to me, at least), this talk showcased some of the pressing research that exposes the complexities of image recognition.
What a Neural Network Sees — presented by Shan Carter, featured here
(In the vein of open science, all of my workshop materials teaching D3 users how to use React to scaffold their visualizations can be found here.)
All that said — perhaps I just saw what I wanted to see (or heard what I wanted to hear). I’m heartened to see the topics people are tackling with visualization, and humbled by the impressive ways they’re doing it. A major thanks to the speakers, the program director Lynn Cherny, and the program committee that pulled this all together. Already looking forward to next year!
|
Visualizing Racism, Enhancing Perception, and Explaining Machine Learning: Reflections on…
| 126
|
visualizing-racism-enhancing-perception-and-explaining-machine-learning-reflections-on-1ea5a073e448
|
2018-06-09
|
2018-06-09 09:03:52
|
https://medium.com/s/story/visualizing-racism-enhancing-perception-and-explaining-machine-learning-reflections-on-1ea5a073e448
| false
| 994
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Mike Freeman
|
Faculty at @UW_iSchool teaching #datavis, #rstats, #webdev, and their impacts on society. Views are my own. He/him.
|
7796f80e1f9d
|
mf_viz
| 330
| 139
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-15
|
2017-09-15 09:37:17
|
2017-09-15
|
2017-09-15 09:41:18
| 1
| false
|
en
|
2017-09-15
|
2017-09-15 15:41:44
| 6
|
1ea5b6a077bd
| 2.354717
| 5
| 0
| 0
|
Why Energi Mine?
| 5
|
The Future of Energy is Here
Why Energi Mine?
EnergiMine is a tech company that provides business energy consumers with an alternative to a traditional energy broker. We are responding to the current problems the energy sector exhibits. There are many issues within the energy marketplace; however, we have identified four main issues at the core that have led to the stagnation of the industry:
It is centralised
It is opaque
There is a lack of incentives to use less energy
There is a lack of competition
The industry is centralised: a small number of large companies supply tens of millions of customers who are essentially price takers. Electricity is traded OTC (over the counter) between energy companies or banks, and there is no transparency for the users of energy. The markets are opaque, and this lack of transparency does not benefit the end users, who, as stated above, are just price takers; consumers are not kept up to date on market movements, leading them to make uninformed decisions. Following on from the first two issues, consumers are at a further disadvantage because there are no real incentives for them to use less energy. Suppliers make money from selling more energy at a high price, and incentivising customers to change their behaviour would reduce the suppliers' revenue stream. The lack of competition allows them to continue to behave in this way: barriers to entry are high, with the complexity of regulation and the associated costs pushing smaller, more ethical suppliers out of the market and allowing the monopolies and oligopolies to continue to take the lion's share in all major global power markets.
Traditional energy brokers typically have lengthy manual processes that cost customers time and money whilst exposing them to human error along the way. We have implemented innovative technologies and used cutting-edge Artificial Intelligence to change the way 'energy brokers' operate in the sector. By automating our processes, we can eliminate human error, be far more adaptable to our customers' needs, complete tasks efficiently, and free up more time for human interaction with clients.
By introducing technology to our internal processes, we have found we can complete tasks that used to take up to 3 months in under 20 minutes. This saves us time, money and resources, reduces our business overheads and places EnergiMine at the forefront of the market.
The issue of the centralised market can be overcome by implementing a blockchain system to decentralise it, leading to a more transparent market overall with the help of AI.
EnergiToken will help incentivise consumers to use energy far more efficiently, saving them money on their energy bills and allowing them to accumulate tokens through this positive behaviour. The lack of competition would be countered by introducing blockchain, whereby users purchase directly from generators rather than from a traditional energy supplier, where they have little to no control over how much they pay on their energy bill. This extends to those consumers who generate their own electricity, giving them the ability to sell any excess energy directly to other end users.
This is just a short summary of what EnergiMine is planning; we shall be expanding on AI, blockchain and EnergiToken, so stay tuned!
Register at our website:
https://energitoken.com/
Connect with us on:
LinkedIn: https://www.linkedin.com/company/11005552/
Twitter: https://twitter.com/EnergiMine
Telegram: t.me/energitoken
Slack: energimine.slack.com
Medium: https://medium.com/@energitoken
|
The Future of Energy is Here
| 54
|
why-energi-mine-1ea5b6a077bd
|
2018-04-25
|
2018-04-25 22:43:30
|
https://medium.com/s/story/why-energi-mine-1ea5b6a077bd
| false
| 571
| null | null | null | null | null | null | null | null | null |
Energy
|
energy
|
Energy
| 22,189
|
EnergiToken
|
EnergiToken rewards energy saving behaviour. Our blockchain solution will create a platform to reward energy efficient behaviour through EnergiToken.
|
2cf505f296c0
|
EnergiMine
| 158
| 50
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
64a72e5a36d3
|
2018-01-29
|
2018-01-29 09:14:03
|
2018-01-29
|
2018-01-29 14:38:40
| 6
| false
|
en
|
2018-04-26
|
2018-04-26 05:58:23
| 5
|
1ea604a16d24
| 2.957547
| 651
| 0
| 0
|
Singapore-based solution-as-a-service financial technology company SQ2 Fintech has signed a memorandum of understanding (MoU) with…
| 5
|
SQ2, Plaza join forces on blockchain-based “freedom lifestyle” solution
Chinatown at sunset, Singapore. By Erwin Soo (CC BY 2.0) via Wikimedia
Singapore-based solution-as-a-service financial technology company SQ2 Fintech has signed a memorandum of understanding (MoU) with blockchain-based e-commerce and consumer lifestyle developer Plaza Systems.
SQ2 will provide the technological foundation upon which Plaza Systems will build its vision of a “freedom lifestyle”, wherein the advantages of emerging technologies such as blockchain and artificial intelligence are seamlessly integrated into consumers’ lives.
Plaza Systems plans to raise up to 100,000 ETH via a token generation event (TGE) to fully develop and take to market an e-commerce-specific blockchain dubbed the MerchantChain and a suite of familiar consumer touchpoints that interact with it, including AI-assisted applications, a smart speaker, and a debit card.
SQ2 Fintech CEO Anthony Lau
“Plaza Systems has laid out a vision for the integration of blockchain, AI, and payment systems which very much aligns with SQ2’s mission,” SQ2 Fintech CEO Anthony Lau said.
SQ2 Head of Technology & Plaza Systems Technology Director Ronald Aai
SQ2 Director & Head of Technology Ronald Aai added: “My SQ2 colleagues and I look forward to working closely with Kevin and his team at Plaza Systems in building out our shared ideas.”
Under the MOU, Mr Aai will serve as Plaza Systems’ Technology Director.
Plaza Systems CEO & Chief Architect Kevin Johnson
Plaza Systems CEO & Chief Architect Kevin Johnson said: “Anthony, Ronald and their team are developing some very powerful products which fit neatly with our vision for a blockchain-based solution that everyone can use; for savings, for convenience, and to take back control of their personal information.”
Both Mr Lau and Mr Johnson suggested that the relationship between the two companies would likely deepen after the ICO but refused to indicate how.
“Speculation is likely to mount,” Mr Johnson said. “It’s the nature of things, especially in an emerging tech space such as crypto.”
Mr Lau added: “Actually we have been working together since November last year. This MoU signing now reflects a comfort level we both share.”
About SQ2 Fintech
SQ2 Fintech is a financial technology solution-as-a-service provider headquartered in Singapore with offices in China and Malaysia. Focussed on building innovative technological solutions, platforms and ecosystems, SQ2 has developed numerous hardware and software solutions including: IOT devices; online and mobile platforms; mobile e-wallets; global prepaid debit card solutions; backend management consoles and systems; business productivity platforms; service staff appreciation and rewards systems; mobile e-commerce platforms; chat, streaming and social network modules; and more.
About Plaza Systems
Plaza Systems occupies the intersection of lifestyle and technology.
We are developing the Total bCommerce™ solution, which includes the fast and future-proof MerchantChain™ — a blockchain commerce (bCommerce) infrastructure upon which others can build decentralised applications — as well as the Freedom Lifestyle™.
The Freedom Lifestyle suite of product search & payment tools offer:
The ability to quickly and conveniently browse the best shopping deals across the whole internet, anytime, and from anywhere;
The privacy, savings, and security of cryptocurrency payments;
The sensible flexibility to enjoy the products of your favourite vendors using a payment system everyone is familiar with.
That’s not all! Follow the links below to find out more …
Website: plaza.systems
White Paper: plaza.systems/whitepaper
More news & opinion from Plaza Systems: medium.com/plaza-systems
|
SQ2, Plaza join forces on blockchain-based “freedom lifestyle” solution
| 1,444
|
sq2-plaza-join-forces-on-blockchain-based-freedom-lifestyle-solution-1ea604a16d24
|
2018-06-17
|
2018-06-17 23:25:38
|
https://medium.com/s/story/sq2-plaza-join-forces-on-blockchain-based-freedom-lifestyle-solution-1ea604a16d24
| false
| 532
|
At the intersection of lifestyle & technology with the FASTEST blockchain designed for business, trade, and commerce
| null |
plazasystems
| null |
Plaza (PLAZA)
|
press@plaza.systems
|
plaza-systems
|
TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,ECOMMERCE,BUSINESS
|
plazasystems
|
Fintech
|
fintech
|
Fintech
| 38,568
|
David Gillbanks
|
Content & communications pro @ newmedia.pro
|
3d68bf021a51
|
davidgillbanks
| 707
| 23
| 20,181,104
| null | null | null | null | null | null |
0
|
Next Location = Gradient Downhill x Step Size
| 1
| null |
2018-07-03
|
2018-07-03 08:36:59
|
2018-07-29
|
2018-07-29 19:20:30
| 8
| false
|
en
|
2018-07-29
|
2018-07-29 19:20:30
| 2
|
1ea6b0e4b331
| 3.431447
| 0
| 0
| 0
|
Think about a hiker who is trying to find his way off a mountain at night. He can’t look at where he wants to go; all he can do is look at…
| 4
|
Simple explanation of Gradient Descent
Think about a hiker who is trying to find his way off a mountain at night. He can’t look at where he wants to go; all he can do is look at the ground below him and follow the gradient downhill.
This simple idea of following the gradient can be used in complex neural networks but to understand the intuition behind what’s happening, it’s worth looking at a simple example.
Simple example
Let’s say that we wanted to find the lowest point of the following function. It only has one parameter, the x-value. So we only want to find an x-value that produces the smallest f(x).
This function is plotted below, and it is trivial to find the lowest point if you can see the whole curve. However, if you could not see the whole graph but could only see the small area around x = 7, how would you find the lowest point?
(NB: For such a simple function, there are direct analytical ways of finding the lowest point. We are going to look at another way because this method can then be used to calculate more complicated functions.)
We determine the gradient at x = 7, which is shown as the black line. We then take a step so that we are moving down the gradient, following the black line.
We are now at x = 4, and we do the same as last time by taking another step downhill. The larger the step, the faster we move, but if the step is too large we could overshoot the bottom.
If we kept following this approach, we would reach the bottom. This technique can be summed up as:

Next Location = Gradient Downhill x Step Size

where you keep moving until you reach the bottom of the function — you know you're there when your next step takes you uphill instead of down. So what happens when the function is more complex?
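To make the idea concrete, here is a minimal Python sketch of this stepping procedure. The article's function is only shown as an image, so the quadratic f(x) = (x - 1)^2 below is an assumed stand-in; the update loop itself is the standard gradient descent step.

def f(x):
    # Assumed stand-in for the plotted function.
    return (x - 1) ** 2

def grad_f(x):
    # Derivative of f: the gradient at x.
    return 2 * (x - 1)

x = 7.0          # start where the article starts
step_size = 0.4  # too large a value would overshoot the bottom
for _ in range(25):
    x = x - step_size * grad_f(x)  # take a step downhill

print(x, f(x))   # x converges towards the minimum at x = 1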
A little more complex
Now, let’s say that we have some complex function that takes in two parameters:
An example plot of this new function is depicted below. Again the technique for finding the lowest point could be finding the gradient at each location taking a step down hill.
Source: Andrew Ng
As can be seen above, there may be many local low points. The goal of this method is to find the global minimum, but in reality this is often infeasible, and a local minimum will still yield useful results.
global min = lowest point in the search space
local min = lowest point in local area
Local minima are very useful in practice, as shown by the results neural nets have obtained. One famous example of a neural net trained using this approach is AlphaGo, which defeated Lee Sedol. Of course, these functions can get extremely complicated, but the fundamental technique of Gradient Descent still works.
This approach to training neural networks is surprisingly simple, but unfortunately getting them to train in practice can be difficult. Imagine the initial parameters start inside a very high local minimum. It is like the hiker being dropped on top of a mountain, but into a hole.
Think about this hiker trying to find his way off the mountain at night. Again the hiker would follow the gradient, but this would only take him to the middle of the hole. The importance of initial conditions is just one example of why training can be difficult.
Conclusion
Gradient Descent is a very useful method for finding a good local minimum of a function. This simple method can also be used to train neural nets yielding great results.
|
Simple explanation of Gradient Descent
| 0
|
simple-explanation-of-gradient-descent-1ea6b0e4b331
|
2018-07-29
|
2018-07-29 19:20:30
|
https://medium.com/s/story/simple-explanation-of-gradient-descent-1ea6b0e4b331
| false
| 609
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Andrew Arderne
| null |
abc92293b0d8
|
z.arderne
| 2
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-13
|
2018-08-13 21:41:14
|
2017-09-27
|
2017-09-27 07:00:44
| 1
| false
|
en
|
2018-08-13
|
2018-08-13 21:49:04
| 3
|
1ea704dc9548
| 4.196226
| 0
| 0
| 0
|
By Tucker Davey
| 5
|
Explainable AI: A Discussion With Dan Weld
By Tucker Davey
Machine learning systems are confusing — just ask any AI researcher. Their deep neural networks operate incredibly quickly, considering thousands of possibilities in seconds before making decisions. The human brain simply can’t keep up.
When people learn to play Go, instructors can challenge their decisions and hear their explanations. Through this interaction, teachers determine the limits of a student’s understanding. But DeepMind’s AlphaGo, which recently beat the world’s champions at Go, can’t answer these questions. When AlphaGo makes an unexpected decision it’s difficult to understand why it made that choice.
Admittedly, the stakes are low with AlphaGo: no one gets hurt if it makes an unexpected move and loses. But deploying intelligent machines that we can’t understand could set a dangerous precedent.
According to computer scientist Dan Weld, understanding and trusting machines is “the key problem to solve” in AI safety, and it’s necessary today. He explains, “Since machine learning is at the core of pretty much every AI success story, it’s really important for us to be able to understand what it is that the machine learned.”
As machine learning (ML) systems assume greater control in healthcare, transportation, and finance, trusting their decisions becomes increasingly important. If researchers can program AIs to explain their decisions and answer questions, as Weld is trying to do, we can better assess whether they will operate safely on their own.
Teaching Machines to Explain Themselves
Weld has worked on techniques that expose blind spots in ML systems, or “unknown unknowns.”
When an ML system faces a “known unknown,” it recognizes its uncertainty with the situation. However, when it encounters an unknown unknown, it won’t even recognize that this is an uncertain situation: the system will have extremely high confidence that its result is correct, but it will be wrong. Often, classifiers have this confidence because they were “trained on data that had some regularity in it that’s not reflected in the real world,” Weld says.
Consider an ML system that has been trained to classify images of dogs, but has only been trained on images of brown and black dogs. If this system sees a white dog for the first time, it might confidently assert that it’s not a dog. This is an “unknown unknown” — trained on incomplete data, the classifier has no idea that it’s completely wrong.
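To make this failure mode concrete, here is a minimal scikit-learn sketch (my own illustration, not from Weld's research). A classifier trained only on a narrow slice of the input space reports near-total confidence on an out-of-distribution example, the flavour of unknown unknown described above. The single "coat darkness" feature is an assumed toy stand-in for real image features.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data only contains dark-coated dogs and light-coated cats.
rng = np.random.default_rng(0)
dogs = rng.uniform(0.6, 1.0, size=(50, 1))  # label 1: dog
cats = rng.uniform(0.2, 0.5, size=(50, 1))  # label 0: not a dog
X = np.vstack([dogs, cats])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression().fit(X, y)

# A white dog (darkness ~0.05) lies outside anything seen in training,
# yet the model is extremely confident that it is "not a dog".
print(clf.predict_proba([[0.05]]))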
ML systems can be programmed to ask for human oversight on known unknowns, but since they don’t recognize unknown unknowns, they can’t easily ask for oversight. Weld’s research team is developing techniques to facilitate this, and he believes that it will complement explainability. “After finding unknown unknowns, the next thing the human probably wants is to know WHY the learner made those mistakes, and why it was so confident,” he explains.
Machines don’t “think” like humans do, but that doesn’t mean researchers can’t engineer them to explain their decisions.
One research group jointly trained an ML classifier to recognize images of birds and generate captions. If the AI recognizes a toucan, for example, the researchers can ask "why." The neural net can then generate an explanation that the huge, colorful bill indicated a toucan.
While AI developers will prefer certain concepts explained graphically, consumers will need these interactions to involve natural language and more simplified explanations. “Any explanation is built on simplifying assumptions, but there’s a tricky judgment question about what simplifying assumptions are OK to make. Different audiences want different levels of detail,” says Weld.
Explaining the bird’s huge, colorful bill might suffice in image recognition tasks, but with medical diagnoses and financial trades, researchers and users will want more. Like a teacher-student relationship, human and machine should be able to discuss what the AI has learned and where it still needs work, drilling down on details when necessary.
“We want to find mistakes in their reasoning, understand why they’re making these mistakes, and then work towards correcting them,” Weld adds.
Managing Unpredictable Behavior
Yet, ML systems will inevitably surprise researchers. Weld explains, “The system can and will find some way of achieving its objective that’s different from what you thought.”
Governments and businesses can’t afford to deploy highly intelligent AI systems that make unexpected, harmful decisions, especially if these systems control the stock market, power grids, or data privacy. To control this unpredictability, Weld wants to engineer AIs to get approval from humans before executing novel plans.
“It’s a judgment call,” he says. “If it has seen humans executing actions 1–3, then that’s a normal thing. On the other hand, if it comes up with some especially clever way of achieving the goal by executing this rarely-used action number 5, maybe it should run that one by a live human being.”
Over time, this process will create norms for AIs, as they learn which actions are safe and which actions need confirmation.
Implications for Current AI Systems
The people that use AI systems often misunderstand their limitations. The doctor using an AI to catch disease hasn’t trained the AI and can’t understand its machine learning. And the AI system, not programmed to explain its decisions, can’t communicate problems to the doctor.
Weld wants to see an AI system that interacts with a pre-trained ML system and learns how the pre-trained system might fail. This system could analyze the doctor’s new diagnostic software to find its blind spots, such as its unknown unknowns. Explainable AI software could then enable the AI to converse with the doctor, answering questions and clarifying uncertainties.
And the applications extend to finance algorithms, personal assistants, self-driving cars, and even predicting recidivism in the legal system, where explanation could help root out bias. ML systems are so complex that humans may never be able to understand them completely, but this back-and-forth dialogue is a crucial first step.
“I think it’s really about trust and how can we build more trustworthy AI systems,” Weld explains. “The more you interact with something, the more shared experience you have, the more you can talk about what’s going on. I think all those things rightfully build trust.”
This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.
Originally published at futureoflife.org on September 27, 2017.
|
Explainable AI: A Discussion With Dan Weld
| 0
|
explainable-ai-a-discussion-with-dan-weld-1ea704dc9548
|
2018-08-17
|
2018-08-17 17:43:02
|
https://medium.com/s/story/explainable-ai-a-discussion-with-dan-weld-1ea704dc9548
| false
| 1,059
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Future of Life
|
FLI catalyzes and supports research and initiatives to safeguard life and develop optimistic visions of the future. Official account.
|
e33e2d2a809c
|
FLIxrisk
| 1,361
| 93
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-25
|
2018-07-25 19:26:03
|
2018-07-25
|
2018-07-25 19:41:31
| 1
| false
|
en
|
2018-07-25
|
2018-07-25 20:34:17
| 13
|
1ea70f16050a
| 3.369811
| 22
| 0
| 0
|
What a first year this has been.
| 5
|
Basis Set Ventures Turns One
What a first year this has been.
Over the past year we’ve partnered with entrepreneurs who impress us with their drive and ideation, and we now have over a dozen portfolio companies in the BSV community that match our mission of using artificial intelligence to fuel work productivity.
We invest in AI that improves the way people work, both inside and outside the office, and we’ve written before about some of the white collar productivity ideas we’re excited about. We’ve also been spending more time understanding the traditional blue collar worker space as we grow our portfolio to include companies that are using AI to solve complex real-world problems. These companies are addressing issues like the labor shortage in farms (where we invested in FarmWise together with our friends at Felicis and Playground); the high churn of hourly workers (Workstream helps hourly workers find employment in restaurants and factories); machine downtime in factories (early BSV portfolio company Falkonry recently closed a Series A round); and lack of security (our investment in Turing Video helps make offices and factories more secure). For these and the many other unique companies in our growing portfolio, we couldn’t be more thrilled to partner with the talented, inspirational entrepreneurs bringing these solutions to market.
Building a fund is one of the hardest things I've done. Raising two kids is by far the hardest thing I've done. Building a fund while raising two young kids (or rather, "incubating" my second child while fundraising for and then launching BSV) has taken that to a new level. It's the BSV team and the support of our friends, founders, and partners that made this possible.
Last year we welcomed John Mannes to the investing team, and this spring we were thrilled to have Anna Chang join the team as well. Anna is an experienced leader in sales and operations who previously worked at Dropbox and Stripe. Our portfolio companies are already benefitting from her experience as she helps them develop go-to-market strategies and scale their businesses. Our advisors work with the BSV team and portfolio companies day in and day out; Niniane has been particularly instrumental in helping us build what is becoming one of the strongest networks of technical women leaders.
A big part of being a good partner to great companies means continuing to immerse ourselves in the space they operate in and all the rapid changes around the future of work, so this past year has also been filled with some unique side projects. In February, for example, I returned to the Midwest on a “Comeback Cities Tour” with Rep. Tim Ryan (D-Ohio), Rep. Ro Khanna (D-Calif.) and a dozen venture capitalists to see the innovation taking place there and how we can get more investment flowing to companies in this region. What really stood out to me was the range of market-specific solutions to address locational working challenges, driven by entrepreneurs uniquely positioned to understand and empathize with their users and needs they’re solving for. More recently, I’ve become a contributing author to Forbes where I’ve shared my thoughts on topics like the ways AI can create more work opportunity and finding a work-life balance through technology.
Over the past year, BSV has launched and grown a thriving Women in AI Community. What started as a group of 15 female CTOs and founders gathering for dinner once a month to discuss industry-related topics, has turned into a full-fledged community of women who help each other grow in tech and expertise just as they provide feedback and advice on leadership and other challenges. We also continue to host intimate events that bring together industry leaders and startup CEOs to spark conversation and discuss opinions and advice on themes like voice recognition, robotics, managing workforces, and insurance tech. We love being able to offer our office as the hub to gather together and foster an even closer community.
On that note, as the extended BSV family continues to work together, we’re excited to be able to regularly invite our growing community to bring their kids to our office so collaborative work can be just as attainable for working parents. Whether for our portfolio companies, advisors, pitching entrepreneurs, or anyone else in our community busy raising a family while developing AI innovations, it’s tremendously rewarding to foster a workspace where kids can amuse themselves in BSV’s Future of Play area while mom or dad takes a meeting or participates in our events.
As we embark on our second year as a firm, we are more optimistic than ever about the opportunity for AI to improve the productivity, process, and success of our many and varied collective work lives. We look forward to continuing this journey alongside our talented entrepreneurs who are tackling the problems of today to make our lives better tomorrow. If you or someone you know shares our beliefs, we would love to meet!
Lan Xuezhao, founding and managing partner, on behalf of the BSV team
|
Basis Set Ventures Turns One
| 241
|
basis-set-ventures-turns-one-1ea70f16050a
|
2018-07-25
|
2018-07-25 20:34:17
|
https://medium.com/s/story/basis-set-ventures-turns-one-1ea70f16050a
| false
| 840
| null | null | null | null | null | null | null | null | null |
Venture Capital
|
venture-capital
|
Venture Capital
| 32,826
|
Basis Set Ventures
|
We invest in companies that harness the opportunities for artificial intelligence (AI) to improve our work lives
|
a3af92fc575b
|
bsv
| 189
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
661161fab0d0
|
2018-07-29
|
2018-07-29 00:46:51
|
2018-07-29
|
2018-07-29 04:09:21
| 5
| false
|
en
|
2018-07-31
|
2018-07-31 12:36:33
| 0
|
1ea928a74a49
| 4.746541
| 4
| 0
| 0
|
The Year is 2056
| 5
|
Free Bobby
The Year is 2056
“Mr. Shmurda, we are impressed by your amicable behavior”, the magistrate said. “As you are aware, it is within my power to reduce your sentence down to 40 years and that time has come. Did you anticipate this day?”
Of course Bobby did. The last time he spoke to the magistrate, she was applauding his leadership in the prison gospel sessions. It was inspiring to see the inmates embrace religion, even as they were sheltered from the religious and technological changes of the outside world.
Clay from which we came
About two decades earlier, humanity had solved the coreference problem, coined as the “New Turing Test”. Given the sentences:
“She poured water from the jug into the cup until it was full”, and
“She poured water from the jug into the cup until it was empty”.
The coreference problem asks, what is “it”?
For humans it is quite obvious. But to understand this, we must understand concepts like cup, pouring, and physics, while combining them with language; an impossibly hard task for artificial intelligence. Until 2035, even the most sophisticated AI systems had not fully pieced together all the different domains of knowledge necessary to be considered “General Artificial Intelligence”.
Turns out silicon wasn’t enough for advanced intelligence. Even the best silicon TPUs could only make AI as smart as a frog, and from 2022 to 2034, the world entered yet another AI Winter. What we realized was that advanced intelligence required ever changing neurons, which was not possible with silicon unless we had nanobots. And nanobots were not possible without advanced intelligence to pioneer the manufacturing process. It was quite the chicken and egg problem, and it seemed like every industry from aerosols to quantum computing hit this computational dead-end.
But then came carbon-based TPUs, powered by genetically engineered STEM cells. When they first came out in 2025, GMO STEMs could only create simple replacement organs. Nevertheless it was heralded as a new era of human prosperity. Over the years, startups played around with making crazier and crazier organisms… if it was fair to call them that. Most lacked self-sufficient organs, requiring pre-processed “food” to provide the necessary chemical ingredients for molecular manufacturing. The world saw the birth of amazing organisms like spy-ravens, but they could neither survive autonomously nor reproduce. By 2030, most companies had given up on making the extremely complex bio-systems needed for self-sufficient life, save for a few persistent healthcare conglomerates like Avro.
5 years later, a joint team of Chinese and New Korea researchers proved in their landmark paper that manufactured, self-sufficient life was possible — all you had to do was give up control. By randomly substituting natural and artificial genes in animal STEM cells, eventually you’d get one that could communicate with the primitive electrical impulses used by brain scanners, and hope that the STEM cells would listen. Immediately, research teams across the world attempted to re-create silicon TPUs with carbon neurons that could self-update with directed STEM cells. Within months, multiple teams succeeded in creating biological brains that were able to self-improve with every 24 hour generation. The implications were so profound that the world unofficially adopted a new era-term. Humanity was no longer in the CE (Common Era) or AD (After Death, of Christ). We were now in the PC Era — Post Creation. And the PC Era was the era of General Artificial Intelligence.
If you ask anyone alive during this time, they will tell you that everything from 2035 P.C. onwards was a blur. Every month a new breakthrough was announced, and every month another couple million people rejected this brave new world. Numerous new religions sprung forth, and old religions also saw a massive resurgence in faith. By 2040 the commonly accepted creation myth was Panspermia: life in the universe was seeded by an advanced alien lifeform who had mastered carbon and spread it via interstellar ships. But no matter how advanced technology could get, life would never be able to travel faster than 15% of the speed of light. It was simply impossible, and thus the distances between star systems could never be crossed. All life could hope for was lucky longshots.
2056 P.C. On his first day out, Bobby walked into a garden of Eden — one of many in New York. He stared upwards at the tree in front of him, with the name of the designing company, Membio, etched in the wood. The branches hung heavy with bio-engineered moss, each branch manufacturing a different medicine. With human lifespans extended by cheap medicine and replacement organs, people became more poetic. In this case, the medicine tree was nicknamed "The Tree of Life". Breathing in slowly and heavily, Bobby calmed his anxious heart and continued to walk through the garden.
The sky was bright blue and clean. No A.I. doomsday ever came about, and despite the religious secularism that divided humanity, the world was peaceful. No one could explain why or how we got here, and it was almost surreal to think about. Children ran through the garden park with not a care in the world, as did their parents. People seemed to genuinely be happy, taking selfies with strangers and renting free bicycles and canoes. Public services were available to everyone, and quality of life was visible in the amusement parks, quaint countryside villages and free coffee accessible to everyone. The only time people were stressed was when they visited the zoos, and every garden of eden had one.
Passing through the interior of the zoo, Bobby paused to examine the birds. It was near impossible to distinguish the natural from the artificial. The birds looked the same, acted the same, and bled the same. As did the pandas, chimps and velociraptors. Like everyone else who stared at the mixed organisms, Bobby began to question his own existence. But unlike everyone else who shared these same thoughts, his life in prison overpowered any tech-enabled existentialism. He knew what it felt like to have no choice, no movement, no vote. Breathing in his new-found freedom, he knew that at least today, he was alive and real.
|
Free Bobby
| 113
|
free-bobby-1ea928a74a49
|
2018-07-31
|
2018-07-31 12:36:33
|
https://medium.com/s/story/free-bobby-1ea928a74a49
| false
| 1,037
|
where the future is written
| null | null | null |
Predict
|
predictstories@gmail.com
|
predict
|
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Kangze Huang
|
Finance, Code and Entrepreneurship. Founder of RentHero AI
|
2a0ed9f9a5df
|
kangzeroo
| 267
| 44
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-07
|
2018-06-07 21:54:54
|
2018-06-08
|
2018-06-08 11:11:01
| 3
| false
|
en
|
2018-06-08
|
2018-06-08 11:11:01
| 3
|
1eaa2f8f9133
| 1.674528
| 4
| 0
| 0
|
As one of the newest member of this Data Science universe, it is always jaw-dropping to see someone else’s demo, especially python gurus or…
| 4
|
Most Useful Keyboard Shortcuts for Jupyter Notebook
Photo by Courtney Corlew on Unsplash
As one of the newest members of this Data Science universe, I always find it jaw-dropping to watch someone else's demo, especially a Python guru or code ninja building code live in a Jupyter Notebook, instantly switching between cells and jumping from one code block to another. Here are the keyboard shortcuts I enjoy using most, the ones behind that kind of on-screen magic.
Command Mode vs Edit Mode
Before we talk about the shortcuts, make sure that you are in the right mode in the Jupyter notebook. If a shortcut is not working, check the color of the bar on the left of the cell to see which mode you are in:
Command Mode in Blue
Edit Mode in Green
Essential keyboard shortcuts in Jupyter notebook:
Enter Edit Mode and Escape to Command Mode: press the Return key in command mode to start editing a cell, and press the Escape key in edit mode to go back to command mode
Change cell between Markdown and Code: M key in command mode to make Markdown and Y key in command mode to make Code
Insert cell above and below: A key to insert above, B key to insert below, both in command mode.
Delete cell: hit D twice in command mode, as in D, D
Merge multiple cells: select the targeted cells (Shift + Up/Down arrow in command mode) and hit Shift + M.
Split cell: Ctrl + Shift + - in edit mode, with your cursor at the split point.
Toggle line numbers: L in command mode.
Run cell & select below: Shift + Return in command/edit mode (use Alt + Return to run and insert a new cell below)
These basic shortcuts will make your Jupyter notebook experience much more fruitful and bring you one step closer to being a code ninja. On top of that, practising them keeps your hands on the keyboard and builds the muscle memory that speeds up code building.
|
Most Useful Keyboard Shortcuts for Jupyter Notebook
| 104
|
most-useful-keyboard-shortcuts-for-jupyter-notebook-1eaa2f8f9133
|
2018-06-15
|
2018-06-15 19:22:56
|
https://medium.com/s/story/most-useful-keyboard-shortcuts-for-jupyter-notebook-1eaa2f8f9133
| false
| 298
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Kihoon Sohn
|
Data Scientist | Patient Listener
|
8e4c1dbe157c
|
kihoon.sohn
| 14
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
3bff3ecd1295
|
2018-04-16
|
2018-04-16 19:18:00
|
2018-04-16
|
2018-04-16 19:26:28
| 1
| false
|
en
|
2018-05-05
|
2018-05-05 11:56:12
| 20
|
1eab0fe4651b
| 0.988679
| 2
| 0
| 0
|
In the next thirty or ten or five years, 47% or 60% or between 6% and 39% of jobs will change, disappear, or be automated.
| 5
|
What’s a Human to Do?
In the next thirty or ten or five years, 47% or 60% or between 6% and 39% of jobs will change, disappear, or be automated.
None of us have a crystal ball, but every time we see a drone, make Alexa laugh, or let our phones pay for groceries, we sense the labor market is changing–and changing big.
Looks friendly enough…
Reports and studies warning of imminent joblessness or promising increased opportunity fill our feeds. Solutions are few–training, self-employment, relocation–and can lack context. It’s as if we’re starting from scratch.
But we don’t have to. There are lots of answers in what workforce systems are already doing. Regional industry partnerships, peer networks, and labs, labs, and more labs. Apprenticeship, TAACCCT, and anything, anytime, anywhere training. Mission-centered innovations around youth, entrepreneurship, good business. Better data.
More tools and resources are emerging in the social and private sectors — Universal Basic Income, platform-based learning, the restructuring of organizations, work, and employment.
Let's hive-mind and work together so we're ready for coming changes, tomorrow — and every day after.
You in? Let us know how you and yours are contributing to a better tomorrow. http://bit.ly/2HD4IPN (It’s short, I promise). And ping us at Social Policy Research.
|
What’s a Human to Do?
| 41
|
whats-a-human-to-do-1eab0fe4651b
|
2018-05-05
|
2018-05-05 11:56:13
|
https://medium.com/s/story/whats-a-human-to-do-1eab0fe4651b
| false
| 209
|
A humane Future of Work calls for modern social safety net, smarter use of gigs, hustles, and platforms, top-notch training and reskilling, and AI and robot friends where we need them most. We have choices. Let’s make good ones.
| null |
TheTomorrowProjectatWork
| null |
The Tomorrow Project
|
Kristin_Wolff@spra.com
|
the-tomorrow-project
|
UNIVERSAL INCOME,GIG ECONOMY,FUTURE OF WORK,AI,AUTOMATION
|
kristinwolff
|
Basic Income
|
basic-income
|
Basic Income
| 2,763
|
kristin wolff
|
Thinker, doer, aspiring rainmaker. Thinks, does at Social Policy Research, thinkers+doers. #workoutloud #FutureofWork #socinn #socent #work #civictech #opendata
|
4d7a968b4e43
|
kristinwolff
| 276
| 323
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-03
|
2018-05-03 12:13:42
|
2018-05-03
|
2018-05-03 12:17:15
| 2
| false
|
en
|
2018-05-03
|
2018-05-03 12:17:15
| 4
|
1eabf8ab043c
| 2.153145
| 0
| 0
| 0
|
A proper chatbot strategy can be helpful to determine investment and forecasted gains, unify the approach across your business, and gain…
| 5
|
Guide to Building Your Enterprise Chatbot Strategy
A proper chatbot strategy can be helpful to determine investment and forecasted gains, unify the approach across your business, and gain stakeholder buy-in.
Smart virtual assistants, or chatbots, are here, and they are a reality for enterprises everywhere. By offering a closed-loop approach to better customer engagement, bots offer new advantages for businesses looking to get ahead of the competition to capture new, repeat, and referral business, and even reduce operational spend in doing so. Bots turn the traditional, frustrating digital experiences your customers are used to into conversational, personalized, and instantly gratifying engagements, resulting in smarter, higher-value purchase and service interactions. Artificial intelligence (AI) powered enterprise chatbot solutions are the future!
Chatbots have been around for some time, but organizations have only now started adopting chatbot technology from a business point of view. The reason for adopting chatbots for business is simple: more people now use chat services than any other communication medium.
Leveraging the power of AI bots
Customer experience angst has a measurable business impact — and it isn’t positive. 87% of customers indicate their customer service experience impacts their decision to do business with a vendor. 82% are likely to stop spending with a company due to a bad service experience.
Using a combination of Natural Language Processing (NLP), machine learning, and AI, bots are poised to transform the digital customer experience. Customers no longer need to decide which medium best matches their requirements.
Also Read: What is an Enterprise Chatbot Platform? A Guide and Checklist To Elevate Your Chatbot Strategy and Capabilities
Creating a chatbot strategy for enterprise
Perhaps you are a Chief Information Officer (CIO), an Innovation Lead, or otherwise responsible for your organization's technology roadmap, and you understand the substantial productivity gains AI and chatbots yield across areas like ITSM, Sales, Banking, Finance, ERP, Retail, HR, and customer service. Either way, a proper chatbot strategy can help you determine investment and forecasted gains, unify the approach across your business, and gain stakeholder buy-in.
Kore.ai’s findings reveal that the benefits of an enterprise chatbot platform extend well beyond the simple cost of acquiring the technology and encompass how the platform is architected, implemented, and modified over time. In order to see the true cost and payoff of enterprise chatbot deployment, buyers must carefully evaluate the immediate and long term costs of acquiring and maintaining the technology.
Access the complete CIO Toolkit for valuable resources to help you define, develop, and execute a chatbot strategy with quantifiable results for any industry or enterprise function. This kit provides practical, experience-driven resources to help executives and enterprise technology teams plan, define, and execute a conversational computing strategy that’s a win-win for your customers, employees and business. Get the CIO Toolkit
Originally published at blog.kore.ai.
|
Guide to Building Your Enterprise Chatbot Strategy
| 0
|
guide-to-building-your-enterprise-chatbot-strategy-1eabf8ab043c
|
2018-05-03
|
2018-05-03 12:17:16
|
https://medium.com/s/story/guide-to-building-your-enterprise-chatbot-strategy-1eabf8ab043c
| false
| 469
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Anirban Guha
|
Seasoned inbound marketing professional and tech blogger. Loves to write about #AI, #Marketing, #Chatbots and #Technology. Follow me on Twitter @anibeg25
|
d1943a8d3b6e
|
anirbanguha_73947
| 55
| 83
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-10
|
2018-08-10 06:51:21
|
2018-08-10
|
2018-08-10 06:58:34
| 8
| false
|
en
|
2018-08-10
|
2018-08-10 07:02:21
| 6
|
1eaca8f2f63e
| 5.348428
| 4
| 0
| 0
|
Introduction
| 5
|
Understanding Batch Normalization
Introduction
Batch Normalization is a very well-known method for training deep neural networks. It was introduced by Sergey Ioffe and Christian Szegedy from Google's research lab. Batch Normalization is about normalizing the hidden units' activation values so that the distribution of these activations remains the same during training. While training a deep neural network, if the distribution of the hidden activations changes because of changes in the weights and biases at a layer, this causes rapid changes in the layers above it, which slows down training a lot. This change in the distribution of the hidden activations during training is called internal covariate shift, and it hurts the training speed of the network. Batch normalization was designed specifically to overcome the problem of internal covariate shift. Back in 1998, the paper "Efficient BackProp" by Yann LeCun, Léon Bottou and a few others showed that normalizing or whitening the input data helps train neural networks very efficiently. The same method should be applied at the activation level in order to train deep neural networks efficiently. Consider the picture shown below.
Here we have a deep neural network with 3 hidden layers along with an input and an output layer. Each hidden layer has its own weight matrix and bias vector, as shown in the figure. The input at each layer goes through an affine transformation using the weight matrix and bias vector. For example, the output from hidden layer L2 acts as the input to hidden layer L3: the L2 activations are transformed by multiplying with the layer-3 weight matrix and adding the bias values. This output is passed through an activation function like sigmoid, ReLU or tanh, and the output of hidden layer L3 is obtained. This process is repeated at every hidden layer. As we saw, the layer-3 activations are directly affected by the layer-2 activations. If the distribution of the layer-2 activation values changes rapidly, it hurts the efficiency of training the deep neural network.
Consider the picture shown above, where we view one single deep neural network as multiple subnetworks. During training we normalize the inputs [x1, x2, ..., xN] to train the network efficiently. If we split one single DNN into multiple subnetworks, we can think of the hidden activations from layer 1 as the inputs to the second subnetwork, and we can then think of applying the same normalization to the hidden activations of layer 1 (the inputs to layer 2).
Batch Normalization
In the section above we saw why it is necessary to normalize the hidden unit activation values. In this section we will see how. Normalizing any data means finding its mean and variance and transforming the data so that it has zero mean and unit variance. In our case we want to normalize each hidden unit activation. Suppose we have d hidden units in a hidden layer of a deep neural network. We can represent the activation values of this layer as x = [x1, x2, ..., xd]. We can then normalize the kth hidden unit activation using the formula below:

x̂(k) = (x(k) − E[x(k)]) / sqrt(Var[x(k)])
Here x̂(k) is the normalized value of the kth hidden unit, E[x(k)] is the expectation (the mean) of the kth unit's values, and Var[x(k)] is its variance. After normalization each hidden unit has zero mean and unit variance, but we typically do not want to force a mean of 0 and a variance of 1. Instead we want the network to learn and adapt these values. For this we introduce two new learnable parameters per unit, a scale γ(k) and a shift β(k), loosely controlling the learned variance and mean. These parameters are learned and updated along with the weights and biases during training. The final normalized, scaled and shifted version of the kth hidden unit activation is given below:

y(k) = γ(k) · x̂(k) + β(k)
Typically, when training a deep neural network, we don't feed the entire dataset in one shot because the computational cost is too high. Instead, neural networks are trained with stochastic optimization techniques, where a small batch of data is sampled from the whole dataset and the network parameters are updated based on the loss value for that batch. The assumption that the optimization will still find a good minimum goes back to the classic work by Herbert Robbins on stochastic approximation. In training a deep neural network we use mini-batches of size 32, 64, 128, etc., so batch normalization is applied per mini-batch, as described in the steps below.
Assume we have a mini-batch of m training examples and pass it to our neural network. At layer i we get the hidden activation matrix Hi, with one row per example and one column per hidden unit. We then compute the mean and variance of each column, as shown in the figure below, and apply the batch normalization transformation.
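Here is a minimal NumPy sketch of that per-column transformation for one mini-batch. The names H, gamma, beta and eps are mine; eps is the small constant added to the variance for numerical stability, as in the original paper.

import numpy as np

def batch_norm_forward(H, gamma, beta, eps=1e-5):
    # H: (m, d) activations for a mini-batch of m examples and d hidden units.
    # gamma, beta: (d,) learnable scale and shift parameters.
    mu = H.mean(axis=0)                    # per-unit (column) mean
    var = H.var(axis=0)                    # per-unit variance
    H_hat = (H - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * H_hat + beta            # scale and shift

H = np.random.randn(64, 128) * 3.0 + 2.0  # fake activations, batch of 64
out = batch_norm_forward(H, np.ones(128), np.zeros(128))
print(out.mean(axis=0)[:3], out.std(axis=0)[:3])  # means ~0, stds ~1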
During training we update the batch normalization parameters along with the neural network's weights and biases. One more important observation about batch normalization: it also acts as a regularizer, because of the randomness introduced by computing the statistics on mini-batches.
Batch Normalization during inference
During the testing or inference phase we can't apply the same batch normalization as we did during training, because we might pass only one sample at a time, and it doesn't make sense to compute a mean and variance over a single sample. For this reason we compute a running average of the mean and variance of the kth unit during training, and use those running values together with the trained batch-norm parameters during the testing or inference phase. The process can be understood from the picture below, which explains the steps of the inference phase.
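A minimal sketch of that bookkeeping, in the same NumPy style as above; the momentum value of 0.9 is a typical but assumed choice:

import numpy as np

running_mu = np.zeros(128)   # running statistics, updated during training
running_var = np.ones(128)
momentum = 0.9

def bn_train_step(H, gamma, beta, eps=1e-5):
    global running_mu, running_var
    mu, var = H.mean(axis=0), H.var(axis=0)
    running_mu = momentum * running_mu + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return gamma * (H - mu) / np.sqrt(var + eps) + beta

def bn_inference(x, gamma, beta, eps=1e-5):
    # x can be a single sample; no batch statistics are needed at inference.
    return gamma * (x - running_mu) / np.sqrt(running_var + eps) + beta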
How to use Batch-Normalization in Deep learning libraries
Tensorflow
In Tensorflow you can use the tf.nn.batch_normalization API to add batch normalization to your deep neural networks. This API normalizes the activations with a mean and variance you supply and applies the batch-norm scale and shift. The original paper reports reaching the same accuracy as the baseline network with 14x fewer training steps.
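A minimal sketch of that API, where the shapes and variable names are my own illustration; tf.nn.moments computes the batch statistics that the op then uses:

import tensorflow as tf

x = tf.random.normal([64, 128])       # a mini-batch of activations
gamma = tf.Variable(tf.ones([128]))   # learnable scale
beta = tf.Variable(tf.zeros([128]))   # learnable shift

mean, variance = tf.nn.moments(x, axes=[0])  # per-unit batch statistics
y = tf.nn.batch_normalization(x, mean, variance,
                              offset=beta, scale=gamma,
                              variance_epsilon=1e-5)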
Pytorch
In PyTorch you can use torch.nn.BatchNorm1d or torch.nn.BatchNorm2d to apply batch norm to a layer of your neural network. The picture below is the code that I wrote for 1-D convolution over speech signals, which uses batch norm at every convolution layer.
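As a hedged reconstruction of the pattern that code follows, here is a 1-D convolutional block for speech features with batch norm after every convolution; the channel sizes and kernel width are assumptions.

import torch
import torch.nn as nn

class ConvBNBlock(nn.Module):
    # Conv1d -> BatchNorm1d -> ReLU: the per-layer pattern described above.
    def __init__(self, in_channels, out_channels, kernel_size=5):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)
        self.bn = nn.BatchNorm1d(out_channels)  # normalizes over batch and time

    def forward(self, x):  # x: (batch, channels, time)
        return torch.relu(self.bn(self.conv(x)))

net = nn.Sequential(ConvBNBlock(40, 64), ConvBNBlock(64, 128))
features = torch.randn(8, 40, 200)  # e.g. 8 utterances, 40 filterbanks, 200 frames
print(net(features).shape)          # torch.Size([8, 128, 200])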
About Me
I currently work at an AI company based in Bangalore called Cogknit Semantics. We work on Speech, Computer Vision and NLP problems and have built very good solutions for Speech, Image and NLP tasks. We have published many papers in both national and international conferences. Our speech team was the runner-up in a challenge on building speech recognition systems for 3 Indian languages conducted by Microsoft. Feel free to chat with us, and visit our company website here.
|
Understanding Batch Normalization
| 22
|
understanding-batch-normalization-1eaca8f2f63e
|
2018-08-10
|
2018-08-10 07:02:21
|
https://medium.com/s/story/understanding-batch-normalization-1eaca8f2f63e
| false
| 1,117
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Krishna D N
|
I am a research engineer at Cogknit Semantics and i want to understand how brain works.
|
2076601b691b
|
krishna_84429
| 6
| 30
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
395e365436f0
|
2018-01-05
|
2018-01-05 06:18:30
|
2018-01-05
|
2018-01-05 06:56:27
| 3
| false
|
en
|
2018-06-26
|
2018-06-26 04:09:58
| 4
|
1eaf3f2693e7
| 3.633019
| 9
| 1
| 0
|
Today, I decided to blow a crap ton of money on a deep learning rig.
| 5
|
Deep Learning Rig — Part One
Today, I decided to blow a crap ton of money on a deep learning rig.
What is a Deep Learning Rig?
A deep learning rig is a computer that uses graphics cards to parallelize math operations commonly found in deep learning.
To be a little more concrete, I’ll give my personal view of what such a computer looks like:
A computer that runs Linux software (because Tensorflow and Windows don't mix), paired with powerful graphics cards made by NVidia. As of today, Ubuntu seems to be the most prevalent OS distribution that runs most Deep Learning (DL) frameworks reliably (Tensorflow, CNTK, PyTorch, Keras). As of today, NVidia GPUs (Graphics Processing Units) are the only graphics cards that have DL code written for them.
Graphics Cards: Usage
Graphics cards do hundreds of thousands of math operations at once. Each operation is performed slightly slower than if it were handled on a CPU, but this sacrifice produces high throughput for GPUs. GPUs are greatly more efficient than CPUs when dealing with massive amounts of numerical data that needs to be math'd.
The connection to deep learning lies in the massive amount of numerical data that needs to be math'd. Ironically, the mining process for blockchain currencies also relies on this same principle to generate cryptosecurity, so GPUs are sold out on Amazon and are just annoying to find.
Graphics Cards: Politics
Graphics cards are the breath and heartbeat of the deep learning community. They’re the extension of Moore’s Law that allowed for modern neural networks to become trainable in a workable amount of time. Using them is imperative to conducting any sort of research, business practice, or competition (@Kaggle).
This is the gold rush of the deep learning era, where we all charge towards the promise of untold riches by applying math and computer science techniques to problems previously unbreakable. (Side note: we're not that sure these algorithms are guaranteed to work forever, because we don't understand how or why they're efficient.) And as with all gold rushes, it's the clothing, washing, and food stores that make all the money. In the DL era, that's NVidia & AWS. You just can't deep-learn anything without NVidia, or at scale without AWS.
Basically, AWS charges ~$1/hr for using a single GPU accelerated Ubuntu instance for deep learning. However, they charge another ~$1/hr for persistent data storage of large datasets, which are (! surprise !) widely prevalent in the DL community. And that cost quickly racks up as you train models overnight, or if you need to create multiple GPU accelerated instances for different experiments. It’s a great service, don’t get me wrong. So much research, especially at Berkeley’s AI Research (BAIR) lab is dependent on this service.
It’s just expensive and a long-term thorn in your ass. Now, the computer.
The Rig: Process
Because I'm stupid, and because I used up all my funds from 3 years of savings. Strangely, I couldn't find any parts lists for a Deep Learning computer on either pcparts.com or Amazon's built-in "idea lists." But I didn't look that hard, so they may be out there. Also, not many popular guides?? Lots of small blog posts, but many people have different $ spend and different GPUs…
My links to a lot of build guide blogs came from Andy Twigg. But the one I followed basically to the "T" was from Slav Ivanov. Slav's got a really great guide, and I hope that his driver-installation walkthrough will help me when my parts arrive.
As of writing this piece, I believe the last piece, the case, will come in around 5 days from now. That gives me 3 days to set the rig up and either leave it running at my house, with a secure SSH workflow to access it from Berkeley, or bring it to my dorm. There's not much space at my dorm, I'll definitely need ethernet cables, and since I'll definitely be running the GPUs day and night, it'll be very loud. I'll cross that bridge when I get there. I may regret saying that later, but that's just where I'm at right now. If I get burned by my poor decision making habits, then I will be well on my way to having learned a valuable lesson.
The Rig: Parts
While I'd love to spell out all the parts I've used, the majority come from Slav's post, which is linked above. ATM, I'm planning on picking up 2 GeForce 1080 Ti's from Frys tomorrow. The only difference between my parts and Slav's is that I picked a different wifi adapter because I'm edgy like that.
For my full parts list (minus the Frys GPUs), I created an Amazon “Idea List”.
That’s all for now! I’ll post an update when I actually begin building my new computer and try to install stuff.
Flat ethernet cable for my dorm, so I can slip the cord in between the desks and bed.
Z270 motherboards are the only motherboards compatible with the 7th generation Intel CPU processors. Thanks to Mai at Frys who helped me with that.
MSI GeForce GTX 1080’s that I’m putting into the TUF motherboard above.
Ciao, mis amigxs!
|
Deep Learning Rig — Part One
| 59
|
deep-learning-rig-part-one-1eaf3f2693e7
|
2018-06-26
|
2018-06-26 04:09:58
|
https://medium.com/s/story/deep-learning-rig-part-one-1eaf3f2693e7
| false
| 817
|
ML and Data Science Stuff.
| null | null | null |
Imploding Gradients
| null |
implodinggradients
|
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,STATISTICS,KAGGLE
| null |
Deep Learning
|
deep-learning
|
Deep Learning
| 12,189
|
Noah Gundotra
|
UC Berkeley ’21 Computer Science and Math Major
|
6b8fc902a287
|
ngundotra
| 79
| 63
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
70f217fc23a8
|
2018-09-05
|
2018-09-05 20:49:56
|
2018-09-05
|
2018-09-05 09:00:59
| 2
| true
|
en
|
2018-09-05
|
2018-09-05 23:20:59
| 0
|
1eb2adea398e
| 2.866352
| 31
| 1
| 0
|
Machine translation works well for sentences but turns out to falter at the document level, computational linguists have found
| 4
|
Human Translators Are Still on Top — for Now
Machine translation works well for sentences but turns out to falter at the document level, computational linguists have found
Photo: Nazar Abbas Photography/Moment/Getty Images
By Emerging Technology from the arXiv
You may have missed the popping of champagne corks and the shower of ticker tape, but in recent months computational linguists have begun to claim that neural machine translation now matches the performance of human translators.
The technique of using a neural network to translate text from one language into another has improved by leaps and bounds in recent years, thanks to the ongoing breakthroughs in machine learning and artificial intelligence. So it is not really a surprise that machines have approached the performance of humans. Indeed, computational linguists have good evidence to back up this claim.
But today, Samuel Laubli at the University of Zurich and a couple of colleagues say the champagne should go back on ice. They do not dispute their colleagues’ results but say the testing protocol fails to take account of the way humans read entire documents. When this is assessed, machines lag significantly behind humans, they say.
At issue is how machine translation should be evaluated. This is currently done on two measures: adequacy and fluency. The adequacy of a translation is determined by professional human translators who read both the original text and the translation to see how well it expresses the meaning of the source. Fluency is judged by monolingual readers who see only the translation and determine how well it is expressed in English.
Computational linguists agree that this system gives useful ratings. But according to Laubli and co, the current protocol only compares translations at the sentence level, whereas humans also evaluate text at the document level.
So they have developed a new protocol to compare the performance of machine and human translators at the document level. They asked professional translators to assess how well machines and humans translated over 100 news articles written in Chinese into English. The examiners rated each translation for adequacy and fluency at the sentence level but, crucially, also at the level of the entire document.
The results make for interesting reading. To start with, Laubli and co found no significant difference in the way professional translators rated the adequacy of machine- and human-translated sentences. By this measure, humans and machines are equally good translators, which is in line with previous findings.
However, when it comes to evaluating the entire document, human translations are rated as more adequate and more fluent than machine translations. “Human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences,” they say.
The researchers think they know why. “We hypothesise that document-level evaluation unveils errors such as mistranslation of an ambiguous word, or errors related to textual cohesion and coherence, which remain hard or impossible to spot in a sentence-level evaluation,” they say.
For example, the team gives the example of a new app called “微信挪车,” which humans consistently translate as “WeChat Move the Car” but which machines often translate in several different ways in the same article. Machines translate this phrase as “Twitter Move Car,” “WeChat mobile,” and “WeChat Move.” This kind of inconsistency, say Laubli and co, makes documents harder to follow.
This suggests that the way machine translation is evaluated needs to evolve away from a system where machines consider each sentence in isolation.
“As machine translation quality improves, translations will become harder to discriminate in terms of quality, and it may be time to shift towards document-level evaluation, which gives raters more context to understand the original text and its translation, and also exposes translation errors related to discourse phenomena which remain invisible in a sentence-level evaluation,” say Laubli and co.
That change should help machine translation improve. Which means it is still set to surpass human translation — just not yet.
|
Human Translators Are Still on Top — for Now
| 276
|
human-translators-are-still-on-top-for-now-1eb2adea398e
|
2018-09-05
|
2018-09-05 23:20:59
|
https://medium.com/s/story/human-translators-are-still-on-top-for-now-1eb2adea398e
| false
| 658
|
MIT Technology Review
| null |
technologyreview
| null |
MIT Technology Review
| null |
mit-technology-review
|
TECHNOLOGY,TECH,ARTIFICIAL INTELLIGENCE
|
techreview
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
MIT Technology Review
|
Reporting on important technologies and innovators since 1899
|
defe73a9b0ba
|
MITTechReview
| 23,166
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-18
|
2018-06-18 21:00:30
|
2018-06-18
|
2018-06-18 21:03:49
| 2
| false
|
en
|
2018-06-18
|
2018-06-18 21:03:49
| 7
|
1eb6608a263b
| 2.311635
| 0
| 0
| 0
|
How neural networks are producing mental images
| 3
|
How neural networks are creating mental images
How neural networks are producing mental images
Beau Perkins
06/17/2018
Pipe cleaner animals demonstrate how our minds process shape (source: https://pdfs.semanticscholar.org/bdf9/b8f4de001f5f37bc844efbce1210d581d599.pdf)
At the turn of the seventies, the first wave of scientific research on mental images posed some important questions about how we process information. What are our constraints? What is essential to determining the shape of an object? What kind of coordinate system is best to represent spatial data? And within that system, what do we need to focus on to get the optimal result for the least cost?
Each of these questions is part of the bigger picture: How do we determine the whole from its parts? We want to know what kind of magic our brain does to construct an entire episode from fragments because if we model it, we learn even more about it.
Five decades later…
Only a few days ago, the Generative Query Network (GQN) came out in Science magazine. Just as the mind absorbs visual information and rapidly locates it within a larger context, so the GQN learns about scenes through a handful of observations of its surroundings. Give it a few 2D viewpoints in a scene, and it will actually render an entire 3D environment based on inference.
Unlike earlier attempts at scene representation, the GQN's learning isn't supervised or in need of labeled datasets or preprogrammed lighting rules. It generates lighting, shadows, and perspective all on its own. Not only that, but it can classify the objects with incredible accuracy — a long way from the kind of computation imagined in the 20th century.
The GQN is composed of two different neural networks linked end-to-end. The first network forms a representation of the data: the objects' relative positions to one another, color, etc. If the representation is good, then the second network, trained via backpropagation, can use it to generate a good 3D scene from an arbitrary perspective.
(source: http://science.sciencemag.org/content/360/6394/1204.full, Fig. 1)
This research is a landmark in modeling the formation of mental images, although it only works for synthetic scenes. It’d be an error, however, to say that it’s the first of its kind. Three years ago, Google undertook a much more complicated task called DeepStereo.
Back in time, from synthetic to real
Trained on panoramas taken from a moving vehicle, DeepStereo predicts and generates new perspectives from “real-world, natural images”. For such a complicated problem, it was a precocious project. They figured that, since convolutional networks using stochastic gradient variational Bayes were able to produce different poses of faces from initial images acceptably well, they could probably do the same for giant scenes.
Though the result is fascinating and impressive, it's not as honest as it could be. Where it can't account for uncertainty due to lack of detail, DeepStereo opts to blur the image. In addition, it's easy to see the video's graphics jumping from time to time. But what's important is that it showed we can eventually, with lots of research and increasing hardware capacity, create a system for scene representation with the accuracy of the GQN and the scope of DeepStereo.
|
How neural networks are creating mental images
| 0
|
how-neural-networks-are-creating-mental-images-1eb6608a263b
|
2018-06-18
|
2018-06-18 21:03:50
|
https://medium.com/s/story/how-neural-networks-are-creating-mental-images-1eb6608a263b
| false
| 511
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Beau Perkins
| null |
cce0c307206f
|
beau.perkins18
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-16
|
2017-11-16 01:59:56
|
2017-11-16
|
2017-11-16 02:08:55
| 1
| false
|
en
|
2017-11-16
|
2017-11-16 02:08:55
| 9
|
1eb854fbd52c
| 3.007547
| 1
| 0
| 0
|
With the latest technological breakthroughs, companies have realized the need to use AI to gain a competitive sales edge.
| 5
|
The Future of Sales is Artificial Intelligence. Here’s Why…
Excerpt from an article from Acuvate blogs. Read the full post here.
Needless to say, artificial intelligence is currently leading the conversation in the tech industry.
From changing functionalities of a basic Google search to boosting adoption rates of a corporate intranet, AI technologies have been transforming every aspect of both our professional and personal lives.
And most predictions are positive too.
AI will be a top five investment priority for more than 30 percent of CIOs — Gartner
With the latest breakthroughs in areas such as Natural Language Processing and Machine Learning, companies have realized the need to use AI technologies to gain a competitive edge.
AI is bound to benefit various teams across an organization, but we predict that AI-led transformation will be particularly key for the sales department.
More than any other department, the efficiency of sales is highly dependent on automating labor-intensive tasks and obtaining the right data at the right time. And AI might just be the perfect answer for these problems.
The AI market is expected to be worth USD 16.06 Billion by 2022, growing at a CAGR of 62.9% from 2016 to 2022 — Markets and Markets
Here are four ways AI is transforming the sales department:
Increased CRM adoption
Sales teams in many organizations suffer from low CRM adoption rates. Research has shown that CRM adoption is less than 50% and 74% of sales teams using CRM systems have poor adoption rates.
Most CRM systems today are often filled with tons of high-volume data making it difficult for sales reps to get the required information quickly.
AI technologies like chatbots ingest data from not just CRMs but also other LOB systems like ERP or Data warehouses to generate and analyze key information. By integrating a chatbot into your CRM systems, Sales reps can just ask a natural language question and the bot will present the accurate information within seconds.
Sales Forecasting With Predictive Analytics
Whether it's to identify new product/market segments, achieve a higher OTIF delivery rate, drive better MSL (Must Stock List) compliance or estimate demand, sales forecasting is a must.
However sales forecasting has become a rather tedious task for organizations and there is also an uncertainty in the accuracy of estimations. In fact 54% of deals forecasted by sales reps never close.
Another significant branch of AI is predictive analytics. Powered by the combination of artificial intelligence, machine learning, statistical modelling and data mining, this technology increases the accuracy of important estimations in sales forecasting. For industries where sales teams constantly struggle to run the right promotions (say, FMCG), predictive analytics can even recommend the promotions which yield the maximum ROI.
Hassle free data management
The efficiency of a sales team is highly dependent on one aspect — right information at the right time.
In a typical work day, a sales rep has to go through tons of reports, spreadsheets and data before making a decision. In addition, this data may not always be structured in the way he/she wants.
Again, AI chatbots save all the hassle here.
If integrated with organizational messaging platforms like Skype for Business, Slack, Skype, Yammer, etc., the bot can act as a virtual assistant, and sales representatives don't have to keep switching applications to access data. Chatbots can present the data in multiple formats (simple text, rich media or mixed), making it simple and intuitive for the sales team to quickly grasp the key data points.
Sales Intelligence with Prescriptive Insights
Prescriptive analytics is a relatively new extension of predictive analytics. With prescriptive analytics, sales teams can not only get insights on what is going to happen in the future but also on why it's going to happen.
With prescriptive analytics, sales reps can easily spot the key factors which are affecting sales performances and can also get powerful insights on why these factors are crucial.
Conclusion
Traditionally the sales job was a tiresome one. With artificial intelligence, a sales rep can automate mundane and time-consuming tasks and focus only on the ones which require human involvement.
62% of enterprises will use AI technologies by 2018 — Narrative Science
Considering current predictions, market trends and use cases of AI, it is evident that the faster a sales department adopts AI, the greater its competitive edge will be.
Excerpt from an article from Acuvate blogs. Read the full post here.
|
The Future of Sales is Artificial Intelligence. Here’s Why…
| 1
|
the-future-of-sales-is-artificial-intelligence-heres-why-1eb854fbd52c
|
2017-11-16
|
2017-11-16 14:04:36
|
https://medium.com/s/story/the-future-of-sales-is-artificial-intelligence-heres-why-1eb854fbd52c
| false
| 744
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Abhishek Shanbhag
|
Practice Head — AI & Automation focusing on #AI #Bots #RPA #ML #IoT at @Acuvate
|
eab2ce92ca35
|
acuvabi
| 31
| 32
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-14
|
2018-06-14 23:53:39
|
2018-06-14
|
2018-06-14 23:55:12
| 4
| false
|
en
|
2018-06-14
|
2018-06-14 23:55:12
| 1
|
1eb8a7a9bf19
| 3.567925
| 0
| 0
| 0
|
If you have read Part 1 and Part 2 of the Visualizing Network Data with Python posts you have probably noticed one missing piece of data…
| 2
|
Visualizing Network Data Using Python: Part 3
If you have read Part 1 and Part 2 of the Visualizing Network Data with Python posts you have probably noticed one missing piece of data; graphing data over time. At first this might seem very easy — just create a list of timestamps and bytes and you are done. The problem is that you will have thousands of packets which are hard to view and most of them will be at or close to your max MTU (usually 1500 bytes). It would look something like this and not be very helpful:
So the next thought is to group the data into bins of a set time, which is easy to do if you use the Pandas library in your code. The first step is to install Pandas:
pip3 install pandas
Building on our last example you will import Scapy, Plotly and datetime along with pandas.
from scapy.all import *
import plotly
from datetime import datetime
import pandas as pd
Recall the first few steps are to read the PCAP using Scapy and then loop through the file adding bytes and timestamps to lists. In Scapy, bytes are accessed with pkt.len and times are pkt.time. Times are in Unix Epoch, which I like to convert to a string using strftime so I can easily print them using pretty table or other tools.
#Read the packets from file
packets = rdpcap('example.pcap')
#Lists to hold packet info
pktBytes = []
pktTimes = []
#Read each packet and append to the lists.
for pkt in packets:
    if IP in pkt:
        try:
            pktBytes.append(pkt[IP].len)
            pktTime = datetime.fromtimestamp(pkt.time)
            pktTimes.append(pktTime.strftime("%Y-%m-%d %H:%M:%S.%f"))
        except Exception:
            pass
Next we start using Pandas. In Pandas the key element is a data frame. Our data frame is made of the bytes and time stamps. Bytes are a series created from the list we built earlier. We do that by using pd.Series with astype(int), so the values are stored as ints rather than strings.
#This converts list to series
bytes = pd.Series(pktBytes).astype(int)
You will then convert the list of times to a Pandas datetime series. You will use to_datetime with the option errors='coerce' to handle errors.
#Convert the timestamp list to a pd date_time
times = pd.to_datetime(pd.Series(pktTimes).astype(str), errors='coerce')
Now you create the dataframe with the elements.
#Create the dataframe
df = pd.DataFrame({"Bytes": bytes, "Times": times})
Then you will use the Times column as the index.
#Set the Times column as the index
df = df.set_index('Times')
If you want to do a little troubleshooting, you have some options. To print the data, simply issue print(df). Or issue df.describe() to see the types of data. I also usually do a print(df.tail()) to see just the last few lines of the data.
We still haven't binned the data. To do that we will create a new dataframe using resample(timePeriod). This example bins the data into 2-second bins, summing the data. You can also take an average using .mean().
#Create a new dataframe of 2 second sums to pass to plotly
df2 = df.resample('2S').sum()
print(df2)
And just like before we will create a graph using plotly with the newly binned data.
#Create the graph
plotly.offline.plot({
    "data": [plotly.graph_objs.Scatter(x=df2.index, y=df2['Bytes'])],
    "layout": plotly.graph_objs.Layout(title="Bytes over Time",
        xaxis=dict(title="Time"),
        yaxis=dict(title="Bytes"))})
Output
The complete program looks like this:
#!/usr/bin/env python3
from scapy.all import *
import plotly
from datetime import datetime
import pandas as pd

#Read the packets from file
packets = rdpcap('example.pcap')

#Lists to hold packet info
pktBytes = []
pktTimes = []

#Read each packet and append to the lists.
for pkt in packets:
    if IP in pkt:
        try:
            pktBytes.append(pkt[IP].len)
            #First we need to convert Epoch time to a datetime
            pktTime = datetime.fromtimestamp(pkt.time)
            #Then convert to a format we like
            pktTimes.append(pktTime.strftime("%Y-%m-%d %H:%M:%S.%f"))
        except Exception:
            pass

#This converts list to series
bytes = pd.Series(pktBytes).astype(int)

#Convert the timestamp list to a pd date_time
times = pd.to_datetime(pd.Series(pktTimes).astype(str), errors='coerce')

#Create the dataframe
df = pd.DataFrame({"Bytes": bytes, "Times": times})

#Set the Times column as the index
df = df.set_index('Times')

#Create a new dataframe of 2 second sums to pass to plotly
df2 = df.resample('2S').sum()
print(df2)

#Create the graph
plotly.offline.plot({
    "data": [plotly.graph_objs.Scatter(x=df2.index, y=df2['Bytes'])],
    "layout": plotly.graph_objs.Layout(title="Bytes over Time",
        xaxis=dict(title="Time"),
        yaxis=dict(title="Bytes"))})
Conclusion
I hope you found this helpful. You can easily build on this toward more complex and unique programs that solve security and other IT problems. Since there was so much interest in this topic, I will follow up with one more complex data visualization example in the next week. So stay tuned for Part 4! Thanks for reading, and please reach out if you have any questions.
As previously published on Automox.com
|
Visualizing Network Data Using Python: Part 3
| 0
|
visualizing-network-data-using-python-part-3-1eb8a7a9bf19
|
2018-06-14
|
2018-06-14 23:55:13
|
https://medium.com/s/story/visualizing-network-data-using-python-part-3-1eb8a7a9bf19
| false
| 760
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Automox
|
Automate your inventory, patching and endpoint compliance through one reliable platform.
|
31ec6960e2b2
|
automox
| 10
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-16
|
2018-08-16 23:39:27
|
2018-08-16
|
2018-08-16 23:44:47
| 1
| false
|
en
|
2018-08-17
|
2018-08-17 09:52:02
| 1
|
1eb92779ee26
| 0.486792
| 0
| 0
| 0
|
Even though octave website doesn’t go into details on how GNU Octave can be installed for various distros of Linux, a quick ppa search of…
| 5
|
Installing Octave on Ubuntu through ppa:octave/stable Channel
Even though the Octave website doesn't go into detail on how GNU Octave can be installed for various distros of Linux, a quick PPA search for Octave on Google turns up the stable release channel. To install Octave, execute the following commands in your terminal:
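(The original post's command block was not captured; based on the ppa:octave/stable channel named in the title, the commands are presumably:)
sudo add-apt-repository ppa:octave/stable
sudo apt-get update
sudo apt-get install octave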
Once the above installs, Octave can be launched either by entering octave in your terminal or via the Octave desktop launcher, found through the Windows key.
|
Installing Octave on Ubuntu through ppa:octave/stable Channel
| 0
|
installing-octave-on-ubuntu-through-ppa-octave-stable-channel-1eb92779ee26
|
2018-08-17
|
2018-08-17 09:52:02
|
https://medium.com/s/story/installing-octave-on-ubuntu-through-ppa-octave-stable-channel-1eb92779ee26
| false
| 76
| null | null | null | null | null | null | null | null | null |
Ubuntu
|
ubuntu
|
Ubuntu
| 4,084
|
Himanshu Sharma
|
Undergrad @ NITA - High on ML, AI, OpenSource and Coffee. Primarily want to build better Recommender Systems, Image classification & Segmentation systems.
|
8f8658588de9
|
himanshuxd
| 6
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
9c321fdafce5
|
2017-09-06
|
2017-09-06 13:45:16
|
2017-09-06
|
2017-09-06 13:50:42
| 1
| false
|
en
|
2018-10-11
|
2018-10-11 20:53:09
| 0
|
1ebc58b5e389
| 2.396226
| 2
| 0
| 0
|
Planning for a trip can be a stressful time for travellers because it requires hours of research time. You might need a little help here…
| 5
|
#4 Bots + Travel Industry ✈️🌍
Planning for a trip can be a stressful time for travellers because it requires hours of research. You might need a little help here and there. That's where a bot steps in. A bot is a computer program that holds a conversation with users using artificial intelligence. A travel bot could help users save time, organize their trip or recommend places. It is available on your favourite messaging apps such as Facebook Messenger and Telegram. All the user needs to do is add the bot as a friend and you are good to go. It will be available 24/7, support different languages and provide immediate responses. The bot will be able to help users with different types of problems, ranging from planning a trip to finding what to do in a city.
Travellers experience a wide range of issues, such as planning, travel essentials and arriving safely from A to B. Chatbots might finally be the way to provide a hassle-free travel experience. A bot mixing personal travel data and destination content could become the perfect travel companion. Having a travel bot available can make planning for holidays much easier. A good travel bot should be able to understand and meet the user's needs:
Baggage Allowance
Finding accurate reviews
Free WIFI
Flight delays
This will save the user time because the bot will be able to provide an accurate and immediate response. Need help finding a travel insurance package? The bot will give you a variety of travel insurance options so you can compare which one suits your needs. Once you have booked your holiday, you can tell the bot when and where you are going. It will remind and update you on any changes to your trip. This will improve the way you travel so you can focus on the more important things.
Some of the current issues tourists face at Heathrow London Airport include finding the best route to the city, how to use the transport services, how to overcome language barriers and remembering flight information. The bot is your own personalised travel guide who can help you deal with any problems you face at the airport. A bot is the best option for this because it responds immediately, reducing time spent searching the internet and the need to disturb other travellers at the airport.
A travel bot shouldn't be limited to providing airport and flight information. It should also be able to inspire potential travellers and even allow them to book on the go. Being able to give personalised destination recommendations based on the user's location or preferences could be interesting for travellers. If the bot can notify users of any changes to their trip, it will bring real value and save money and time.
Customer service is one of the first areas that will benefit from a travel bot. It will improve the travel industry because it will save companies money and offer a better user travel experience. Hotels can improve their customer experience by using chatbots: they can send a welcome or goodbye message, provide room info and offer a personalised room service. The bot can also help airlines offer smart flight entertainment based on the user's preferences. Designing a travel bot which works well and meets the user's needs will enhance the travelling experience by offering a more tailored service.
|
#4 Bots + Travel Industry ✈️🌍
| 2
|
how-chatbots-can-impact-the-travel-industry-1ebc58b5e389
|
2018-10-11
|
2018-10-11 20:53:09
|
https://medium.com/s/story/how-chatbots-can-impact-the-travel-industry-1ebc58b5e389
| false
| 582
|
Follow us to explore a new way of storytelling.
| null |
iampopin
| null |
I AM POP
|
hello@iampop.in
|
iampop
|
FACEBOOK MESSENGER,MARKETING,CHATBOTS,MUSIC BUSINESS,TECH
|
iampopin
|
Bots
|
bots
|
Bots
| 14,158
|
ㅤㅤㅤZakaria
|
tech. thinker. optimist
|
2cf18d5399fc
|
zackaria01
| 37
| 0
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-08
|
2018-02-08 13:40:07
|
2018-03-12
|
2018-03-12 09:11:02
| 7
| false
|
en
|
2018-03-12
|
2018-03-12 09:11:02
| 0
|
1ebd56b5d181
| 3.827358
| 2
| 0
| 1
|
Biometric verifications is an authentication system that uses unique human information. This information has no mirror in the world. It is…
| 5
|
Biometric Verification with ML
Biometric verification is an authentication approach that uses information unique to each human; this information has no duplicate anywhere in the world. Presenting the authentication information requires physical presence, which is why this type of verification is used in real-world security systems. The most commonly known biometric verification systems are based on the following:
Fingerprint Recognition
Finger Vein Recognition
Retina and Iris Recognition
Hand/Palm Recognition
Voice Recognition
Signature Recognition.
Face Recognition
These verification techniques are commonly used around the world. For example, fingerprint recognition is commonly used in ATMs, for secure facility entry, and for employee access to working areas. Voice recognition is commonly used in call centers for identifying customers. Palm recognition is commonly used in medical services to verify patients with a high accuracy rate. Retina and iris recognition are commonly used for secure facility entrance, etc.
In this part, we explain how a system can recognize these patterns using machine learning techniques. Fingerprint recognition, retina/iris recognition, and hand/palm recognition, which are the most commonly used biometric systems, are explained in some detail. The other techniques are explained briefly.
Fingerprint Recognition
Fingerprint recognition is one of the most commonly used biometric types in the world. To recognize a fingerprint, the finger must first be scanned with a fingerprint scanner. The output of the fingerprint scanner is a single black-and-white image showing the finger's surface. With image processing techniques, this image is processed and features are extracted.
In the training phase, fingerprint images are collected for every user more than once, and the computed feature information is saved to the database.
In the test phase, the user's fingerprint image is collected through the fingerprint scanner sensor in real time. After this step, feature information is computed for the test image. Finally, in the most basic approach, this information is compared with the information stored in the database for real users. Distance metrics are used for the comparison. If the test image is similar enough to the user's training images, the authentication succeeds; otherwise it fails. The selected features, the machine learning algorithm used for the decision mechanism, and the chosen distance metric directly influence the accuracy rate.
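A minimal sketch of this distance-based decision step (the feature vectors are assumed to be NumPy arrays, and the threshold value and function names are illustrative assumptions, not a specific production system):
import numpy as np
def verify(test_features, enrolled_templates, threshold=0.5):
    # Compare the test feature vector against each enrolled template
    distances = [np.linalg.norm(test_features - t) for t in enrolled_templates]
    # Authentication succeeds if the closest template is similar enough
    return min(distances) < threshold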
Fingerprint detection is one of the most commonly used authentication techniques in personal life. Accordingly, mobile phone producers implement fingerprint detection systems in their phones. Such systems make authentication easier for the phone's owner and harder for other people. This is a good example of the widespread use of machine-learning-based user authentication in daily life.
Retina and Iris Recognition
Retina recognition is a biometric technique that uses the unique patterns on a person's retina for identification. The retina is the layer of blood vessels situated at the back of the eye. The eye is positioned in front of the system at a capture distance ranging from 8 cm to one meter. The output of the eye scanner sensor is an image of the retina's blood vessels, and every human's blood vessel pattern is unique. Image processing techniques extract features from the image, and a decision is then made using machine learning techniques. An example retina image is given on the right.
Another eye-based biometric verification technique is iris recognition. The iris is the colored part of the eye, responsible for controlling the amount of light entering it. The iris has a veined structure that is unique to every human in the world. Its structure is extracted by image processing techniques, and a decision mechanism is created using machine learning techniques.
Hand/Palm Recognition
Palm detection is based on two different approaches: the first scans the hand surface, like fingerprint scanning, and the second extracts the blood vessels of the palm. In hand recognition systems, the hand is scanned by a visual scanner and surface information is extracted. In palm detection systems, the palm is scanned by an infrared sensor, and the output is an image of the palm's blood vessels. In both techniques, features are extracted with image processing algorithms and a decision mechanism is created using machine learning algorithms.
Voice recognition is a type of signal processing. Every human has a unique voice, and this information is detectable by machine learning techniques. Biometric verification can also be done using finger vein information, signature shapes, and face recognition. Accuracy rates of biometric verification techniques are given in the figure below.
|
Biometric Verification with ML
| 2
|
biometric-verification-with-ml-1ebd56b5d181
|
2018-04-12
|
2018-04-12 12:01:30
|
https://medium.com/s/story/biometric-verification-with-ml-1ebd56b5d181
| false
| 736
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Ebubekir Büber
| null |
ff57f208d81d
|
EbubekirBbr
| 154
| 54
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
8625dfd4fef0
|
2018-05-28
|
2018-05-28 07:35:46
|
2018-06-08
|
2018-06-08 04:15:17
| 1
| false
|
en
|
2018-06-08
|
2018-06-08 15:48:13
| 4
|
1ebf4a6969cd
| 4.422642
| 1
| 0
| 0
|
SINGAPORE, 8 June 2018 — Viola.AI, the blockchain-powered dating and relationship AI has announced its strategic partnership with AI…
| 5
|
Viola.AI, AI Singapore and Singapore Management University Announced Strategic Partnership to Develop Robust AI Matching and Recommendations Engine for World’s First Lifelong Love AI
SINGAPORE, 8 June 2018 — Viola.AI, the blockchain-powered dating and relationship AI has announced its strategic partnership with AI Singapore, a Singapore government initiative to catalyse, synergise and boost the nation’s AI capabilities, and Singapore Management University (SMU) to provide dating and relationship solution for singles, couples and married couples with quality matches, date arrangements, helpful relationship advice and recommendation of goods and services.
Viola.AI is the latest product by Singapore's home-grown 14-year-old dating company Lunch Actually Group. Co-founders Violet Lim and Jamie Lee are leveraging blockchain technology to employ a verification system that helps singles have the confidence that they are meeting genuine singles, through real-time face scans and photo and social media checks. The Viola.AI team is also creating an evolving smart Love AI, based on the user's relationship status, to help singles find their match and help couples nurture and grow their current relationships.
In 2017, Singapore's total fertility rate dropped to 1.16, the second lowest ever recorded. According to the General Household Survey, the proportion of singles in the 25–29 age group increased sharply from 51.7% in 2000 to 71.4% in 2015. And although most of them indicated that they wanted to get married in the future, nearly half of singles in Singapore have never dated seriously, based on the latest study by the National Population and Talent Division (NPTD). The top reason for not dating was not being able to find a partner.
“When we first heard of AI Singapore, we approached them immediately and were one of the first companies to showcase our proposal to them. We are glad that they understood our BHAG (big hairy audacious goal) of creating one million happy marriages, and they have been extremely supportive of our project. AI Singapore has since connected us to SMU and facilitated the collaboration. We're very excited about the impact this strategic partnership will have in making Viola.AI the smartest AI Love Advisor,” said Violet Lim, CEO & Co-Founder of Viola.AI.
Viola.AI is set to revolutionize the industry and solve these current problems by creating a robust and holistic AI matching and recommendation engine that leverages on Lunch Actually Group’s existing matching model, database of more than 1.4 million users and their domain knowledge in the matchmaking service for the past 14 years.
Lim added, “Our goal is to develop a powerful, more effective matching model which is not just key in giving singles better matches, but also serves as the strong foundation for developing robust advisory engine that can understand the complex relationship issues and provide appropriate relationship advice to singles, couples or married couples.”
As part of this collaboration, SMU will conduct a research project to feed data to the personality match engine model that incorporates thorough review on spouse selection and family formation, and lead the various focus groups and surveys on the sociology and technical aspects on matching.
The team at SMU will benefit from Lunch Actually Group's 14 years of industry experience and its in-house personality-matching algorithm for its existing online and offline hybrid dating service, esync. Viola.AI will also receive tremendous support in resources through experts and the AI Apprenticeship Program by AI Singapore and SMU to create a revolutionary, efficient matching engine.
Professor Paulin Straughan, SMU sociologist, said, “Despite the many initiatives to encourage more couples to get married and start a family, it is still tough for singles, especially in Singapore, who are under high pressure to do well in their careers, to find the time to meet and choose the right partner. There is also more to be done to help couples nurture their relationship so that they can continue to strengthen marital bonds and better prepare for marriage challenges.”
Prof. Straughan who will serve as the Principal Investigator in this project added, “The insights from sociological research that we will conduct will be essential in building the AI matching and recommendation engine, and help us better understand the singles and couples, and anticipate their needs. We expect the AI to be pro-active in recommending future actions based on users’ characteristics, needs and values to ensure a lasting and happy relationship. While face-to-face advisory remains an important aspect of marriage advisory, we anticipate that many will find it more convenient and conducive to have an AI platform that draws on a comprehensive database to provide ideal matches in a safe environment which protects their privacy.”
The strategic partnership between Viola.AI, AI Singapore and SMU is an important milestone in AI Singapore’s cooperation with different industries in the area of AI development, research and education.
“We are delighted to partner with Viola.AI and SMU in this exciting project. Viola.AI is an example of an innovative product which shows that AI technology is here to change the game and puts homegrown startup like Lunch Actually Group at the forefront of the dating industry. We’re glad to be a part of it,” said Laurence Liew, Director of AI Industry Innovation, AI Singapore.
Viola.AI will be starting their Public Sale on 17 June 2018 at 8pm (UTC+8), and its MVP product has been launched to its selected Alpha testers this week.
Users can join the Whitelist now at http://www.viola.ai/ and take advantage of the highest bonus of 25% bonus tokens on 17–18 June 2018 only.
About Viola.AI
Viola.AI is the world's first blockchain-powered Relationship Registry and Love AI to restore trust and transparency to the Love industry. Viola.AI evolves with you depending on the stage of relationship you are in — from single, to courtship, to engagement, to marriage — and gives users personalized advice and recommendations. Viola.AI also verifies the identity and relationship/marital status of all users and couples through its decentralized Relationships Registry.
About AI Singapore
AI Singapore (AISG) is a national programme launched by the National Research Foundation (NRF) to catalyse, synergise and boost Singapore’s artificial intelligence (AI) capabilities to power our future, digital economy. AISG is driven by a government-wide partnership comprising NRF, the Smart Nation and Digital Government Office (SNDGO), the Economic Development Board (EDB), the Infocomm Media Development Authority (IMDA), SGInnovate, and the Integrated Health Information Systems (IHiS). AISG will also bring together all Singapore-based research institutions and the vibrant ecosystem of AI start-ups and companies developing AI products, to perform use-inspired research, grow the knowledge, create the tools, and develop the talent to power Singapore’s AI efforts.
Press Contact
Christina Thung
Head of PR
christina@viola.ai
Telegram: @ChristinaThung
Viola.AI community: www.t.me/ViolaAI
|
Viola.AI, AI Singapore and Singapore Management University Announced Strategic Partnership to…
| 1
|
viola-ai-ai-singapore-and-singapore-management-university-announced-strategic-partnership-to-1ebf4a6969cd
|
2018-06-09
|
2018-06-09 21:18:45
|
https://medium.com/s/story/viola-ai-ai-singapore-and-singapore-management-university-announced-strategic-partnership-to-1ebf4a6969cd
| false
| 1,119
|
Viola.AI - The First Blockchain-Powered Relationship Registry (REL-Registry) & Lifelong AI Love Advisor, Restoring Trust in the USD800 Billion Love Industry
| null |
viola.ai.world
| null |
Viola.AI
|
info@viola.ai
|
viola-ai
|
ICO,BLOCKCHAIN,VIOLA,ETHEREUM,BITCOIN
|
viola_ai_
|
Fintech
|
fintech
|
Fintech
| 38,568
|
Christina Thung
|
Head of PR | Marketing Communications | Viola.AI | Netflix, films and music enthusiast | Travel junkie | christina@viola.ai
|
f4e5dbcc7b05
|
xteena21
| 120
| 42
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-08
|
2018-02-08 06:30:02
|
2018-02-08
|
2018-02-08 06:30:36
| 0
| false
|
en
|
2018-02-08
|
2018-02-08 06:30:36
| 0
|
1ec04119bdc4
| 0.226415
| 0
| 0
| 0
|
KVCH is a training institute providing Artificial intelligence training in Noida from more than two decades. KVCH offer unique learning…
| 3
|
Best artificial Intelligence training in noida
KVCH is a training institute that has been providing Artificial Intelligence training in Noida for more than two decades. KVCH offers a unique learning experience with the best infrastructure and the latest tools. The course curriculum is designed so that candidates can start practicing as professional Artificial Intelligence developers as soon as they complete the course.
|
Best artificial Intelligence training in noida
| 0
|
best-artificial-intelligence-training-in-noida-1ec04119bdc4
|
2018-02-08
|
2018-02-08 06:30:37
|
https://medium.com/s/story/best-artificial-intelligence-training-in-noida-1ec04119bdc4
| false
| 60
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Abhishek uriyal
| null |
cfeae44f97d1
|
abhi.kvch
| 4
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-05
|
2018-06-05 17:17:20
|
2018-06-05
|
2018-06-05 21:51:45
| 2
| false
|
en
|
2018-06-25
|
2018-06-25 04:41:47
| 6
|
1ec043d67e4f
| 2.673899
| 12
| 0
| 0
|
Congratulations for making it through the series! The last thing we’ll discuss is what it takes to make all this a reality. There’s lots of…
| 5
|
How to ensure the safety of Self-Driving Cars: Part 5/5
Congratulations for making it through the series! The last thing we’ll discuss is what it takes to make all this a reality. There’s lots of work still to be done to guarantee the safety of autonomous vehicles, so if you’re working in this field, good luck!
Part 5: What it takes to make self-driving cars objectively safe
In our opinion, there are a few things that need to happen before we can objectively call autonomous vehicles safe:
Safety Criteria that we all agree on
One of the fundamental elements to this whole thing is agreeing on the metrics for vehicle safety. It must go beyond “x fatalities per 100M miles driven” to “the vehicle will make the correct decision 99.9999% of the time in emergency scenarios.” For all advanced technologies that have become ubiquitous in our world, there is some standards body that certifies them. The ISO and IEEE are the ones commonly used in the high-tech industry, and they have labs that verify that a certain product meets their standards.
Figure 1: NHTSA Crash Test of Smart Fortwo (Source)
This doesn't yet exist for the AV stack, and it needs to exist to give the public confidence in the autonomous world. How are they going to verify the vehicle functions properly? Well, that could be anything from a set driving course with emergency scenarios to some type of MIL/SIL/HIL simulation.
Just like a motorcycle helmet can’t be used on public roads without a “DOT” sticker, an autonomous vehicle shouldn’t be able to operate at level 4 or 5 autonomy (or potentially level 2 or 3) without a standards organization saying it’s safe.
More data
This one’s simple, we just need to log more miles. More miles on the road, more miles in simulation. One thing proposed is the sharing of data among vehicle makers. While this would greatly accelerate improvements in autonomous technology, it would cause some companies to lose their competitive advantage. Call us communists, but we’re of the opinion that the sharing of vehicle data among the industry is for the betterment of the world.
Standardization of the AV Stack
This one's a toughie. Ultimately, there needs to be some standard software architecture that autonomous driving companies can leverage, similar to how AUTOSAR (Automotive Open System Architecture) created the standardized software architecture for the automotive ECUs (Electronic Control Units) that control vehicles today. This will ensure that safety is guaranteed regardless of the autonomous application. If AUTOSAR isn't the one to do this, I am sure another conglomerate of autonomous vehicle companies will bring it to fruition.
Figure 2: Technology Landscape for Autonomous Vehicles, Vision Systems Intelligence, LLC
Part of that standardized software stack will be the vehicle dynamics control system described in the planning section. This will be critical for allowing engineers to claim that they have created objectively safe vehicles that can perform better than humans in emergency scenarios.
Tools!
Here's where we play. There must be simple-to-use tools that help engineers deal with all this data and make their cars better. If you're passionate about this too, shoot us an email and we should chat!
Conclusion
Thanks for your time. We would love to hear your feedback on this series. Let us know if we forgot something important, got something completely wrong, or even better, if you loved it. Feel free to connect with us on LinkedIn or via twitter or email.
For those working on making autonomous vehicles a reality, we salute you!
Read the Rest of the Series: How to ensure the safety of Self-Driving Cars
Part 1 — Introduction
Part 2 — Sensing
Part 3 — Planning
Part 4 — Acting
Part 5 — Conclusion
|
How to ensure the safety of Self-Driving Cars: Part 5/5
| 65
|
how-to-ensure-the-safety-of-self-driving-cars-part-5-5-1ec043d67e4f
|
2018-06-25
|
2018-06-25 04:41:47
|
https://medium.com/s/story/how-to-ensure-the-safety-of-self-driving-cars-part-5-5-1ec043d67e4f
| false
| 607
| null | null | null | null | null | null | null | null | null |
Self Driving Cars
|
self-driving-cars
|
Self Driving Cars
| 13,349
|
Jason Marks
|
Founder of Olley, Accelerating the Mobility Revolution
|
a1d01be9b8f2
|
olley_io
| 119
| 10
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-22
|
2018-08-22 04:12:17
|
2018-08-22
|
2018-08-22 17:28:13
| 3
| false
|
en
|
2018-08-22
|
2018-08-22 17:28:13
| 23
|
1ec19e1a717d
| 4.946226
| 0
| 0
| 0
|
This post is my day 1 definition and thoughts on what machine learning (ML) is and why we use it. I’ll compare this to my answer in 100…
| 5
|
Photo by Mohammad Metri on Unsplash
What is Machine Learning? — Day 1 #100DaysOfMLCode
This post is my day 1 definition and thoughts on what machine learning (ML) is and why we use it. I’ll compare this to my answer in 100 days.
Computers without machine learning (ML) algorithms are bad at making Spotify playlists for your friends. You know what kind of music your friends listen to and what songs they might like without creating spreadsheets and graphs with all the aspects of the songs. However, you are really bad at knowing every song you have ever listened to and every song currently available on Spotify.
ML algorithms are a combination of human and computer strengths. ML is able to collect and manipulate large amounts of data and update its decision making with new pieces of information. ML uses statistical analysis to make choices. Spotify uses many ML algorithms to come up with the perfect playlist for you.
Writing a ML algorithm has six steps according to Stephen Marsland.
You collect data.
You select features you want to look at in this data set.
You select an algorithm (an algorithm is just a set of instructions).
You tweak parameters in this algorithm.
You train it.
You evaluate it.
Collecting Data (Information Gathering)
Spotify collects data about songs and about users. User data includes information you put in your account profile, what device you are listening on, when you are listening, and which songs you favorite, skip, repeat, or add to playlists. Data about the songs can include text analysis of lyrics, tempo, pitch, artist, genre, harmony, instruments and much more.
Select Features (What is it looking for?)
For a simplified example, let’s look at two features of a song. The x-axis of this graph is song length (feature 1) and the y-axis is tempo (feature 2). Short and slow songs are in the bottom left and fast and long songs are in the upper right.
Graph with features and decision surface
Select Algorithm (How is it sorting that data)
The goal of the algorithm is to pick songs that it thinks you will like to listen to. For this simple example, the algorithm is sorting songs into two categories, songs you like and songs you dislike. Using user behaviors information, the green circles are songs you like and the red are songs you dislike.
The algorithm needs to create something called a decision surface. Data points on one side of this surfaces are songs you like and on the other side are songs you dislike. In this example, the surface is shown by the blue line.
There are all kinds of algorithms we can use. That is what we will be spending most of our time learning over the next 100 days.
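To make this concrete, here is a minimal sketch of fitting a decision surface over the two features above (the song data and the choice of logistic regression are purely illustrative assumptions, not Spotify's actual algorithm):
from sklearn.linear_model import LogisticRegression
# Features: [song length in seconds, tempo in BPM]; labels: 1 = like, 0 = dislike
X = [[150, 170], [180, 160], [210, 150], [300, 90], [320, 85], [280, 100]]
y = [1, 1, 1, 0, 0, 0]
model = LogisticRegression().fit(X, y)
print(model.predict([[200, 155]]))  # which side of the decision surface does a new song land on?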
Parameters (Fine tuning how it sorts)
Parameters are like dials you can turn on your algorithm to make it work better with your data. Maybe we want that blue line more to the left or right, or more slanted in a certain direction. Parameters are the elements of the algorithm that we can change to adjust the decision surface (so far that's all I know that they really do, this is day 1).
Why do we need parameters? Why doesn’t the algorithm know the best line? The data you put in the system is never going to be perfect. Maybe this is the song profile for you when you are exercising. You like short fast songs when you are working out. If there was more data, it would know you like longer slower songs when you are working at your desk. If this decision line is drawn too well for this data, then the algorithm would not play the songs you like to listen to at your desk. It seems that one of the largest problems with data is “overfitting.” This is where you make an excellent decision surface for the data you have, but it does poorly with classifying new data. This is why you adjust your parameters.
Train (Teach it right from wrong)
You are constantly training the algorithm. As Spotify looks through its list of possible songs to show you, it analyzes each song's features to see if it is short enough and fast enough that you would enjoy it. If a new song falls on the left side of the blue line, the algorithm knows it is probably a good idea to play it.
Once it's served to you and played, you can listen to the whole song, share it, favorite it, skip it, add it to a playlist, or listen to it again. If you skip the song, that is a signal that you didn't like it and the algorithm chose wrong. This data is logged, and the graph of what you like is adjusted. This is how it is always learning: you are telling it how to make choices for you.
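One way to picture this feedback loop is an online learner that updates after every play or skip. This is only a sketch of the idea, not Spotify's actual system:

```python
# Sketch of a feedback loop: update the model after each play/skip signal.
import numpy as np
from sklearn.linear_model import SGDClassifier   # supports incremental updates

clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])                       # 0 = skipped, 1 = played fully

rng = np.random.default_rng(1)
for _ in range(1000):                            # a stream of listening events
    features = rng.random((1, 2))                # (length, tempo) of served song
    label = np.array([int(features[0, 0] + features[0, 1] > 1.0)])
    clf.partial_fit(features, label, classes=classes)  # learn from the signal
```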
Evaluate (How did it do?)
There is a team of people at Spotify evaluating their algorithms to see if they are working and showing you the proper content.
This is a simple example. Spotify is doing so much more. They are predicting what activities you are doing and what mood you are in based on your song choices. Machine learning is not only used to pick songs or videos to play for you. It is how you talk to voice assistants, how cars drive themselves, and how facial recognition works.
There is a massive amount of data in this world, more than humans could ever sort through. We need help from computers to make sense of what all the data means. First, we have to teach machines how we perceive the world. We have to explain what a song is, what a cat looks like, what a stop sign means, or what a sentence is. Then we can teach them how to make decisions about those items.
Thanks for learning with me on day one. Hope to see you back on day two.
My Learning Resources:
Udacity — Intro to Machine Learning, Sebastian Thrun and Katie Malone
Coursera — Machine Learning, Andrew Ng
Machine Learning, Stephen Marsland (2015)
You Are What You Stream, by Christine Hung from Spotify:
https://youtu.be/OMo6yXPETbM
https://www.oreilly.com/ideas/machine-learning-at-spotify-you-are-what-you-stream
Big data, big quality: Data quality at Spotify — Irene Gonzálvez (Spotify) at strataconf.com 2018
Weekly wrap-up videos will be on YouTube.
[https://www.youtube.com/user/SciJoy]
Daily Videos (all same content just different platforms):
IGTV — [http://instagram.com/scijoy]
Twitter — [https://twitter.com/TheSciJoy]
Facebook — [https://www.facebook.com/TheSciJoy]
LinkedIn — [https://www.linkedin.com/in/jacklynduff/]
Audio is the same on all these platforms. It is a podcast called Learnings of a Maker:
RSS — [https://anchor.fm/s/557c9e8/podcast/rss]
Anchor — [https://anchor.fm/learnings-of-a-maker]
Apple Podcast — [https://itunes.apple.com/us/podcast/learnings-of-a-maker/id1414916236]
Google Podcast — [https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy81NTdjOWU4L3BvZGNhc3QvcnNz]
Spotify — [https://open.spotify.com/show/4AnBX33erDhqXzrMWY8yIh]
Breaker — [https://www.breaker.audio/learnings-of-a-maker]
Overcast — [https://overcast.fm/itunes1414916236/learnings-of-a-maker]
Pocket Cast — [https://pca.st/uRk6]
RadioPublic — [https://play.radiopublic.com/learnings-of-a-maker-WzO35N]
Stitcher — [https://www.stitcher.com/podcast/anchor-podcasts/learnings-of-a-maker]
You can add this to your Alexa Flash Briefing — [https://www.amazon.com/gp/help/customer/display.html?nodeId=201601880]
[Post metadata: "What is Machine Learning? — Day 1 #100DaysOfMLCode" by SciJoy, published 2018-08-22, 1,165 words, tagged Machine Learning. https://medium.com/s/story/what-is-machine-learning-day-1-100daysofmlcode-1ec19e1a717d]
[Post metadata: published 2017-10-27, subtitle "Started working on a CUDA lookup layer".]
Komputation v0.9.3
Started working on a CUDA lookup layer
Implemented and tested a lookup kernel
All CUDA forward layers now inherit from BaseCudaForwardLayer
Input column lengths are now passed on to CUDA layers as pointers rather than host integers.
Since the CUDA lookup supports variable-length input, the CudaForwardState member “numberOutputColumns” has been renamed to “maximumOutputColumns”
BaseCudaForwardLayer manages the pointers to the input/output column lengths.
CUDA entry points have access to memory that is used to store and retrieve input data, as well as information about column dimensions
Moved the propagation code into separate classes: (Cpu/Cuda)(Forward/Backward)Propagator
Network classes have been simplified to the following responsibilities: recursively acquiring and releasing resources, and instantiating trainers and testers
Trainers are now responsible for calling the optimize method on layers.
Added a member to the matrix classes to access the number of entries
Added a helper function to create integer arrays with one value.
Removed an unnecessary condition in MinimalGatedUnit optimization
Removed the closestPowerOfTwo function from IntMath
Removed the emptyKernelLaunchConfiguration factory function
CUDA lookup layers can now be optimized.
The CUDA max-pooling layer uses separate launch configurations for forward and backward propagation.
Fixed result resetting in the max-pooling backpropagation kernel
Switched to a NaN-based strategy for variable input to CUDA networks
Moved the zero CUDA header to the new symbols resource directory
Added a NaN header
The lookup kernel assigns the NaN symbol in the case of an absent vector index.
The number of output (input) columns stored on the device is no longer part of the CUDA forward (backward) states.
BaseCudaForwardLayer no longer keeps track of these dimensions on the device.
Simplified CUDA forward propagation through the use of the return value
Split up functions that combined padding and concatenation
These functions now act on individual instances rather than arrays of instances.
Moved the padding function to a separate file
Removed column lengths from InputMemory
Added a CUDA version of the embeddings demo
Upgraded to Kotlin 1.1.4–2
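As context for the lookup and NaN items above, here is a rough NumPy sketch of what a lookup layer over NaN-padded, variable-length input does. This illustrates the concept only; it is not Komputation's actual Kotlin/CUDA implementation, and the ABSENT marker is hypothetical:

```python
# Conceptual sketch: lookup layer with NaN for absent vector indices.
import numpy as np

embeddings = np.random.rand(10, 4)   # 10 embedding vectors of dimension 4
ABSENT = -1                          # hypothetical marker for padding positions

def lookup(indices, maximum_columns):
    out = np.full((maximum_columns, embeddings.shape[1]), np.nan)
    for column, index in enumerate(indices):
        if index != ABSENT:
            out[column] = embeddings[index]
    return out

print(lookup([3, 7, ABSENT], maximum_columns=3))   # last column stays NaN
```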
[Post metadata: "Komputation v0.9.3", published 2018-05-09, 327 words, tagged Machine Learning, from the Komputation publication (a neural networks framework for the JVM written in Kotlin: komputation.com). https://medium.com/s/story/komputation-v0-9-3-1ec429b4ca6f]
[Post metadata: published 2018-07-26.]
AI: What Role in Tomorrow's Business?
This is the second piece of our AI Series. In this piece, we’ll go through different AI applications across several industries.
If you're still unfamiliar with AI terms such as neural networks, deep learning, or unsupervised learning, I suggest you start by reading our first piece. Even though much of the press and many AI experts use "AI" to describe any data science, including predictive analytics, here we are referring to more advanced capabilities.
Machine learning capabilities have an almost infinite number of use cases. Neural networks can potentially solve any discrete problem, as long as there is a lot of data and the goal is a better prediction of outcomes. Deep learning has changed the way predictive analyses are made.
Deep learning is what has made possible natural communication, personal assistants, and precise image & video recognition.
Natural Language Processing
One of the most common applications of AI is what is called Natural Language Processing (NLP). Roughly speaking, NLP technology gathers communication data, usually text, and in the most advanced cases voice and video. Once gathered, this data is analyzed and models are created to make sense of it. Software can then understand what is said in the text or video and use it for different purposes.
Image and voice recordings make excellent training data for deep learning software. Deep learning can be applied to both structured and unstructured data. Structured data means the type of data the software will receive is known in advance.
Neural AI techniques excel at analyzing image, video, and audio data types because of their complex, multi-dimensional nature, known by experts as “high dimensionality.”
For example, when applied to online reviews written by consumers, this technique will gather and analyze different types of data: the review itself is text, plus all the user data (gender, location, etc.).
AI is then used to understand what the words of the text mean and what the user's sentiment is while using these words. This type of learning is supervised, meaning a team of humans helps the AI understand the link between phrases and feelings.
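As a toy sketch of what this supervised setup looks like in code (made-up reviews and labels, not Revuze's pipeline):

```python
# Toy supervised sentiment model: humans label example reviews,
# the model learns the link between phrases and feelings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["love this product", "terrible battery life",
           "works great", "broke after a week"]
labels = [1, 0, 1, 0]          # 1 = positive, 0 = negative (human-provided)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["battery works great"]))   # the model's sentiment guess
```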
Revuze, a graduate of the Nielsen Innovate incubator in Caesarea, Israel, has turned its attention to product experience management (PEM), enabling clients to "measure customer perception of the holistic product and service experience."
Revuze's analyses aren't limited to reviews; the company's tech applies NLP for topic, keyword, and sentiment extraction from any form of textual interaction, including survey responses, call-center text, and social media. Generated via semi-supervised machine learning, source text is gathered, sorted, and analyzed to provide brands with granular information about products and user experience.
Retail: a natural market for AI applications
As the Revuze example shows, one of the main markets benefiting from AI today is retail. According to McKinsey, AI applications in retail represent a potential market of $0.8T, followed by CPG, a $486Bn market.
Indeed, retailers are creating personalized omnichannel experiences using machine learning technology. Even though we will dig deeper into this market's applications in our next articles, we can't bypass it here. So allow me to introduce some basic uses of AI for online merchants.
Marketers use NLP to gather data on customers, build target audiences, and optimise the omnichannel experience. The more data companies gather, the easier it will be to apply neural networks to any of the challenges they have along the entire journey to conversion and customer lifecycle.
Dynamic Yield applies machine learning algorithms to build models that understand consumer preferences and create micro-segments of users with similar behavior. When I started working in marketing, we built segments based on location, sometimes on time-since-last-purchase, or on average order value (AOV).
Now, thanks to companies like Dynamic Yield, marketers are able to build very specific micro-segments, encapsulating data from different sources, both internal and external. Then, these micro-segments are assigned to different consumer journeys.
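A hedged sketch of the underlying idea, micro-segmentation as clustering over behavioral features (a toy illustration with made-up features, not Dynamic Yield's actual models):

```python
# Toy micro-segmentation: cluster users by behavioral features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: days since last purchase, average order value, sessions per week
users = rng.random((500, 3))

segments = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(users)
print("user 0 belongs to micro-segment", segments[0])
# Each micro-segment can then be assigned its own consumer journey.
```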
It is thanks to similar technology that your Amazon feed doesn't look like mine. At least, it shouldn't. Personal recommendations, tailored banners, and dynamic menus are becoming the norm for online retailers. People are used to being offered content that fits their needs and won't look for anything for more than a few seconds.
Our cognitive biases lead us to spend much of our attention looking for negative aspects, so when we are online, even a minor bug can push us away. For example, payment processing inefficiencies are one of the largest causes of cart abandonment. And a website visitor with a bad experience is harder to convince a second time.
This is why online merchants need to pay attention to every detail, opening opportunities for a large set of specialised services, each resolving one aspect of the customer journey at a time.
AI applications in fintech
In fintech, we have also found interesting use cases in fraud detection, but that is the topic of another article. Riskified, one of our Tel Aviv neighbours, is boosting conversion rates by seamlessly making the payment experience safer.
Described as an AI approval engine, the company helps merchants avoid false positives and lets legitimate consumers complete their payments.
Insurance companies and banks are also starting to use AI as a prediction tool for user behavior. They can now sort users according to more accurate predictions and, hopefully, create fairer offers for them.
AI applications across industries
Marketing and sales services are one of the largest markets today, accounting for $1.4T in revenue last year. Companies are using AI technology and NLP to identify data structure, understand patterns of behavior, and create predictive analyses, leading to recommendations and sometimes to engaging actions.
According to the Keyrus Innovation Factory's examination of AI applications in business, some of the highest added value for companies today, besides marketing and sales, is in operational functions such as supply-chain management and manufacturing.
Of course, what we hear about most are virtual assistants, like Google Assistant, which can make an appointment for you and hold a conversation with a human being. And this is certainly impressive. After all, the technology needs to master both speech recognition and dialogue.
But these giants use virtual assistants as a nice tool for consumers to have, in order to gather more data and thus let their back-end AI maximise revenue. Scepticism and doubts about AI, however, are the topic of another article.
We've already discussed automotive applications and autonomous vehicles (AVs) in a previous article, but they are certainly among the best-known examples of machine learning. The surprising part, often unknown, is that a large amount of the data is still processed using supervised learning.
It means AV companies hire very large teams of people who process data manually and give answers to the software to help it refine its prediction and decision models.
Analytical techniques account for over ten trillion dollars of potential value annually, and AI-powered intelligence across 19 industries could represent around 40% of that, or $3.5 trillion. In addition to the markets mentioned above, existing applications are already creating new standards in logistics, healthcare, cyber, agritech and climate study, and of course aerospace and drones.
Sooner or later, every company will need to examine its mix of functions and find the most pertinent and attractive opportunities to use AI. Over the next few years, an explosion of use cases will arise as companies experiment with different ways to use AI, exploring things they could never do on their own.
The exciting developments in AI are delivering jumps in the accuracy of classification and prediction. As consumer data and intelligence become more and more available, deep learning will become more efficient, fast, and accurate. We will then see a plethora of use cases, and firms in every industry will need to collaborate with innovative companies.
To go further, here are some sources that inspired me:
https://www.nytimes.com/2017/07/29/opinion/sunday/artificial-intelligence-is-stuck-heres-how-to-move-it-forward.html?_r=0
https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/what-ai-can-and-cant-do-yet-for-your-business
https://www.mckinsey.com/featured-insights/artificial-intelligence/visualizing-the-uses-and-potential-impact-of-ai-and-other-analytics
https://martechseries.com/mts-insights/interviews/interview-jeremy-fain-ceo-co-founder-cognitiv/
https://www.thomsonreuters.com/en/reports/2018-ai-predictions.html
[Post metadata: "AI : What role in tomorrow Business ?" by Zacharie Lahmi, published 2018-07-26, 1,364 words, tagged Artificial Intelligence. https://medium.com/s/story/ai-what-role-in-tomorrow-business-1ec4ad2f777]