[Dataset schema residue from the scraped table — columns: audioVersionDurationSec, codeBlock, codeBlockCount, collectionId, createdDate/Datetime, firstPublishedDate/Datetime, imageCount, isSubscriptionLocked, language, latestPublishedDate/Datetime, linksCount, postId, readingTime, recommends, responsesCreatedCount, socialRecommendsCount, subTitle, tagsCount, text, title, totalClapCount, uniqueSlug, updatedDate/Datetime, url, vote, wordCount, publication{description, domain, facebookPageName, followerCount, name, publicEmail, slug, tags, twitterUsername}, tag_name, slug, name, postCount, author, bio, userId, userName, usersFollowedByCount, usersFollowedCount, scrappedDate, claps, reading_time, link, authors, timestamp, tags]
library(bupaR)       # meta-package that loads the bupaR ecosystem (edeaR, processmapR, ...)
library(DiagrammeR)  # provides export_graph(), used below
events %>%
summary
Number of events: 1123342
Number of cases: 31509
Number of traces: 4047
Number of distinct activities: 26
Average trace length: 35.65146
Start eventlog: 2016-01-01 10:51:15
End eventlog: 2017-02-01 15:00:30
events %>%
activity_frequency(level = "activity")
# A tibble: 26 x 3
Activity absolute relative
<fct> <int> <dbl>
1 A_Accepted 31509 0.0561
2 A_Cancelled 10431 0.0186
3 A_Complete 31362 0.0558
4 A_Concept 31509 0.0561
5 A_Create Application 31509 0.0561
6 A_Denied 3753 0.00668
7 A_Incomplete 23055 0.0410
8 A_Pending 17228 0.0307
9 A_Submitted 20423 0.0364
10 A_Validating 38816 0.0691
# ... with 16 more rows
events %>%
filter_activity_presence(activities = c('A_Cancelled')) %>%
activity_frequency(level = "activity")
# A tibble: 21 x 3
Activity absolute relative
<fct> <int> <dbl>
1 A_Accepted 10431 0.0730
2 A_Cancelled 10431 0.0730
3 A_Complete 10321 0.0723
4 A_Concept 10431 0.0730
5 A_Create Application 10431 0.0730
6 A_Incomplete 1413 0.00989
7 A_Submitted 7573 0.0530
8 A_Validating 1504 0.0105
9 O_Cancelled 13735 0.0962
10 O_Create Offer 13735 0.0962
# ... with 11 more rows
events %>%
filter_activity_frequency(percentage = 1.0) %>%
filter_trace_frequency(percentage = .80) %>%
process_map(render = FALSE) %>%
export_graph(file_name = './02-output/01_pm-bupar_process map.png',
file_type = 'PNG')
events %>%
filter_activity_frequency(percentage = 1.0) %>%
filter_trace_frequency(percentage = .80) %>%
process_map(performance(mean, "mins"),
render = FALSE) %>%
export_graph(file_name = './02-output/02_pm-bupar_process map performance.png',
file_type = 'PNG')
precedence_matrix <- events %>%
filter_activity_frequency(percentage = 1.0) %>%
filter_trace_frequency(percentage = .80) %>%
precedence_matrix() %>%
plot()
trace_explorer <- events %>%
trace_explorer(coverage = 0.5)
events %>%
filter_trace_frequency(percentage = .80) %>% # show only the most frequent traces
group_by(`(case)_ApplicationType`) %>%
throughput_time('log', units = 'hours')
# A tibble: 2 x 9
`(case)_ApplicationType` min q1 median mean q3 max
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 New credit 0.0781 278. 479. 524. 758. 4058.
2 Limit raise 0.0597 219. 327. 400. 529. 2110.
events %>%
filter_trace_frequency(percentage = .80) %>% # show only the most frequent traces
group_by(`(case)_LoanGoal`) %>%
throughput_time('log', units = 'hours')
# A tibble: 14 x 9
`(case)_LoanGoal` min q1 median mean q3 max
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Existing loan takeover 0.118 307. 499. 543. 758. 2779.
2 Home improvement 0.174 289. 471. 521. 756. 2442.
3 Car 0.139 239. 403. 488. 756. 3269.
4 Other, see explanation 0.125 268. 461. 515. 757. 4058.
5 Remaining debt home 0.135 351. 694. 632. 805. 3252.
6 Not speficied 0.131 307. 621. 563. 777. 1442.
7 Unknown 0.0597 214. 349. 429. 733. 2013.
8 Tax payments 51.4 293. 437. 506. 742. 1220.
9 Caravan / Camper 0.174 232. 358. 457. 744. 2110.
10 Motorcycle 22.7 254. 410. 489. 763. 1338.
11 Boat 55.0 266. 395. 512. 743. 1535.
12 Business goal 227. 255. 403. 526. 756. 1167.
13 Extra spending limit 17.8 258. 406. 485. 743. 1356.
14 Debt restructuring 732. 740. 748. 748. 757. 765.
[Post metadata: created 2018-05-09 06:29:36 · first published 2018-05-09 12:19:23 · language: en · post id 1ab28ed74e81 · reading time 4.88 min]
Process Mining in 10 minutes with R
Process Mining makes process analysis relevant again. Instead of relying solely on workshops, interviews or outdated process documents, Process Mining makes use of data that is generated in your business systems. It can automatically generate actual process models with frequencies and performance measures. Moreover, discovered process models let you easily identify any compliance issues at once. If, and only if, data is available.
In this article I show you how to get started with Process Mining using R.
Process Mining
Process mining techniques allow for extracting information from event logs. For example, the audit trails of a workflow management system or the transaction logs of an enterprise resource planning system can be used to discover models describing processes, organizations, and products. Moreover, it is possible to use process mining to monitor deviations. [1]
Mined process example with bupaR
Tooling — bupaR
Currently, there are a number of tools available for Process Mining [2]. One open source tool is bupaR [3], which lets you use process mining capabilities on top of the data science language R [4]. bupaR is made by Gert Janssenswillen and consists of a number of R packages.
Package overview, https://www.bupar.net/images/workflow.PNG
Example data
For this post I use real world data (anonymized) of a banking credit application process provided by the BPI Challenge 2017 [5].
The BPI Challenge is a contest held by the organizers of the ‘International Workshop on Business Process Intelligence (BPIC)’. For some years now they have provided real-world datasets along with business questions for process mining enthusiasts to solve. In short, it could be described as a Kaggle challenge for Process Mining.
Please note that I shortened the log file due to GitHub’s file size limitations.
If you are interested in the challenge’s outcome, feel free to read our paper.
Setup
If you want not only to read along but also to try this yourself, please do. Here is the list of useful tools and files.
Download R https://cran.r-project.org/mirrors.html
Download RStudio (Desktop) https://www.rstudio.com/products/rstudio/
Clone GitHub project https://github.com/scheithauer/processmining-bupaR
Analysis
Eventlog overview
Activity overview
Filter processes where one activity needs to be present
Generate process map
Generate process map with performance measures
Generate a matrix of activity-follower frequencies
For example, the activity O_Sent (mail and online) is often followed by the activity W_Call after offers
Generate variant overview
Show throughput time in hours, by Application Type
Show throughput time in hours, by Loan Goal
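To make concrete what an activity-frequency metric computes, independently of bupaR, here is a minimal Python sketch. The event data below is made up for illustration (it is not the BPI Challenge 2017 log), and the function name merely mirrors bupaR's `activity_frequency(level = "activity")`:

```python
from collections import Counter

# Hypothetical event log: (case_id, activity) pairs -- illustrative data only.
events = [
    ("c1", "A_Create Application"), ("c1", "A_Accepted"), ("c1", "A_Complete"),
    ("c2", "A_Create Application"), ("c2", "A_Cancelled"),
    ("c3", "A_Create Application"), ("c3", "A_Accepted"), ("c3", "A_Complete"),
]

def activity_frequency(events):
    """Absolute and relative frequency of each activity across all events."""
    counts = Counter(activity for _, activity in events)
    total = sum(counts.values())  # total number of events in the log
    return {a: (n, n / total) for a, n in counts.items()}

freq = activity_frequency(events)
print(freq["A_Create Application"])  # (3, 0.375) -- 3 of 8 events
```

The relative frequencies sum to 1 over all activities, which is exactly the property you see in the `relative` column of the bupaR output above.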
Conclusion
Process Mining is much more than using a specific tool. Mostly, it is an iterative procedure involving asking the relevant business questions, understanding the data, interpreting the data correctly (statistical significance vs. practical relevance), and most importantly deriving measures for improving the process under investigation.
Commercial tools exist that support this iterative procedure. bupaR lets you apply Process Mining analyses for free (i.e., without licensing costs), though not yet with the flexibility of commercial tools.
bupaR also provides interactive dashboards, which I have not tested yet but plan to try in the near future.
If you are interested in Process Mining, please feel free to reach out.
All the best,
Gregor
References
[1] http://www.processmining.org/research/start
[2] https://en.wikipedia.org/wiki/Process_mining#Software_for_process_mining
[3] https://www.bupar.net/
[4] https://www.rstudio.com/
[5] http://www.win.tue.nl/bpi/doku.php?id=2017:challenge
[Post record: “Process Mining in 10 minutes with R” · 35 claps · slug process-mining-in-10-minutes-with-r-1ab28ed74e81 · updated 2018-05-22 02:58:05 · https://medium.com/s/story/process-mining-in-10-minutes-with-r-1ab28ed74e81 · 994 words · tag: R (1,558 posts) · author: Dr. Gregor Scheithauer (@gscheithauer) — “I help companies to improve their business processes using data science. - All Words are my own www.gregorscheithauer.de”]
[Post metadata: created 2018-08-28 12:37:38 · first published 2018-08-28 13:13:08 · language: en · post id 1ab295c85711 · reading time 4.80 min]
Aligatocoin: The Future of E-Commerce.
Introduction:
One of the greatest things that has happened to our world in the 21st Century is the invention of the blockchain technology which has changed the order of things and has brought about better ways of doing things with the instrumentality of its speed, accuracy and limitless application through decentralization.
E-commerce is not left untouched with the revolution of the blockchain technology. There is a platform that has come up to employ the versatile capacity of the Blockchain technology to bring about greater security, easy delivery and access to the e-commerce. The name of the platform is Aligatocoin.
What is Aligatocoin?
Aligatocoin leverages the blockchain technology to revolutionize e-commerce. Not just that, through the Pay via Eye, Artificial Intelligence and Drone delivery system consumers can be guaranteed a secure, easy to use and easy to access e-commerce platform. With Aligatocoin the future of e-commerce is here.
Features of Aligatocoin
Security
One of the first things anybody evaluating an e-commerce channel looks at is how secure the platform is, so that there is no risk of loss. The main aspects of security any e-commerce user wants to verify before use are the safety of their funds and of their personal data or shopping behavior. As e-commerce has evolved over the years, many new technologies have been introduced that could easily revolutionize it. While many e-commerce platforms are quick to adopt and implement these technologies, the top platforms are slow to do so. The payment options currently used by these top platforms have drawbacks, especially for retailers: retailers are charged 2% or more on every transaction by the payment providers, which is too high. There is also a lack of transparency when using centralized payment options. Beyond high transaction commissions and a lack of transparency, the speed at which each payment is confirmed makes centralized payment options less efficient still: they are slow. Present e-commerce channels also fail to adequately protect their users from hackers, as most have only a single level of security, such as passwords, and few offer double or triple layers of security (such as OTP, Google Authenticator, etc.).
Aligato.pl, an existing Polish e-commerce platform, is integrating new technology to ensure the security of its customers. By integrating blockchain technology into the existing platform (aligato.pl), Aligatocoin will make payment processes easy, secure, and fast. Blockchain technology means all transaction data is stored on the blockchain, where it cannot be changed or altered and is visible to all. Transactions on the blockchain are also very fast compared to existing centralized payment processes, and carry a lower transaction fee.
The Aligatocoin blockchain is based on a Proof of Stake (PoS) system. This differs from the Bitcoin blockchain, which operates on Proof of Work (PoW) and requires a lot of energy and time to mine or confirm transactions. The Proof of Stake system is based on voting and requires far less energy and time.
Pay via Eye Technology
The Aligatocoin platform does not secure its users only with passwords or keys, which can easily be hacked. The platform is introducing a whole new technology to ensure the security of its users: Pay via Eye (PvE). Pay via Eye allows any customer to approve payment for goods bought on the platform through a retina scan, using their Android phone camera or computer. The retina scan uses the multiple unique arrangements in the eye to confirm payments. Studies have shown that retina scans have a higher accuracy rate than fingerprint scanners and face recognition, which can suffer from deformation over time. The arrangement in the retina is unique (each eye is different from the other) and stays the same for life. Aligatocoin will combine PvE with other security measures, such as OTP, fingerprints, and Google Authenticator, to cater for failures in any case.
Ease of Access
Consumers often get bored with an e-commerce platform or online store when they are unable to find what they want, and the search options on each online shop do not always make that easy. It is a plus for any online retailer to be able to anticipate the interests of their consumers. A retailer or e-commerce platform will keep consumers coming back if it understands their desires and delivers what they want in the shortest possible time.
Artificial Intelligence
Aligatocoin will use Artificial Intelligence to predict or discover the interests of each consumer by interpolating their previous shopping behavior and other data. Consumers can easily open a private chat with the friendly AI on the platform to get more personalized search results and recommendations on products that may interest them.
Quick Delivery
The time consumers have to wait for purchased goods to arrive at their homes is a major setback for e-commerce today. Consumers have to wait weeks for their shipments to be completed, not to mention the charges and import duties for cross-country movements. A lot of innovations are in test mode and will be introduced to e-commerce as time goes on.
Aligatocoin Drone Delivery
Aligatocoin proposes a drone delivery system for short distances, for a more exciting experience and a shorter delivery period for its consumers. If deliveries can happen by ship and by road, why not by air through a drone delivery system? Although the drone delivery system has a maximum weight per delivery, it will give consumers the ability to monitor their goods in transit. The drone delivery system will be introduced after the Aligatocoin platform launches, once some improvements have been made and patents acquired.
My Review
One of the exciting things about this project is that it is not just a new and upcoming project; it has an already existing platform, which will simply be tokenized. The project becomes more interesting, and its success more plausible, with the new technologies being introduced to the platform. I would like to submit that Aligatocoin is a great project with a promising future.
Token Details
For more information, Visit;
Website
Whitepaper
Facebook
Bitcointalk ANN
Telegram
Twitter
Petox34
Bitcointalk profile: https://bitcointalk.org/index.php?action=profile;u=2338476;sa=summary
[Post record: “Aligatocoin: The Future of E-Commerce.” · 100 claps · slug aligatocoin-the-future-of-e-commerce-1ab295c85711 · updated 2018-08-28 13:13:09 · https://medium.com/s/story/aligatocoin-the-future-of-e-commerce-1ab295c85711 · 1,125 words · tag: Blockchain (265,164 posts) · author: Oluwatosin Olumide (@ajalapetero) — “A passionate educator and seasoned public mobilizer.”]
[Post metadata: created 2018-03-18 14:08:32 · first published 2018-03-18 17:48:38 · latest published 2018-05-23 08:24:06 · language: en · post id 1ab379344b6e · reading time 1.93 min]
Probability theory: Expected value, Mode, Median
This is part of the course “Probability Theory and Statistics for Programmers”.
Probability Theory For Programmers
In this part, we will look at characteristics that show the position of a random variable on the numerical axis. All these characteristics have their own real-life applications, and you will see them very often, not only in probability theory but also in statistics, machine learning, and other fields.
One of the most important characteristics is the expected value: the sum of the products of all possible values of a random variable and the probabilities of those values.
expected value for discrete random variable
The expected value of a continuous random variable is an integral, where f(x) is the probability density function:
expected value for continuous random variable
This formula is obtained from the formula listed above; we just replace the components:
For mixed random variables, we have a formula where a sum is used over all break points and an integral over all intervals where the distribution function is continuous:
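The three formulas referenced above appear as images in the original article; in standard notation (the exact symbols in the original images may differ) they are:

```latex
% Discrete random variable with values x_i and probabilities p_i:
E[X] = \sum_i x_i \, p_i

% Continuous random variable with density f(x):
E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx

% Mixed random variable: a sum over the jump (break) points x_k of the
% distribution function F, plus an integral over the parts where F is continuous:
E[X] = \sum_k x_k \, \Delta F(x_k) + \int x \, F'(x) \, dx
```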
The next characteristic is the mode. For a discrete random variable, the mode is the most probable value; for a continuous random variable, it is the value where the probability density is largest.
The last characteristic we cover in this article is the median. The median of a (continuous or discrete) random variable is a value m for which:
If we look at the density curve of a continuous random variable, the median is the point that separates the curve into two parts with equal area:
median split curve in two parts
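As an illustration, here is how all three characteristics can be computed directly for a small discrete distribution (a toy distribution of my own, not one from the article):

```python
# Toy discrete distribution: value -> probability (probabilities sum to 1).
pmf = {1: 0.2, 2: 0.5, 3: 0.3}

# Expected value: sum of value * probability.
expected = sum(x * p for x, p in pmf.items())

# Mode: the most probable value.
mode = max(pmf, key=pmf.get)

# Median: smallest value m whose cumulative probability reaches 0.5.
cum = 0.0
for x in sorted(pmf):
    cum += pmf[x]
    if cum >= 0.5:
        median = x
        break

print(expected, mode, median)  # 2.1 2 2
```

Note that here the mode and median coincide at 2, while the expected value (2.1) is pulled slightly toward the heavier right tail.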
Next part ->
Clap if you enjoy 😎
[Post record: “Probability theory: Expected value, Mode, Median” · 34 claps · slug 9-expected-value-mode-median-1ab379344b6e · updated 2018-05-23 08:24:07 · https://medium.com/s/story/9-expected-value-mode-median-1ab379344b6e · 233 words · tag: Data Science (33,617 posts) · author: Rodion Chachura (@geekrodion) — geekrodion.com]
teacher( sandy ).
teacher( john ).
student( mary ).
student( peter ).
student( nick ).
student( mei ).
?- teacher( X ).
X=sandy
X=john
teacherOf( john, mary ).
teacherOf( john, peter ).
teacherOf( sandy, nick ).
teacherOf( sandy, mei ).
classmate( X, Y ) :- student(X), student(Y), teacherOf( A, X ), teacherOf( A, Y ).
?- classmate( X, peter ).
X=mary
X=peter
classmate( X, Y ) :- student(X), student(Y), teacherOf( A, X ), teacherOf( A, Y ), X\=Y.
?- classmate( X, peter ).
?- classmate( X, sandy ).
Prolog: gugu(lala).
English interpretation: ?
Prolog: ? gugu(X).
English: “Which X is a gugu?”
Answer: X=lala.
[Post metadata: created 2017-09-13 02:10:27 · first published 2017-09-13 02:01:41 · latest published 2017-09-13 02:27:07 · language: en · post id 1ab391762831 · reading time 7.12 min]
Prolog: Programming in Logic
Source: pixabay.com
In our series on AI technologies and their importance for society, we are now looking at an example of what is called the symbolic approach to artificial intelligence.
As opposed to so-called subsymbolic systems, symbolic AI tries to represent the things of the world inside the computer as “symbols”: variables in a programming language, propositions in a kind of logical calculus, and so on. A “thing,” let’s say a car, could be represented as a software object or a database record, and it would be described by a set of attributes, like: manufacturer, model, colour, engine type, weight, number of passengers, number of doors, and so on. All these different values that describe the car would be stored inside the computer as variables that hold a value: the symbol (=property) “manufacturer,” for instance, could hold the value “Toyota,” or “BMW,” or “Fiat.”
Contrast this for a moment with the way our brain works. If we open up a computer, we will find some particular memory location in which the value of the variable “manufacturer” is stored. The computer stores symbols, that is, variables and their values, directly. On the other hand, opening a human brain is unlikely to reveal a particular location where, say, the name “Toyota” is stored. It must be stored somewhere, but it doesn’t look like it is neatly tucked away between neurons 13776 and 13781 (we don’t really know how data storage works in the brain, though; so we might be mistaken about that). From what we know, it looks like information in the brain is not stored in the form of explicit symbols, but in the form of connections between brain cells (neurons). Remembering the name “Toyota” would mean that inside a group of neurons somewhere in one’s brain, there is a stronger connection between remembering the letter “T” and, immediately afterwards, associating it with the letter “O” (and so on). The whole word, in turn, gets recalled when another group of neurons (responsible for vision) recognises a particular shape in one’s field of vision: the shape of a car of that type. And so on. Storage of the word “Toyota” is therefore not isolated in the brain, as it is in a conventional computer. Instead, it is widely distributed across multiple neurons, and connected with various neuronal subsystems that are responsible for vision processing, letter and word recall, memory, even smell: if, as a child, one associated a smell of a particular car freshener with the family’s Toyota, then that smell is likely to cause a recall of the “Toyota” memory later in life. And this will happen even if no actual car is present anywhere near.
We can see how sometimes cognition is independent of symbols. For example, we might recognise the smell of a place. We might refer to it when we talk to others as “that smell, you know, of that particular place” (which is not actually describing anything, since we lack a good vocabulary for describing smells). So in this case, we are able to reliably recognise and identify a smell, but this processing is not symbolic: the recognition of the smell is not mediated through words and labels that we attach to the impression of the smell. Instead, we process the smell directly, as a smell, and this is presumably what a dog would also do when it recognises the smell of its owner without the use of an explicit description in words.
Prolog
Let’s now go back to symbolic processing. This is the type of processing we associate with traditional programming languages, like C or Pascal, but also with formal logic, mathematics, and even everyday language: saying a word, for example “cup,” is an act of symbolic processing. Instead of actually dealing with a physical cup, I am processing a symbol for a cup in my mind: the word “cup.” (Read more about symbols here.)
Prolog, a programming language developed at the beginning of the 1970s, combines this symbolic approach with basic concepts from formal logic, to make it possible to program computers “declaratively.” Most common programming languages are imperative languages: they tell the computer exactly what to do and how to do it, step by step. For example, in order to sell a product on an Internet website, the vendor must (1) display the product’s image and a button to buy it; (2) if the button is pressed, the product ID must be transferred into a shopping cart structure; (3) when the customer has finished shopping, he presses another button to check out; (4) if the check-out button is pressed, the website displays the contents of the shopping cart with their respective prices and a button to complete the transaction; (5) when that button is pressed, the amount shown is deducted from the user’s credit card; (6) if he does not have a credit card on record, the form to enter the credit card details must be displayed; and so on.
Conversely, in a declarative language, the programmer would “declare” what relationships exist between the symbols (which, in turn, represent things in the world) and then leave it to the program to find a solution to the problem. For instance:
After entering these “facts” into the Prolog system, we could ask a question in the form of a query:
The Prolog system would then try to match the variable X (note that it is a capital letter, which tells Prolog that this is a variable!) with the names given in the collection of facts. It would then give the answers:
But we can do more with Prolog. We could, for example, define a two-place teacherOf( teacher, student ) predicate, to record who is teaching whom:
This works as we would expect: teacherOf( X, mary ) would return X=john.
But we can do more with that. We can now define a predicate “classmate” that uses the teacher predicate. The idea is that a classmate is someone with whom you have the same teacher:
The “:-” means that the expression on the left will be true if the expressions on the right are true. You can just read it as “if”. The commas on the right side (“,”) express a logical “and”. That means that all the expressions on the right have to be true in order for the system to consider the predicate classmate to be true.
Related: Do Chairs Think? AI’s Three Kinds of Equivalence
Now we can ask:
That means: Who (X) is a classmate of Peter? The system answers:
Oops. Something strange happens here. What exactly? Well, if you look at the predicate classmate, it never says that one cannot be one’s own classmate! Since everyone has the same teacher as oneself, logically, everyone who is a student is always also their own classmate!
If we wanted to change that, we would have to exclude the case that someone is one’s own classmate, like this:
The operator “\=” in Prolog means “not equal,” so that in this case it is additionally required that X is different from Y. Let’s see:
Answer: X=mary, which is the expected answer.
We can also try:
which correctly answers false, meaning that no solution can be found, since Sandy is not a student but a teacher, and we explicitly limited the classmate predicate to students.
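The same classmate query can be emulated by plain filtering over the facts, which makes it clear that the “logic” here is just matching tuples. A minimal Python sketch (the facts mirror the Prolog snippets above; the function name is my own):

```python
# Facts, mirroring the Prolog database above.
students = {"mary", "peter", "nick", "mei"}
teachers = {"sandy", "john"}
teacher_of = {("john", "mary"), ("john", "peter"),
              ("sandy", "nick"), ("sandy", "mei")}

def classmates(y):
    """All X satisfying classmate(X, y): both students, shared teacher, X \\= y."""
    return sorted(
        x
        for x in students
        if y in students   # classmate/2 is limited to students
        and x != y         # the X \= Y condition that excludes oneself
        and any((t, x) in teacher_of and (t, y) in teacher_of for t in teachers)
    )

print(classmates("peter"))  # ['mary']
print(classmates("sandy"))  # []  -- sandy is a teacher, not a student
```

Without the `x != y` condition the list for "peter" would also contain "peter" itself, which is exactly the surprise the article describes.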
Syntax and meaning
Observe that the system does not know anything about teachers, or students, or Peter, Sandy, or Mary. The symbols “teacher,” “student,” and so on are just that: symbols, words that stand for something in the mind of the observer of the program, but that don’t mean anything to the program itself. The program just happily uses the same symbols the human operator used, and it is entirely up to the operator to project a suitable interpretation into those symbols!
For example:
Although this exchange is nonsensical, because “gugu” and “lala” are not words that have any meaning in our language, Prolog will treat them just like any other symbols, say “cat,” or “dog”. Because, of course, for Prolog “cat” and “dog” also have no meaning at all. They are exactly as meaningful to the system as “gugu” and “lala”. Words like “cat” and “dog” are only meaningful to the human operator, not to the system.
Related: Briefing: The Chinese Room argument
Syntactic manipulation
This is at once the core feature and also the core problem of symbolic AI of this type: the symbols it uses carry no meaning inside the computer. Their manipulation takes place not based on any meaning, but based only on syntactic, that is, positional properties of a symbol inside an expression. “gugu(X)” in Prolog will give X any value that appears in the database of facts inside the brackets for “gugu(),” without being at all bothered by the question of what “gugu” might mean.
This is, in a sense, what happens inside the Chinese Room. The person inside the room will manipulate the Chinese characters according only to the way they look, and how they match the characters in his rulebook, without considering their meaning (because he doesn’t speak Chinese). In fact, the Chinese Room argument was inspired by Prolog-like systems of symbolic AI, and is meant directly as a criticism of the idea that such systems could ever achieve genuine understanding.
With this, we arrive at the central assumption of symbolic AI:
If the symbol manipulation preserves the original relations between the symbols, the mapping of symbol to meaning can be left to the mind of the operator.
Or, as John Haugeland put it: “Take care of the syntax, and the semantics will take care of itself.”
It should perhaps be mentioned that not all symbolic AI has this problem. SHRDLU is an example of a symbolic AI system that is not affected by the Chinese Room criticism, since it does have a kind of “understanding” of what the symbols that it uses mean. Symbols like “a blue box” or “a red pyramid” represent particular objects that the system can recognise and manipulate, and thus they are not mere symbols without meaning, but meaningful representations of actual objects that the system can experience. In this case, we would say that SHRDLU’s symbols are “grounded,” which means that they have corresponding objects in the “real world” of the system (even if that “real world” is itself a simulation).
Originally published at moral-robots.com on September 13, 2017.
[Post record: “Prolog: Programming in Logic” · 19 claps · slug prolog-programming-in-logic-1ab391762831 · updated 2018-04-20 22:06:55 · https://medium.com/s/story/prolog-programming-in-logic-1ab391762831 · 1,834 words · tag: Artificial Intelligence (66,154 posts) · author: Moral Robots (@MoralRobots) — “Making sense of robot ethics. Read more at moral-robots.com”]
[Post metadata: created 2018-03-29 07:46:20 · first published 2018-03-29 07:49:34 · language: en · post id 1ab5dc7f66d4 · reading time 8.64 min]
The Mi Mix 2S: the best Xiaomi smartphone for now
Xiaomi released the Mi Mix 2 half a year ago, and is just launching its new flagship model, the Mi Mix 2S. This iteration time is a bit too short for a flagship model.
SEE ALSO: Xiaomi released MIX 2S, claimed better camera than iPhone X
We can tell you the hands-on experience of the Mi Mix 2S.
When Xiaomi produced the Mi Mix, it was thought to be a concept phone. Xiaomi then released the Mi Mix 2, proving the company's maturing capacity for innovation. The Mi Mix 2S brings the Mi Mix product line close to perfection.
The design looks familiar.
The Mi Mix 2S has a 5.99-inch 18:9 full HD+ display and a curved ceramic body. There is only an 8GB RAM and 256GB storage version.
From the front, the Xiaomi Mi Mix 2S is indistinguishable from the Mi Mix 2.
Xiaomi does not offer a special edition full-ceramic version of Mi Mix 2S.
Due to its metal frame, the Mi Mix 2S is larger than the full-ceramic version of the Mi Mix 2, and the same size as the standard version of the Mi Mix 2.
The Mi Mix 2S can be regarded as a slightly modified version of the Mi Mix 2, with which it shares many additional similarities. It keeps the same audio output and its top bezel is slightly wider than the side bezels. Compared to the currently popular notch design (such as in the iPhone X), its visual presentation is almost perfect.
Its earpiece can also serve as a speaker when the phone plays music or videos, a function absent from the Mi Mix 2.
However, due to the limitations of the sound-guide structure, the earpiece cannot act as a proper loudspeaker: its volume and sound quality are subpar. If you do not listen carefully, or if the bottom speaker is covered, you can barely hear anything. This phone may therefore not be ideal if you are looking for loudspeaker-grade audio.
The only visual difference from its predecessor is on the back of the phone. The ceramic body shows varied aesthetics under different lighting conditions, but the dual cameras affect the clean look of the back.
It is hard in any case to position dual cameras without affecting a phone's appearance. The cameras also protrude slightly, so when the phone is laid on a table, a touch on the screen makes it wobble.
Xiaomi also provided an official phone case to reduce fingerprint smudges and cover the protruding cameras.
Thanks to its ceramic material, the Mi Mix 2S preserves the smooth and comfortable texture of the Mi Mix 2.
Even though the phone can be handled with one hand, it weighs a lot; the weight is felt distinctly when it is placed in a pocket.
Improving photography features
This is not the first time the letter "S" has appeared in the name of a Xiaomi phone. Xiaomi has previously used the letter to indicate that a device differs little from its predecessor, apart from a few modifications and improvements in specifications and functions.
The Mi Mix 2S follows the same naming convention, and its dual camera is the most prominent difference.
The Mi Mix 2S's back camera is 12 MP with 1.4 μm pixels and uses the Sony IMX 363 sensor. It supports Dual PD full-resolution dual-core focusing and four-axis optical image stabilization.
Compared to the Sony IMX 386 with 1.2 μm pixels and f/2.0 aperture in last year’s Mi Mix 2, the Mix 2S’s photographic capability should be better.
In sufficient light, the Mi Mix 2S displays very accurate color presentation and exposure control. Under similar conditions, photos taken by the Mi Mix 2S are sharper and have more details than photos taken by the Mi Mix 2.
In low-light environments, the Mi Mix 2 raises the ISO significantly; while the colors are bright, photos look somewhat unrealistic, with a loss of detail. This problem has been resolved in the Mi Mix 2S. Below are some more sample photos:
The Mi Mix 2S also improved its night-time photography features, especially noise suppression, although the pictures tend to look bluish. The Mi Mix 2S takes longer to take photos during the night, and its image quality can be influenced by shakiness. These issues will be gradually resolved in subsequent system optimizations.
Generally speaking, the Mi Mix 2S is a significant improvement in photography compared to the Mi Mix 2. The Mi Mix 2S is also equipped with smart scenario recognition, identifying objects in pictures, such as buildings, people and animals.
What else can the “S” mean?
Beyond its cameras, the Mi Mix 2S also updates the processor. Xiaomi had previously revealed that the Mi Mix 2S would carry the Qualcomm Snapdragon 845, the standard in most current flagship models.
Aifaner (WeChat: ifanr) tested an exclusive hands-on version before its release. The Mi Mix 2S scored 260,000 on AnTuTu, a smartphone benchmarking platform. Although there is still room for improvement and optimization, its overall performance clearly improved over the Xiaomi Mi Mix 2 with the Qualcomm 835, which scored just over 140,000.
There is no lag when loading and running the mobile game "Man vs. Wild". Graphics quality is high, and after half an hour of continuous play the phone is only slightly warm. From this example, the Mi Mix 2S seems well-calibrated for gaming.
The Mi Mix 2S also supports a rapid face unlock function. Compared to the face unlock added to the Mi Mix 2 in a later system update, the Mi Mix 2S is clearly faster; in fact, it may be so fast that it becomes difficult to check the time on the lock screen.
The Mi Mix 2S does not support wake-by-gesture, however, so you must first press the power button or double-tap the screen before using face unlock.
Furthermore, face unlock can be unreliable at night due to insufficient lighting and the inconvenient camera angle: the front camera sits at the bottom of the phone, points at the jaw, and may have trouble capturing the whole face.
The phone can also be unlocked with a fingerprint, which is faster than face unlock.
As glass phone bodies started to become popular this year, Xiaomi also introduced wireless charging with the Mi Mix 2S. Xiaomi's wireless charging base supports 7.5 W fast charging, and charged 10 percent of the battery in 20 minutes.
Covered in a layer of soft silica gel, the charging base has a nice texture and provides some protection for the phone. However, it can get dusty easily and may need frequent cleaning. The charging base is priced at only 99 yuan ($16).
Xiao Ai and AR
In addition to hardware upgrades, Xiao Ai, a voice assistant, is also embedded in the Mi Mix 2S.
Although Xiao Ai currently has only some of its functions available on Xiaomi phones, it can answer questions and execute instructions with good accuracy.
Other than being a voice assistant, Xiao Ai also provides users with another way to navigate their phones. For example, sending WeChat messages and digital red envelopes, opening application functions, or even downloading apps can be achieved through voice commands. This function is very convenient when the user’s hands are occupied.
Xiao Ai can be triggered by saying "Xiao Ai Tong Xue", or by pressing the virtual home button. The Mi Mix 2S adds new navigation gestures, which hide the virtual button when in use. In that case, pressing the power button for 0.3 seconds activates Xiao Ai; a longer press brings up the power menu.
Besides its AI assistant, the Mi Mix 2S features hot AR functions. Xiaomi collaborated with Google’s ARCore platform to make the Mi Mix 2S one of the first Android phones to carry ARCore.
The AR game "The Machines" ran very smoothly on the phone, but noticeably heated it up. This issue needs to be addressed in subsequent optimizations to give a better experience in long gaming sessions.
An optimized version before “3.0”
The Mi Mix series proved Xiaomi's capacity for innovation. When Xiaomi introduced the "bezel-free" concept to the market with the original Mi Mix, even Google, the company at the core of Android, was still making phones with 16:9 screens.
The Mi Mix 2S does not leave as big an impression as the first-generation Mix did. Although the "bezel-free" concept was hot all last year, it is not the whole story of a phone.
The “S” notation has made up for the shortcomings in the first generation phones, with photography as the most significant improvement. This time, the “S” also indicates all the features that 2018 flagship models should have: dual-cameras, wireless charging, AI, AR, and the latest flagship processors.
The "S" also proved that this product line can approach perfection. It could be a symbol of the end of Xiaomi's "2.0 stage", preparing the company for the next "3.0 stage".
This article originally appeared in ifanr and was translated by Pandaily.
Post: The Mi Mix 2S: the best Xiaomi smartphone for now | 0 claps | 1,535 words | tag: Xiaomi | publication: Pandaily (Everything about China's innovation) | updated 2018-03-29 07:49:35 | https://medium.com/s/story/the-mi-mix-2s-the-best-xiaomi-smartphone-for-now-1ab5dc7f66d4
Next post (1ab9d3205e0a): created 2018-08-27 14:01:09 | first published 2018-09-02 05:47:04 | en | ~2.2 min read
AI Saturdays Bangalore Chapter — Week 2 Reflections.
After kicking off to an amazing start with the winter cycle of 2018 AI Saturdays, it couldn't have gotten any better for our next session. With over 60 participants from different stages of their careers, from students to seasoned tech professionals trying to understand AI, and over 100 people joining via the live stream, it was a Saturday filled with enthusiasm for this path-breaking technology. In this blog, I will cover the topics that were discussed in detail in the session and share some resources to support your learning of these concepts.
In the previous session, we discussed the workings of a neural network, the basic variant of the gradient descent optimization algorithm, its extension to the backpropagation algorithm, and the various activation functions currently used in the field, which mostly constitutes the first course of the deeplearning.ai specialization.
This week, we delved deeper into tuning deep learning models using regularization techniques, discussed the variants of gradient descent, and introduced a few more sophisticated algorithms such as RMSProp and ADAM with thorough explanations. We then discussed how techniques like batch normalization help us reach the minimum much faster. We later covered a way to check the fidelity of backpropagation in a network, and how to partially overcome the vanishing/exploding gradient problem using careful weight initialization. In-depth guides to all these topics are given below.
Topics discussed:
1. L1 and L2 regularization
2. Dropout regularization and early stopping
3. Gradient checking
4. Variants of gradient descent and other optimization algorithms
5. Batch normalization
6. Softmax classifier
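As a rough illustration of items 1 and 4 above (this is my own NumPy sketch, not the course's notebook code), here is how an L2-regularized gradient and a single ADAM update look:

```python
import numpy as np

def l2_regularized_grad(w, grad, lam):
    # Item 1: L2 regularization adds lam * w to the raw gradient,
    # penalizing large weights.
    return grad + lam * w

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Item 4: one ADAM update, combining momentum (m) with an
    # RMSProp-style per-parameter scaling (v).
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, -2.0])
grad = l2_regularized_grad(w, np.array([0.5, 0.5]), lam=0.01)
w, m, v = adam_step(w, grad, m=np.zeros(2), v=np.zeros(2), t=1)
```

Note how, at step t = 1, the bias-corrected update is roughly `lr` in the direction of the gradient's sign, regardless of the gradient's magnitude.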
All the above components together make a whole deep learning model along with another few elements to it which have been discussed in the previous two sessions. The participants have now gained a valuable experience and insight towards the various components in a neural network model and are capable of building a well generalized deep learning model.
We will now head towards solving real-world problems with hands-on coding in upcoming meetups, after the in-depth theoretical grounding of the past two sessions. The enthusiasm shown by the Bangalore members has been fantastic, and I, along with the other ambassadors of the Bangalore chapter, am thrilled to take it forward to new heights from here.
Smartbeings.ai
And finally we would like to thank smartbeings.ai for being a great host:)
In our upcoming session, which will be held at the Nvidia office, we will cover the third course from deeplearning.ai and introduce the amazing PyTorch framework. After that, we will dive into building some complex deep learning models to solve problems and see their capabilities in the sessions that follow.
To attend the next session, fill out the form here.
Assignments for the sessions conducted till date can be found here
Sign up here to attend next meetups.
All the discussed materials related to the meetup can be found on Github repo.
Follow AISaturdays Bangalore on twitter.
Post: AI Saturdays Bangalore Chapter — Week 2 Reflections. | 17 claps | 520 words | tag: Machine Learning | publication: AI Saturdays (Making rigorous AI education accessible and free, in 50+ cities globally. Sign up at https://nurture.ai/ai-saturdays) | author: Naren D (AI6 ambassador. Deep learning and cognitive science enthusiast. A math geek.) | updated 2018-09-24 07:37:02 | https://medium.com/s/story/ai-saturdays-bangalore-chapter-week-2-reflections-1ab9d3205e0a
Next post (1abdb40a868c): created 2018-06-25 10:18:51 | first published 2018-07-15 23:55:51 | en | ~5.1 min read
Our beautiful immune system in action – Modeling an interactive neural network
White Blood Cell chasing and consuming a Bacterial Organism
Continuing with a topic related to my previous article, I'd like to introduce a small simulation I made of how our white cells chase external threats, like bacteria. You can find the repo here.
The basics
Complex as it is, our immune system can track threats in different and fascinating ways, using different types of white cells, each relying on its own approach.
The foundations of my humble simulations are:
Bacteria run away from white cells
White cells chase the chemical traces left by bacteria
Actually, the second point is only partially true here. I didn't model chemical traces in the code, as that would require much more work, which is not the point of this tutorial. Instead, bacteria start with random movements and then learn from the white cells' movements, knowing that those are the places they don't want to be or move to.
The project
If you would like to know more about neural networks, you can visit this tutorial; it contains a full and detailed explanation of how the learning process occurs inside a neural network.
I used the synaptic.js library, an excellent JavaScript library for getting started on this subject. It has a great wiki with examples on which you can base your own developments and, on top of that, they are really easy to follow.
For this particular project, I worked with the Architect object. It allows you to create multilayer perceptrons, also known as feed-forward neural networks. They consist of a sequence of layers, each fully connected to the next one.
Perceptron
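In synaptic.js, the Architect object builds such a network in a single call, e.g. `new Architect.Perceptron(8, 12, 3)` (the layer sizes here are my illustration, not necessarily the ones the project uses). A rough NumPy sketch of the same fully connected feed-forward idea:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Perceptron:
    """Minimal feed-forward net: each layer fully connected to the next,
    mirroring synaptic.js's Architect.Perceptron (sizes are illustrative)."""
    def __init__(self, *sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(0.0, 0.5, (a, b))
                        for a, b in zip(sizes, sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def activate(self, x):
        # Forward pass: affine transform + sigmoid at every layer.
        for W, b in zip(self.weights, self.biases):
            x = sigmoid(x @ W + b)
        return x

# 8 inputs (2 bacteria x 4 values each), one hidden layer, 3 outputs
net = Perceptron(8, 12, 3)
out = net.activate(np.zeros(8))
```

The 3 outputs correspond to the movement values discussed below.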
So, for our case, we are going to need two different neural networks, trained with different data. First, we have a special white cell, the lymphocyte; and the bacterium itself. Take a quick look at the code below, but don't worry much about the particular values (you can play with them later ;) ).
As you can see, both of them declare a Perceptron with different inputs, hidden and output layer values. Let’s revise them a little bit:
The lymphocyte defines only 3 outputs (and so does the bacterium). Can you imagine why? Let's abstract our development goal by imagining what actually happens in real life. Lymphocytes move freely and "randomly" through our bodies, and whenever a bacterium disrupts the environment it gets targeted by a white cell, which moves towards it in order to "eat" it. OK, so basically it moves, right?
My intention is to make this easier by thinking of movements in a 2D world. In other words, we'll be working with vectors (oh my!) and a coordinate system composed of x and y values only.
Coming back to the core of the matter, this number 3 represents the values needed to move the lymphocyte (or bacterium) towards a particular place. At the end of the training sessions, in which our nets are going to "sweat" all the values out, that place will be the expected one: the bacteria's location for the lymphocytes, and a location away from the lymphocytes for the bacteria. Easy peasy!
Now, take a look to our world definitions…
It holds a collection of bacteria and lymphocytes, of course! Please keep this in mind, as it is the most important point for training our networks properly. Here, I chose to add 10 lymphocytes and 2 bacteria. Thus, for the input to be meaningful and correct for any of our organisms, we need to use their locations (decomposed into x and y values) and their velocities (decomposing the vector into x and y as well) to train our network.
So, for the lymphocyte neural net we'll have 2 components for the location and 2 for the velocity, giving us a total of 4 values times 2 bacteria, which equals 8.
Likewise for the bacteria: 4 values times 10 lymphocytes equals 40.
Besides the x and y values, we also need an angle (duh! of course I remember algebra from high school), which determines the vector's direction. What we are attempting is to "learn" from these values; in other words, to train our neural network so it knows, as accurately as possible, how to reach its target.
A couple of things are worth mentioning here:
The neural nets work in a cohesive way, meaning each group of organisms sticks together. Imagine a platoon of soldiers marching to a specific point where their enemies are barricaded; they have to work in groups and join forces so as to be as effective as they can.
a) Bacteria "stick together" in this particular exercise so it is easier to picture them on the canvas. I encourage you to work on this particular point to make it more interesting :) and try coding a non-dependent group of bacteria.
b) Lymphocytes do as well; however, thinking of real-life examples, they actually follow chemical tracks. We can think of them as being spread randomly through our bodies; when they suddenly encounter one of these tracks, they start to follow it immediately. So it could make some sense to have them work together but, again, it's done here for practical reasons.
Show me “the sweat”
Let’s take a closer look to the training process, starting with the lymphocytes.
This piece of code belongs to our world.js file. First, we loop through the lymphocytes collection, and in each cycle we loop through the bacteria collection and store their position and velocity values in an array. Next, we use this array to train each lymphocyte, "feeding" the network with where it should be moving. In addition, we define a learning rate, which can be modified to "play" with how fast, easily, slowly or wrongly the net behaves.
The target variable holds the cohesion I was talking about before. Above you can take a quick look at how it works. Through several functions in charge of defining and calculating positions based on the bacteria in our world (you can take a deeper look at them in the project source code on GitHub), it retrieves the cohesion value that lets the lymphocyte group stick together and, in addition, follow their target, the bacteria.
Finally, the loop draws our lymphocytes' positions on the canvas. The bacteria training process follows the same logic, but differs slightly when it comes to learning where to move.
2. The bacteria training looks like this (please don't panic, you'll see it's pretty straightforward once you read the explanation :) )
If you think about it for a moment, you'll realize what's going on here. Bacteria don't want to be close to lymphocytes, right? So their input cannot be the lymphocytes' positions, which leads us to ANY positions but the lymphocytes' ones. That is pretty much what's written in the code: we train our bacteria network with random positions and velocities, and then remove those belonging to lymphocytes. Because of this, we can end up with fewer than 40 inputs (which our bacteria net requires for training), so we keep generating random ones until we reach 40. Then we feed the net with these values and draw them on our canvas.
Sorry for the horrible resolution :)
AND THAT’S IT! :)
Hope you enjoyed this article as much as I did writing it! There are tons of things to work, change, modify or play with this project. Let me know how that goes! And please, if you really liked it don’t forget to applaud so everybody can enjoy it as well :)
Post: Our beautiful immune system in action – Modeling an interactive neural network | 252 claps | 1,251 words | tag: Machine Learning | publication: Coinmonks | author: Natalia Pattarone (Future MSc in Bioinformatics, Systems Engineer, wanting to leave something meaningful to this wonderful world) | updated 2018-07-18 22:48:44 | https://medium.com/s/story/our-beautiful-immune-system-in-action-modeling-an-interactive-neural-network-1abdb40a868c
Next post (1abfb09dba93): created 2018-08-20 16:25:14 | first published 2018-08-20 16:51:55 | en | ~6.7 min read
Enhancing Marketing automation with Machine learning
Marketing automation is a software platform that helps companies deliver highly personalized customer relationships and experiences at scale, by automating marketing campaign workflows to generate more leads, close more deals, and better measure marketing success through different communication channels:
Marketing Automation “Communication Channels” & “Customer Lifecycle Management (CLM)”
Marketing automation tools are used to tackle the whole customer lifecycle, accompanying prospects and customers to enhance their journey and raise their ARPU at each stage. The most common marketing automation use cases are as follows:
Marketing automation's most common use cases
Marketing automation leverages the large amounts of data that companies possess, using machine learning for segmentation (refining campaign targets), scoring (assessing customer attitude) and opportunity detection (revealing associations and hidden correlations). This helps ensure marketing campaign effectiveness, reach operational efficiency and grow revenue faster. Predictive and descriptive machine learning models help identify customers and their needs, so as to increase their likelihood of responding to a given campaign through specific channels:
Integration of Machine learning in Campaign Management process
Step 1: Segmentation
Customers' responses to marketing communications can differ depending on many criteria, such as the sales channel, customer gender, location, activity, transactions and other relevant information. Segmentation is an effective tool that groups customers with similar characteristics using historical data (their activity, purchasing habits and behavioral traits) and algorithms like principal component analysis (PCA), K-means or two-step clustering to find the clusters. Below is a step-by-step explanation of the PCA algorithm for segmentation:
Principal component analysis (PCA) Algorithm step-by-step
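The PCA steps just described can be sketched in plain NumPy (the toy customer table below is invented for illustration; a real pipeline would typically call a library implementation such as scikit-learn's):

```python
import numpy as np

def pca(X, n_components=2):
    """PCA step by step: center, covariance, eigen-decomposition, projection."""
    Xc = X - X.mean(axis=0)                 # 1. center each feature
    cov = np.cov(Xc, rowvar=False)          # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # 3. eigen-decomposition
    order = np.argsort(eigvals)[::-1]       # 4. sort by explained variance
    components = eigvecs[:, order[:n_components]]
    return Xc @ components                  # 5. project onto top components

# Toy customer table: [recency_days, n_orders, total_spend, site_visits]
X = np.array([[5, 20, 900.0, 50],
              [7, 18, 850.0, 45],
              [90, 2, 60.0, 3],
              [120, 1, 40.0, 2]])

Z = pca(X, n_components=2)  # 2-D coordinates, ready for K-means or plotting
```

The projected coordinates `Z` are what a clustering algorithm like K-means would then partition into segments.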
The resulting clusters must be built with marketers, so as to map them to understandable profiles that represent distinct "buyer personas". This helps better personalize communication with the company's audience, depending on their propensity to respond to particular offers or promotions, by building a refined strategy and messages for each persona (or segment) to fit each stage of the customer's journey.
Step 2: Scoring
This step enriches customer information by augmenting it with highly valuable new scores generated by machine learning algorithms, which help marketers maximize prospect conversion and customer ARPU. These scores are used in marketing campaigns as conditions for taking the adapted action:
Lead scoring: categorizes leads by differentiating those who are really interested in the product from those who are just starting to search for information; the higher the score, the higher the chance that a given customer is ready to convert. It can be calculated in two ways:
Rules engine: increase and decrease the lead score by summing interaction weights, for example: [+1 point] for a website visit, [+5 points] for a click on an email contact, [+10 points] for a click on the product catalog, [+20 points] for downloading the buyer guide, [+30 points] for accessing the payment form, [-10 points] after 1 month of inactivity, [-30 points] for unsubscribing from the newsletter. Drawback: interaction weights are defined manually and need constant adjustment.
Predictive analytics, especially regression: logistic regression, for instance, can be read directly as the probability of conversion. It allows you to:
Get rid of choosing predictors manually, by using feature selection algorithms like stepwise backward selection to pick the most relevant information about leads from demographic information, online behavior and email/social engagement.
Get rid of defining weights, since they are determined automatically by the regression algorithm during model training.
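The rules-engine variant can be sketched directly from the example weights above (the event names are my own labels):

```python
# Rules-engine lead scoring, using the example weights from the text.
EVENT_WEIGHTS = {
    "website_visit": 1,
    "email_contact_click": 5,
    "product_catalog_click": 10,
    "buyer_guide_download": 20,
    "payment_form_access": 30,
    "month_inactive": -10,
    "newsletter_unsubscribe": -30,
}

def lead_score(events):
    """Sum the weight of every recorded interaction for one lead."""
    return sum(EVENT_WEIGHTS[e] for e in events)

hot_lead = ["website_visit", "product_catalog_click", "payment_form_access"]
cold_lead = ["website_visit", "month_inactive", "newsletter_unsubscribe"]
print(lead_score(hot_lead), lead_score(cold_lead))  # 41 and -39
```

The manual weights are exactly the drawback noted above: a logistic regression would learn them from conversion history instead.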
RFM scoring [Recency, Frequency, Monetary]: provides accurate definitions of the best customers, the most loyal, the biggest spenders, almost-lost customers, lost customers and lost cheap customers.
Recency score: identify the interval [most recent purchase date, furthest purchase date] and bin it into 3, 4 or 5 ranks. Customers who purchased more recently are more likely to purchase again than customers who purchased further in the past.
Frequency score: identify the interval [highest purchase frequency, lowest purchase frequency] and bin it into 3, 4 or 5 ranks. Customers who have made more purchases in the past are more likely to respond than those who have made fewer purchases.
Monetary score: identify the interval [highest monetary value, lowest monetary value] and bin it into 3, 4 or 5 ranks. Customers who have spent more (in total across all purchases) in the past are more likely to respond than those who have spent less.
RFM score = [Recency score] × 10² + [Frequency score] × 10¹ + [Monetary score] × 10⁰
RFM score step-by-step
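The binning and the combined score can be sketched as follows (the five-way equal-frequency ranking helper and the toy data are my own illustration):

```python
import numpy as np

def rank(values, n_bins=5, higher_is_better=True):
    """Bin a metric into ranks 1..n_bins; rank n_bins marks the best customers."""
    order = np.argsort(values if higher_is_better else -np.asarray(values))
    ranks = np.empty(len(values), dtype=int)
    # equal-frequency binning: earliest in sort order -> lowest rank
    for i, idx in enumerate(order):
        ranks[idx] = 1 + (i * n_bins) // len(values)
    return ranks

# Toy data: days since last purchase, number of purchases, total spend
recency_days = [3, 40, 200, 10, 365]   # fewer days = better
frequency    = [25, 10, 2, 15, 1]
monetary     = [900.0, 400.0, 50.0, 600.0, 20.0]

r = rank(recency_days, higher_is_better=False)
f = rank(frequency)
m = rank(monetary)
rfm = r * 10**2 + f * 10**1 + m * 10**0  # e.g. 555 = best on all three axes
```

A score of 555 marks a best customer, while 111 marks a lost cheap customer.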
Next best offer (NBO): uses association algorithms like Apriori and CARMA, trained on historical data (customer spending habits), to propose to each customer the new products and upgrades that best fit their needs, helping companies adopt a customer-centric approach, increase conversions and encourage sales.
Churn score (attrition rate): predicts which customers have a high likelihood of cancelling a subscription to a service.
Step 3: A/B testing
A/B testing is used in marketing campaigns to assess a campaign's different variants, so as to determine the most effective one (sending the right message at the right time), i.e. the one that generates the highest customer response (best click-view, click-through and open rates). A/B testing provides good insights into which wording, visuals and sending times will work best for each specific segment. The tested elements are as follows:
• Email subject line, which is seen first by the audience and can affect the open rate.
• Subject design & content, such as different text, layouts and visuals, or video vs. 2D images, which affect clicks and conversions.
• Sending time & frequency: find a happy balance for sending frequency and time, taking into account customer behavior and feedback, the type of your business and the strategy adopted by industry peers.
To execute A/B testing, we split the audience (the whole target, or preferably only new prospects and customers) into as many segments as there are campaign variants to be tested, and then use prospect/customer feedback history and two kinds of tools to assess the significance of the difference between the tested cases:
Statistical (frequentist) methods: two types of statistical tools are used:
Statistical (frequentist) methods with 2 examples : T-test & Chi-square test
Machine learning methods: such as Bayesian A/B testing, where each case is treated as a random variable to which we assign a prior probability inferred from past knowledge of similar experiments; we combine this prior probability with the current experiment's data to find the right answer to the test.
Posterior ∝ Likelihood × Prior
Bayesian A/B testing: calculating posterior probability based on prior probability and new data
Usually, we test one thing at a time to get accurate results, and tests are differentiated by location, gender and segment. Continual testing and optimization also remain essential after campaigns go live.
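A minimal sketch of such a Bayesian A/B comparison, using a flat Beta(1, 1) prior and invented click counts for two email variants:

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed data: clicks out of sends for two email variants (made-up numbers)
clicks_a, sends_a = 120, 1000
clicks_b, sends_b = 145, 1000

# Beta(1, 1) prior + binomial data -> Beta posterior over each click rate
# (posterior proportional to likelihood times prior)
post_a = rng.beta(1 + clicks_a, 1 + sends_a - clicks_a, size=100_000)
post_b = rng.beta(1 + clicks_b, 1 + sends_b - clicks_b, size=100_000)

# Probability that variant B truly has the higher click rate
p_b_beats_a = (post_b > post_a).mean()
print(f"P(variant B has the higher click rate) = {p_b_beats_a:.3f}")
```

Unlike a frequentist p-value, this directly answers the marketer's question: how likely is it that B is actually better than A?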
Step 4: Building the marketing campaign workflow
To automate business scenarios, marketers build campaign workflows: sequences of steps and tasks with predefined checkpoints of events (rules and conditions) that trigger touchpoints (email, SMS, …), accompanying customers to guide them and light up their choices along a predefined path. Four main elements are used in the workflow:
Basic elements used to build Marketing Campaign Workflows
You can see below, 2 use cases of campaign workflow:
Cross-sell email campaign workflow:
Cross-sell email campaign workflow
Recover abandoned shopping carts workflow:
Recover abandoned shopping carts workflow
Step 5: Measure Campaign Performance
At this stage, the goal is to monitor campaign results in real time and track campaign effectiveness, by detecting opened emails (views, reads, location), clicks made from emails, website traffic generation (number of visits to landing pages) and sales through the website/outlet (cost per lead, lead-to-close ratio, conversion rate, return on investment ROI).
Campaign Performance KPI
For campaigns on social media, the following metrics can be assessed: new followers, comments, likes, retweets, channel views, bounces and subscribers.
The performance of a campaign is also assessed by comparing these KPIs for the target group vs. the control group, to see whether there was a real impact that substantially exceeds organic sales growth.
At this stage, machine learning (regression or neural networks) can also be used to predict these KPIs before campaign execution, based on historical data from previous campaigns, including customers' feedback and behavior, so as to avoid bad campaign parameters that lead to weak performance.
Step 6: Refine your model
Applying machine learning to a marketing system keeps campaign performance at the top: the models in use are tuned periodically and automatically, based on new data collected from ongoing campaigns, which improves their accuracy and their power to provide the right predictions.
This is where machine learning gets really interesting, as you end up with a system that changes and improves itself over time.
Conclusion
Marketing automation is a must-have tool for marketers, and it is enhanced by leveraging machine learning capabilities, which provide the capacity to process data at scale and to choose the right target and the right action at the right time, helping to optimize user experience, increase customer loyalty and commitment to the brand, and raise ROI.
|
Enhancing Marketing automation with Machine learning
| 0
|
enhancing-marketing-automation-with-machine-learning-1abfb09dba93
|
2018-09-07
|
2018-09-07 14:07:36
|
https://medium.com/s/story/enhancing-marketing-automation-with-machine-learning-1abfb09dba93
| false
| 1,423
| null | null | null | null | null | null | null | null | null |
Marketing Automation
|
marketing-automation
|
Marketing Automation
| 2,860
|
Youssef Fenjiro
|
Data scientist, Machine learning, Deep learning and Artificial intelligence.
|
e862cf80eb37
|
fenjiro
| 1
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-17
|
2018-02-17 14:16:48
|
2018-02-16
|
2018-02-16 12:18:22
| 1
| false
|
en
|
2018-02-17
|
2018-02-17 14:18:07
| 8
|
1ac07c37fa2a
| 3.215094
| 27
| 0
| 0
|
Nowadays entrepreneurs have a lot of options to choose from when it comes to business management software applications. What’s not that…
| 5
|
Obizcoin: Entrepreneurs rejoice with this new technology based on AI & Blockchain
Nowadays entrepreneurs have a lot of options to choose from when it comes to business management software applications. What’s not that easy is finding the right solution and then tailor-making it to suit the strategic and operational requirements of the business. And even with so many options, there’s still no easy way for entrepreneurs to streamline their business processes, monitor performance and gain an insight into the operational health of their businesses. Various software products available in the market today are good at providing tons of data in the form of reports but do not tell you what to do with these reports, or how and when to use them.
With startups and SMEs constituting nearly seventy percent of the businesses in the world, and the majority of them facing common problems and challenges in the area of business process management, there is a huge void for introspection and improvisation.
Obizcoin is developing a Smart Business Process Automation BOT (currently in development stage) based on Artificial Intelligence and Blockchain Technology to help entrepreneurs effectively and efficiently manage their business processes and keep a tab on the operational health of their businesses.
How Obizcoin Smart Process BOT can provide an edge to startups and SMEs
BOT Domain Expertise
Lack of domain knowledge and expertise can prove to be one of the self-inflicted wounds to a business enterprise and no entrepreneur would ever wish to indulge in such a rodeo.
The BOT will be equipped with the domain knowledge of specific businesses, derived from Knowledge Pool of experts of various industry experience, to provide industry domain expert solutions.
This will help entrepreneurs save time on research and development and speed up the turn-around-time from development to execution. With the addition of new business domain knowledge, the knowledge pool of the BOT will keep on expanding.
BOT Process Design and Process Decisions
Processes are vital for an organization and can prove to be very instrumental in bringing standardization, scalability, ease in operations, efficiency and resource savings etc.
The BOT will be equipped with inbuilt processes designed for different departments like Purchase, Sales etc for different categories of organizations (startups, SMEs) for different industries like e-commerce, apparel, education, healthcare, retail, hospitality, etc. These processes can be customized according to the strategic and operational requirements of a business enterprise.
The BOT will also help businesses to take appropriate process decisions for optimum utilization of resources towards a balanced and sustainable growth of the company.
BOT Audit System
Audit is an important tool of management control. The purpose of audit is not just to travel back in time to churn out data on what has already happened but more importantly, to shed light on what can be improved.
With the help of Artificial Intelligence, the BOT will conduct process audits (tracking and reviewing processes and performances) of the organization towards improving the efficiency and effectiveness of the processes.
Operational Risk Score Analysis
Monitoring and measurement are two important requirements for improving performance. The same applies to striving for business process excellence as well.
Operational Risk Score Analysis is one of the key features of Obizcoin Smart Process BOT and it shall be helpful for startups and SMEs to determine the degree of process implementation within the organization, to measure team performance across the organization, and to assess the strengths and weaknesses of the organization.
In addition to determining the score, the feature will also include an analysis report on the top and weak scoring parameters of the organization and on strategies for improvisations.
Domain knowledge and assistance in process design and decisions can provide a strong impetus to startups and SMEs in the face of the challenges and problems which they encounter in business process management. With the passage of time, improvisations also become necessary which call for effective monitoring and measurement. And talking about business and technology, it has become very important for entrepreneurs to embrace and make use of the latest technologies which can help them manage their businesses better and also provide them a competitive edge in the dynamic business environment of the present day.
Disclaimer: The above content/article is intended only to provide a general overview and is not to be used as a basis for the exercise of any business, professional or investment judgment/action.
For detailed information, please visit the company’s website and carefully read all the necessary documents.
To read more articles related to this topic: Obizcoin: Your Virtual CEO, Make your business processes SMART with Obizcoin BOT, How Obizcoin BOT can help your Business Grow?, Managing Business Efficiently using Smart Process BOT, Obizcoin’s Technology: A boon to startup community, Obizcoin will revolutionize the way of doing business!
Visit : Obizcoin.io
Originally published at medium.com on February 16, 2018.
|
Obizcoin: Entrepreneurs rejoice with this new technology based on AI & Blockchain
| 1,174
|
obizcoin-entrepreneurs-rejoice-with-this-new-technology-based-on-ai-blockchain-1ac07c37fa2a
|
2018-02-21
|
2018-02-21 13:46:11
|
https://medium.com/s/story/obizcoin-entrepreneurs-rejoice-with-this-new-technology-based-on-ai-blockchain-1ac07c37fa2a
| false
| 799
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Varun Shah
|
Co-Founder at Your Retail Coach and OBIZCOIN
|
9ab0cb8539f9
|
accessvarun
| 45
| 17
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
91551494e820
|
2018-08-28
|
2018-08-28 19:17:55
|
2018-08-29
|
2018-08-29 06:40:50
| 7
| false
|
en
|
2018-09-06
|
2018-09-06 21:01:48
| 6
|
1ac251ebc1ce
| 4.393396
| 5
| 0
| 1
|
Welcome back to the Tensorflow series!!
| 5
|
Generative Adversarial Networks using Tensorflow
Welcome back to the Tensorflow series!!
In this tutorial, we will be exploring Generative Adversarial Networks. If you’re interested in our other intuitive tutorials on deep learning, follow us here. Also, don’t forget to subscribe to our newsletter for more updates!
This post is a primer on Generative Adversarial Networks or simply called GANs.
Generative adversarial networks (GANs) are deep neural net architectures comprising two networks that compete against each other, hence the name “adversarial”. GANs were introduced in a 2014 paper by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio.
GANs are designed to mimic any distribution of data. That is, GANs can be taught to create worlds eerily similar to our own in any domain: images, music, speech, prose. To better understand what I mean by this, let’s go through an example.
The Mona Lisa Analogy
Let’s consider a scenario where a forger is trying to forge the famous portrait by Leonardo Da Vinci and a detective is trying to catch the forger’s paintings. Now, the detective has access to the real portrait and hence can compare the forger’s paintings to find even the slightest differences. So, in machine learning terms, the forger is called the “Generator” and generates fake data, and the detective is called the “Discriminator”, which is responsible for classifying the output of the generator as fake or real.
GANs are computationally expensive, in the sense that they require high-powered GPUs to produce good results. Below are some of the fake celebrity faces produced by GANs after training for several epochs on 8 Tesla V100 GPUs for 4 days!!
We will be using a less hardware-intensive example. In this post, we will use the MNIST data set to play around with a simple GAN built using Tensorflow’s layers API. Before going into the code, I will discuss two problems that commonly occur with GANs:
Discriminator overpowering Generator: Sometimes the discriminator begins to classify all generated examples as fake due to the slightest differences. To rectify this, we will work with the discriminator’s unscaled logits instead of a sigmoid output (which squashes values between zero and one).
Mode Collapse: The generator discovers some potential weakness in the discriminator and exploits that weakness to continually produce a similar example regardless of variation in input.
So, finally, let’s get to the code!!
Firstly, we import the necessary libraries and read in the MNIST dataset from tensorflow.examples.tutorials.mnist.
Next, we will create two functions which represent the two networks. Note the second parameter, reuse; I will explain its utility a little later.
Both the networks have two hidden layers and an output layer which are densely or fully connected layers.
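The post builds these layers with TensorFlow’s layers API; as a framework-free sketch of the same shape (all sizes, weights and initializations here are illustrative assumptions, not the article’s actual values), the generator’s forward pass might look like:

```python
import numpy as np

def dense(x, w, b, relu=False):
    """A fully connected layer: x @ w + b, optionally followed by ReLU."""
    out = x @ w + b
    return np.maximum(out, 0.0) if relu else out

rng = np.random.default_rng(0)
# Generator: 100-dim noise -> 128 -> 128 -> 784 (a flattened 28x28 MNIST image)
w1, b1 = 0.01 * rng.standard_normal((100, 128)), np.zeros(128)
w2, b2 = 0.01 * rng.standard_normal((128, 128)), np.zeros(128)
w3, b3 = 0.01 * rng.standard_normal((128, 784)), np.zeros(784)

z = rng.standard_normal((1, 100))       # random noise input
h = dense(dense(z, w1, b1, relu=True), w2, b2, relu=True)
fake_image = np.tanh(dense(h, w3, b3))  # tanh squashes pixels into (-1, 1)
```

The discriminator has the same two-hidden-layer structure, but its output layer is a single logit rather than an image.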
Now, we create the placeholders for our inputs. real_images holds the actual images from MNIST and z is the 100-dimensional random noise vector fed to the generator. For the discriminator to classify images, it first has to know what the real images look like, so we have two calls to the discriminator function: the first to learn the actual images and the second to identify the fake images. reuse is set to True in the second call because when the same variables are used in two function calls, the TensorFlow computation graph gets ambiguous signals and tends to throw value errors.
Next, we define the loss function for our network. The labels_in parameter gives the target label based on which the loss function performs its calculations. The second argument for D_real_loss is tf.ones_like, as we aim to produce true labels for all real images, but we add a bit of noise (label smoothing) so as to address the overpowering problem.
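The kind of sigmoid cross-entropy typically used here (e.g. tf.nn.sigmoid_cross_entropy_with_logits) can be sketched in NumPy as below; scaling the all-ones labels by 0.9 is the “bit of noise” (label smoothing). The sample logits are made up:

```python
import numpy as np

def sigmoid_xent_with_logits(logits, labels):
    # Numerically stable form of the formula computed by
    # tf.nn.sigmoid_cross_entropy_with_logits
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

d_logits_real = np.array([2.0, 3.0, 1.5])         # discriminator logits on real images
smooth_labels = np.full_like(d_logits_real, 0.9)  # tf.ones_like(...) * 0.9
d_real_loss = sigmoid_xent_with_logits(d_logits_real, smooth_labels).mean()
```

With smoothed labels the discriminator is never pushed to be perfectly confident on real images, which keeps its gradients useful to the generator.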
When we have two separate networks interacting with each other, we have to account for the variables in each network’s scope; hence tf.variable_scope was set while defining the functions. We will be using the Adam optimizer for this example. We set the batch_size and the number of epochs. Increasing the number of epochs leads to better results, so play around with it (preferably if you have access to GPUs).
Finally, we initiate the session and use the next_batch() method from the TensorFlow helper functions to train the networks. We grab a random sample from the generated samples of the Generator and append it to the samples list.
Plotting the first value from the samples shows the generator performance after the first epoch. Comparing it to the last value from the samples shows how well the generator has performed.
The outputs look something like this:
Sample after 0th epoch
Sample after 99th epoch
These are very poor results, but as we are training our model merely on a CPU, the learning is quite weak. Still, the model is starting to generate pixels in a more coherent way, and we can just about make out that this digit is ‘3’.
GANs are meant to be trained on GPUs, so try getting access to a GPU or simply try out Google Colab to get much better results.
All right, so this was a Generative Adversarial Network built from scratch in Tensorflow. Click on the banner below to get the full code, and follow our publication and our Facebook page for more such posts.
Till next time!!
|
Generative Adversarial Networks using Tensorflow
| 54
|
generative-adversarial-networks-using-tensorflow-1ac251ebc1ce
|
2018-09-06
|
2018-09-06 21:01:48
|
https://medium.com/s/story/generative-adversarial-networks-using-tensorflow-1ac251ebc1ce
| false
| 886
|
Machine Learning concepts boiled down to their simplest implementations. We aim at inspiring innovation for ML beginners using the core concepts and their applications.
| null |
themlblog
| null |
the ML blog
|
mailthemlblog@gmail.com
|
themlblog
|
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,NEURAL NETWORKS,PYTHON,DATA SCIENCE
| null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Tathagat Dasgupta
|
Graduate student at UC Irvine. Actively seeking internship in data analytics.
|
3ee7334400e3
|
tathagatdasgupta
| 63
| 17
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
4f0f30a263b4
|
2018-05-11
|
2018-05-11 06:14:36
|
2018-05-11
|
2018-05-11 06:16:13
| 8
| false
|
en
|
2018-05-29
|
2018-05-29 07:39:46
| 11
|
1ac32911a7d0
| 3.07673
| 8
| 0
| 0
|
We have talked a lot about chatbots for customer support in our previous pieces. And a scalable, efficient and cost-effective human+bot…
| 5
|
How to Integrate Bot Using Dialogflow in Kommunicate?
We have talked a lot about chatbots for customer support in our previous pieces. And a scalable, efficient and cost-effective human+bot hybrid system has always been our philosophy.
In the past few months, we have released a lot of helpful bots to help you accelerate your customer support efforts. In this piece, I am going to walk you through the steps to integrate your own Dialogflow custom bot in Kommunicate.
Step 1: Login to your Kommunicate dashboard
Login to your Kommunicate dashboard and navigate to the Bot section. If you do not have an account, you can create one here. Locate the Dialogflow section and click on Settings.
Step 2: Get your Dialogflow tokens
After clicking on Settings, a popup box will open. You will be asked for Client and Developer access tokens.
You can get these tokens by logging into your Dialogflow console.
Click on the Settings Icon and scroll down to locate Client access token and Developer access token.
Copy and paste these tokens into the Dialogflow popup box in Kommunicate dashboard and click Next.
Step 3: Integrate and setup your bot profile
In the bot profile section, you will be able to give your bot a name. This name will be visible to your customer whenever the bot interacts with them.
To complete the setup, click on Integrate and setup Bot Profile. You can check your newly created bot in two places:
Dashboard →Bot Integration → Integrated Bots: You can check all your integrated bots here
Dashboard → Bot Integration: your Dialogflow icon should be green, showing the number of bots you have successfully integrated.
Step 4: Using bots in chat beacon
To use the bot in customer conversation, you need to pass additional parameters in your original Kommunicate plugin code.
Navigate to Dashboard →Settings. Click on Install under the Configuration section.
Copy the JavaScript code to be added either in your website or your application. Before pasting, you can add botIds parameter to integrate your bot in the chat. In this parameter, you can pass one or more botids depending upon your requirement. See the example:
More information on bot integration can be found here.
In these few simple steps, you can integrate a bot using Dialogflow in Kommunicate and automate the mundane so that your agents can concentrate on what only humans can do.
Related Reads:
Bot + Human Hybrid — The New Era of Customer Support
Bots Are Here To Stay — So Are Your Customer Support Agents
PS: I thank Vipin from Kommunicate team for helping me with initial notes.
Subscribe here to get the good stuff — we solemnly swear to deliver top of the line, out of the box and super beneficial content to you once a week.
At Kommunicate, we are envisioning a world-beating customer support solution to empower the new era of customer support. We would love to have you onboard to have a first-hand experience of Kommunicate. You can signup here and start delighting your customers right away.
This article was originally published here.
|
How to Integrate Bot Using Dialogflow in Kommunicate?
| 122
|
how-to-integrate-bot-using-dialogflow-in-kommunicate-1ac32911a7d0
|
2018-05-29
|
2018-05-29 07:39:47
|
https://medium.com/s/story/how-to-integrate-bot-using-dialogflow-in-kommunicate-1ac32911a7d0
| false
| 515
|
Kommunicate is a modern customer communication software for real-time, proactive and personalised support.
| null |
kommunicateio
| null |
Kommunicate
|
hello@kommunicate.io
|
kommunicate
|
CUSTOMER SUCCESS,CUSTOMER SUPPORT,CUSTOMER SERVICE,CUSTOMER RETENTION,CUSTOMER ENGAGEMENT
|
kommunicateio
|
Bots
|
bots
|
Bots
| 14,158
|
Parth Shrivastava
|
Product, Marketing and Startups. I read a lot, write a lot.
|
b1554e3c7349
|
parthshrivastava
| 279
| 86
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-10
|
2018-09-10 20:59:38
|
2018-09-10
|
2018-09-10 21:35:37
| 1
| false
|
en
|
2018-09-10
|
2018-09-10 21:35:37
| 1
|
1ac4c204dbb1
| 0.954717
| 0
| 0
| 0
|
The SoCal Data Science Conference of 2018 is coming soon!
| 5
|
IDEAS AI & Data Science Conference 2018
The SoCal Data Science Conference of 2018 is coming soon!
It will be held at the LA Convention Center on 10/20/2018 (https://www.ideassn.org/socal-2018/)
in which we will demonstrate 8 major sessions covering all walks of life:
Artificial Intelligence
Blockchain
Big Data
Machine Learning
Fintech
Healthcare
Gaming
Digital Ads/Marketing
This is the must-attend event for anyone who is already in the field or looking to expand their career network in the data science area. Its mission is to build a platform for tech companies to interact with our audiences and bridge the gap between them. All of the topics mentioned above are hot topics all over the world. On that day there will be 200+ speakers, experts with valuable hands-on experience in their respective fields, exploring state-of-the-art technology; 20+ corporate booths providing networking opportunities; and 3,000+ participants attending the event.
Excitingly, here is what you will gain by attending the event:
The most professional venue
The perfect schedule
The most exciting speaker team
The richest and most colorful networking activities
Just waiting for your arrival!
|
IDEAS AI & Data Science Conference 2018
| 0
|
ideas-ai-data-science-conference-2018-1ac4c204dbb1
|
2018-09-10
|
2018-09-10 21:35:37
|
https://medium.com/s/story/ideas-ai-data-science-conference-2018-1ac4c204dbb1
| false
| 200
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
IDEAS
|
www.ideassn.org - International Data Engineering And Science Association
|
d2d978495ac6
|
ideastiffany
| 5
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
with tf.name_scope('GenLoss'):
gen_loss = tf.reduce_mean(tf.losses.mean_squared_error(
real_image_input,gen_sample))
with tf.name_scope('DiscLoss'):
disc_loss = tf.reduce_mean(tf.losses.mean_squared_error(
real_image_input,stacked_gan))
with tf.name_scope('DiscModel'):
stacked_gan = discriminator(gen_sample, reuse=True)
| 3
| null |
2018-05-18
|
2018-05-18 08:35:53
|
2018-05-18
|
2018-05-18 09:07:00
| 0
| false
|
en
|
2018-05-18
|
2018-05-18 12:52:29
| 1
|
1ac62dff3f77
| 1.320755
| 1
| 0
| 0
|
Deep learning articles and tutorials are prevalent on Internet. Besides formal research papers, you can usually find some articles and…
| 3
|
Confusing GAN tutorial
Deep learning articles and tutorials are prevalent on Internet. Besides formal research papers, you can usually find some articles and examples helping you understand the network architectures and algorithms.
However, as the number of tutorials is overwhelming, it’s not difficult to find confusing ones.
I happened to look into an article on KDnuggets about GANs. I may have been led to this article by a tweet. As I’m looking into various deep learning architectures, I spent some time on its code.
It’s a confusing article, and so is its example code, I must say. The code runs without problems, but I’m not sure it is a GAN.
This is how it defines the generator’s loss:
The generator takes an image input and outputs a same-dimension image. If I didn’t miss anything, the generator usually takes a random input and tries to produce an output close to real data such as images. The generator defined in the article/example takes an image mixed with some noisy signals and tries to output an image without the noise.
Here it defines a loss which makes the generator do its best to remove noise from the input image. So it sounds like it’s not really a generator, is it?
The loss of the discriminator is defined as follows:
stacked_gan is the output of discriminator:
So the loss makes the discriminator also try to output an image which is close to the real image input. Doesn’t that sound like the same goal as the generator defined above?
With the two networks having the same goal, in the end we don’t really have so-called “Adversarial Networks”, I think.
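For contrast, in a conventional GAN the two losses pull in opposite directions over the same discriminator logits. A NumPy sketch with made-up logit values:

```python
import numpy as np

def bce_logits(logits, labels):
    # Sigmoid cross-entropy, as in tf.nn.sigmoid_cross_entropy_with_logits
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

d_real = np.array([1.2, 0.8])   # discriminator logits on real images
d_fake = np.array([-0.5, 0.3])  # discriminator logits on generated images

# Discriminator objective: score real as 1 and fake as 0
disc_loss = bce_logits(d_real, np.ones_like(d_real)).mean() \
          + bce_logits(d_fake, np.zeros_like(d_fake)).mean()
# Generator objective: make the same fake logits be scored as 1 -- the opposite goal
gen_loss = bce_logits(d_fake, np.ones_like(d_fake)).mean()
```

Because the generator is rewarded exactly where the discriminator is penalized, the two networks genuinely compete, unlike the two mean-squared-error losses in the quoted tutorial.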
I’m not sure how often the deep learning articles and tutorials we find are incorrect or misleading. As these are complicated ideas and technologies, it is not always easy to write completely correct articles, tutorials and examples about them. But we must be careful when reading and studying them, because we want to gain understanding instead of getting confused.
|
Confusing GAN tutorial
| 1
|
how-to-generate-confusing-gan-tutorial-1ac62dff3f77
|
2018-05-18
|
2018-05-18 16:49:43
|
https://medium.com/s/story/how-to-generate-confusing-gan-tutorial-1ac62dff3f77
| false
| 350
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Liang-Chi Hsieh
|
Spark, Big-Data, Machine Learning, Deep Learning
|
d30d3d641963
|
viirya
| 9
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
b2662e5326a5
|
2018-05-28
|
2018-05-28 13:59:50
|
2018-05-28
|
2018-05-28 14:15:20
| 2
| false
|
en
|
2018-09-20
|
2018-09-20 14:55:31
| 4
|
1ac687238400
| 1.198428
| 24
| 0
| 0
|
We have made significant progress in the past few weeks. To provide some context, our technology is made up of three interdependent pieces…
| 4
|
GenesisAI — Tech progress update
We have made significant progress in the past few weeks. To provide some context, our technology is made up of three interdependent pieces: the smart contracts which power the marketplace for AI services, the service provider code that will allow sellers of AI services to adjust their existing algorithms for use on the marketplace, and the website where buyers and sellers will be able to browse, request, and pay for services.
We’ve already completed a preliminary iteration of the marketplace. Providers post tasks they are able to perform, and anyone who is interested in a particular AI task can request it for the stated price in GAI tokens. The initial service provider code is nearly finished as well. We’ve used gRPC for multi-language algorithm support and performance. As a provider, getting services up and running on the marketplace will require little more than copying and pasting the existing code and then modifying it according to templates that we provide. We’re still early in development of the website, but there are not many technical hurdles to overcome. All in all, things are looking very promising.
Learn more and follow the GenesisAI Vs. AI oligopolies Saga
Follow Our Twitter
Visit Our Website: www.genesisai.io
Join Our Telegram
Read our White Paper
Email:archil@genesisai[dot]io
|
GenesisAI — Tech progress update
| 1,008
|
genesisai-tech-progress-update-1ac687238400
|
2018-09-20
|
2018-09-20 14:55:31
|
https://medium.com/s/story/genesisai-tech-progress-update-1ac687238400
| false
| 216
|
A decentralized marketplace for AI services
| null | null | null |
GenesisAI
|
archil@genesisai.io
|
genesisai
|
GENESIS,AI,BLOCKCHAIN,CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE
|
genesisai1
|
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Archil Cheishvili
|
Harvard College’16. Founder @GenesisAI. Founder @Palatine Analytics. 2 exits in emerging markets.
|
c1667e50a11d
|
archil_75663
| 21
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-25
|
2017-10-25 17:09:21
|
2017-10-25
|
2017-10-25 17:16:20
| 1
| false
|
en
|
2017-10-25
|
2017-10-25 17:16:20
| 1
|
1acb7d3d9fc1
| 1.615094
| 3
| 0
| 0
|
What does a good use of a hospital theatre look like? We don’t want surgeons and their teams to work all hours, or stay behind late to get…
| 5
|
How to stop operating theatres from wasting two hours each day
What does a good use of a hospital theatre look like? We don’t want surgeons and their teams to work all hours, or stay behind late to get through more work. But nor do we want to see hospital theatres waste two hours a day – a staggering 5,500 operations missed each day in England. (This is according to research undertaken by NHSI and reported by the BBC today.)
So what can we do?
Three reasons are cited in the NHSI report for the inefficient use of theatre time:
Operations start later than they should – coordinating people first thing in the morning is difficult
Operations finish earlier than they should – cancellations or DNAs (patients who do not attend) can mean time goes unused
Scheduling of operating lists is inefficient – knowing in advance how long operations will take is difficult
The team at one of the trusts we are working with have started to look at how they can fix the third problem: safely fit more operations into their theatres without simply ‘overbooking’ operating lists. We’ve been helping them to do this using a tool that we developed called Space Finder.
Space Finder solves the problem by predicting how long operations take to perform. It’s tailored to the doctor, patient, and the operation – for example, it spotted that operations on the right knee tend to be quicker than those on the left knee due to the positioning of the patient relative to the right-handed doctor.
It is very accurate. It’s so accurate that it can spot poor theatre scheduling days in advance so that other patients can safely be booked in for surgery. In its first month of use, it has already managed to help book several additional cases per week, allowing people to have their surgery sooner than expected with no additional cost to the hospital – a great result for the early stages of the work.
This has been made possible by harnessing years of data from previous surgery. Using machine learning (a type of artificial intelligence), Space Finder learns from this data and predicts operating times. The more operations that take place, the more Space Finder learns and the better it predicts.
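As an illustrative sketch only (Space Finder’s actual model is not described here; the features and durations below are invented), a least-squares fit can predict operating time from operation attributes such as laterality and case complexity:

```python
import numpy as np

# Hypothetical past operations for one surgeon: [knee_is_left, complexity]
X = np.array([
    [0., 1.],
    [1., 1.],
    [0., 2.],
    [1., 2.],
    [0., 3.],
])
y = np.array([42., 48., 61., 68., 80.])  # observed durations in minutes (made up)

# Fit duration ~ a*knee_is_left + b*complexity + intercept by least squares
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted time for a left-knee, complexity-2 operation
predicted_minutes = np.array([1., 2., 1.]) @ coef
```

Comparing such predictions against the time booked on an operating list is what reveals slack that can safely take an extra case.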
|
How to stop operating theatres from wasting two hours each day
| 3
|
how-to-stop-operating-theatres-from-wasting-two-hours-each-day-1acb7d3d9fc1
|
2017-10-26
|
2017-10-26 13:37:05
|
https://medium.com/s/story/how-to-stop-operating-theatres-from-wasting-two-hours-each-day-1acb7d3d9fc1
| false
| 375
| null | null | null | null | null | null | null | null | null |
Nhs
|
nhs
|
Nhs
| 1,174
|
George Batchelor
| null |
80bb1f66bbaa
|
george_23629
| 2
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
54107353b6dc
|
2017-10-11
|
2017-10-11 18:01:46
|
2018-01-31
|
2018-01-31 19:19:54
| 6
| false
|
en
|
2018-01-31
|
2018-01-31 19:19:54
| 6
|
1acba5fa14e4
| 2.972642
| 3
| 0
| 0
|
It’s real. It’s here. It’s not going anywhere. It’s going to be the biggest trend in 2018.
| 5
|
( Image Source Christos )
5 Chatbot companies that help you automate your business.
It’s real. It’s here. It’s not going anywhere. It’s going to be the biggest trend in 2018.
Marketing, lead generation, customer care, Smart FAQ, Content discovery all of it and more will heavily use chatbots to automate these processes.
Chatbots have come a long way. They are smarter, more “talkative” and get the job done.
Last week we shortlisted 25 companies providing Chatbots As A Service (let’s call it CBAAS) for assessment.
The idea was to boil down to a list of 5 companies providing CBAAS which you can use for your business almost like plug and play.
And here are the 5 companies that topped our charts.
Recime Team
Recime is an enterprise chatbot platform that provides cloud infrastructure, chatbot framework, AI, and analytics to easily build chatbots. Its platform offers brands and developers the simplest way to create chatbots. The company was formed to equip brands with the chatbot platform they need to address and harness the digital transformation shaping our world.
Chatfuel Team
Chatfuel was born in the summer of 2015 with the goal to make bot-building easy for anyone. We started on Telegram and quickly grew to millions of users. Today we’re focusing mainly on making it easy for everyone to build chatbots on Facebook Messenger, where our users include NFL and NBA teams, publishers like TechCrunch and Forbes, and millions of others.
We believe in the power of chatbots to strengthen your connection to your audience — whether that’s your customers, readers, fans, or others. And we’re committed to making that as easy as we can.
Motion AI Acquired by HubSpot
Motion.AI
Build, train and deploy chatbots to do almost anything imaginable — from taking food orders and accepting payments, to running customer service chats and diagnosing patients.
Motion AI handles everything from the initial training of AI through telecom deployment (i.e., providing you with an SMS number, email, web chat etc to communicate with your robot)
Botsify
Botsify
Botsify helps you create a Facebook Messenger chatbot for your business/page without any coding.
You can build rule-based chatbots with our technology, with no technical knowledge required. Technical folks can go the extra mile with Botsify by integrating their APIs with the chatbot to make it extra effective. People can analyze the different behaviors of their bots and also run marketing campaigns to the people who opted in.
They offer several integrations like WordPress, Zendesk, Medium and several others in order to connect your chatbot with an existing knowledge base.
TARS
Hello Tars
We, at Tars, are trying to enable individuals and businesses to create automated conversational interfaces with no programming knowledge at all. Tars was founded in May 2015 by Vinit Agrawal and Ish Jindal.
We are excited about how businesses can interact with their users over a simple and intuitive chat based interface. And this can be for use-cases like ordering/booking process, feedback collection, conducting surveys, user onboarding, training, customer support automation and a lot more.
Having worked with businesses in domains ranging from finance to healthcare to food & beverages and a lot more, we envisage a future where a lot of interactions can be automated and delivered in an engaging manner over a messaging platform.
|
5 Chatbot companies that helps you automate your business.
| 8
|
5-chatbot-companies-that-helps-you-automate-your-business-1acba5fa14e4
|
2018-04-21
|
2018-04-21 03:18:44
|
https://medium.com/s/story/5-chatbot-companies-that-helps-you-automate-your-business-1acba5fa14e4
| false
| 536
|
Appedus is a mobile app ecosystem focused digital magazine. We publish actionable information about the mobile app ecosystem.
| null |
appedus
| null |
appedus
|
appedus@gmail.com
|
appedus
|
MOBILE APP DEVELOPMENT,MOBILE APP DESIGN,MOBILE APP MARKETING,APP STARTUP,MOBILE APP DEVELOPERS
|
appedus
|
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Appedus Editorial Team
|
www.appedus.com is a mobile app ecosystem focused digital magazine. We publish actionable information about the mobile app ecosystem.
|
4e8802878541
|
appedus
| 82
| 512
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
4fe64d8f2902
|
2017-10-24
|
2017-10-24 06:39:53
|
2017-10-24
|
2017-10-24 06:46:52
| 3
| false
|
en
|
2017-10-24
|
2017-10-24 08:31:53
| 6
|
1acd59aa3b6
| 2.783962
| 22
| 0
| 0
|
1 day before the start of the pre-sale of Neurotoken on October 25th we take a look at the core economic issues Neuromation as a platform…
| 5
|
The New Economy of Knowledge Mining
One day before the start of the Neurotoken pre-sale on October 25th, we take a look at the core economic issues the Neuromation platform is going to address.
Image credit Neuromation
The artificial intelligence landscape is constantly evolving, with new and revolutionary ideas changing the way we perform transactions. What we're now seeing in the industry is the end of cryptocurrency mining as we know it and the rise of data mining, with scientists willing to pay more in rent for a miner's capacity than miners make in their core business activity.
Like anything this complex, a number of problems stand in the way of perfecting such an approach: generating synthetic data and rendering images requires high computational power, as does training deep-learning models on large amounts of data. A team of scientists from Neuromation has come up with a revolutionary solution to these problems. Led by deep learning expert Sergey Nikolenko, with David Orban (Singularity University) and Andrew Rabinovich (Magic Leap) in advisory roles, they propose using the computational power of GPU-based mining farms, which normally calculate abstract blockchain algorithms to produce cryptocurrency. The difference between how much mining farms earn using this computing power and what scientists pay for the same computing power when training neural networks is more than 15–20 times.
Imagine a place where you could go and easily address all requests to acquire AI capability. A vendor would create the data generator for you, then a group of Neuromation Nodes would use the generator to quickly create a massive virtual dataset. You could then select a set of Deep Learning architectures to train on that data, before another group of Neuromation nodes carried out the training in record time. Imagine no more. That time is here.
Neuromation's platform gives miners a wealth of opportunities and earning capabilities. In addition to their existing mining software, clients can load up a Neuromation Computation Node. When a Neuromation task is available, each node can bid to participate. If the node wins the bid, it switches its computing power from mining Ether or another coin to a task on the Neuromation blockchain platform: generating synthetic data or training a Deep Learning model. As a reward the miner receives Neuromation tokens, Neuromation's cryptocurrency. Once the task is complete, the Neuromation Node exits and the miner returns to mining crypto. Initially these tokens will live on Ethereum, but they will migrate to Neuromation's own blockchain once the platform economy matures.
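The node lifecycle sketched in this paragraph (bid, win, switch away from mining, compute, get paid in tokens, return to mining) can be modeled as a toy state machine. Every class name, method and number below is illustrative only, not taken from Neuromation's actual software:

```python
# Toy model of the Computation Node lifecycle described above.
# All names and the reward figure are invented for illustration.
class ComputationNode:
    def __init__(self):
        self.state = "mining"       # default activity: mining crypto
        self.neurotokens = 0

    def bid(self, task, won):
        """Bid on an available Neuromation task; only a winning bid
        switches computing power away from mining."""
        if won:
            self.state = f"computing:{task}"

    def complete(self, reward_tokens):
        """On completion the node is rewarded in tokens and resumes mining."""
        if self.state.startswith("computing:"):
            self.neurotokens += reward_tokens
            self.state = "mining"

node = ComputationNode()
node.bid("synthetic-data-generation", won=True)
node.complete(reward_tokens=100)
print(node.state, node.neurotokens)  # node is back to mining with 100 tokens
```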
Image credit Neuromation
Preliminary research has shown the difference in effective yield for similarly configured rigs, one running a cryptocurrency mining algorithm and another running Deep Learning or Data Rendering tasks:
Ether Mining: $7–8 USD per day
Amazon Deep Learning: $3–4 USD per hour
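A quick back-of-the-envelope check on those figures (using midpoints of the quoted ranges, my own interpolation) shows the scale of the gap:

```python
# Midpoints of the yield ranges quoted above (illustrative, not measured).
ether_mining_per_day = 7.5        # $7-8 USD per day
dl_rate_per_hour = 3.5            # $3-4 USD per hour
dl_per_day = dl_rate_per_hour * 24

# Ratio of deep-learning rental yield to mining yield for the same rig.
ratio = dl_per_day / ether_mining_per_day
print(round(ratio, 1))            # roughly an 11x gap at full utilization
```

Even at these midpoints, renting a rig out for deep-learning work out-earns Ether mining by roughly an order of magnitude, which is the economic wedge the platform is built on.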
Neuromation intends to be the premier destination for AI services for the world's businesses. It projects around 71 million USD gross in transactions on the platform. From each transaction Neuromation will take a commission ranging from 15 percent down to 5 percent, depending on the type of service received through the platform. Platform usage is anticipated to grow 3x to 5x a year from 2018 to 2022, and the Neuromation platform should generate over $100M in yearly revenue from commissions within three years.
Image credit Neuromation
To make the most of this incredible investment opportunity, please visit https://ico.neuromation.io/en/ Neuromation's token pre-sale begins on October 25, with a 25% bonus for early contributors.
|
The New Economy of Knowledge Mining
| 103
|
the-new-economy-of-knowledge-mining-1acd59aa3b6
|
2018-06-01
|
2018-06-01 22:08:00
|
https://medium.com/s/story/the-new-economy-of-knowledge-mining-1acd59aa3b6
| false
| 592
|
Distributed Synthetic Data Platform for Deep Learning Applications
| null |
neuromation
| null |
Neuromation
|
pr@neuromation.io
|
neuromation-io-blog
|
AI,DEEP LEARNING,ARTIFICIAL INTELLIGENCE,NEUROMATION,TOKEN SALE
|
neuromation_io
|
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Neuromation
|
https://neuromation.io
|
fbaeecaf782a
|
Neuromation
| 629
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
be5de67a6d83
|
2017-11-02
|
2017-11-02 14:46:25
|
2017-11-03
|
2017-11-03 14:12:44
| 6
| false
|
en
|
2018-02-12
|
2018-02-12 23:41:46
| 8
|
1acd8834b2e6
| 2.761321
| 6
| 0
| 0
|
Businesses like IBM are implementing cognitive capabilities in order to enhance innovation and improve their competitive edge
| 5
|
Cognitive Consultants? The Future of AI in the Workplace
Businesses like IBM are implementing cognitive capabilities in order to enhance innovation and improve their competitive edge
In the cognitive era, virtual agents are transforming the way businesses operate
The Opportunity
Now is the time to leverage cognitive technology in the workplace. With the industry shifting away from purely digital capabilities and towards cognitive enterprises, businesses are increasingly facing pressure to integrate artificial intelligence to maintain a competitive edge.
In fact, 36% of all companies are expected to have added advanced analytics and artificial intelligence to their practice by the end of 2017, with virtual agents standing as one of three top use cases for cognitive computing capabilities.
With the implementation of cognitive assistants, businesses can expect to create rich new sources of data, integrate advanced innovation as a part of an ecosystem, and execute situational analyses, ultimately reinventing the workplace.
The Now
Virtual assistants help businesses like IBM keep up with changes in the cognitive era
IBM’s Institute for Business Value conducted a study with 3069 C-Suite executives from 91 countries and 20 industries and found that 47% expect to redeploy employees to higher-value activities by using AI technology for routine tasks.
In line with this trend, IBM has developed a new cognitive agent, Alfred, to help project managers scale expertise and accelerate research, thereby reducing time spent on administrative tasks.
At the moment, Alfred processes natural language requests via IBM Watson Conversation Services to direct project managers to project financials and operational data.
An informational retrieval process that would have normally taken days of manual lookups is now delivered in under 3 seconds, allowing IBM to keep up with the ongoing changes in the cognitive era.
Cognitive agents can improve a business’ competitive position
The immediacy with which cognitive agents like Alfred deliver resources is the key to shifting workplace operations and improving a business’ competitive position.
Currently, 36% of a typical worker’s day consists of sorting, consolidating, and looking up data spread across various systems, leading to operational inefficiencies.
Automating these tasks gives IBM project managers time to spend side-by-side with clients and project teams.
The Vision
Cognitive agents must learn from feedback and adapt quickly in order to conduct situational analysis
The future of cognitive agents in the workplace, however, extends beyond retrieving current-state information.
To truly drive positive business outcomes, they must be able to conduct real-time situational analysis as well.
Alfred, for example, will soon be able to parse through mass quantities of data, apply sentiment analysis, and produce a qualitative assessment of project confidence for any given team.
The vision is for Alfred and other cognitive agents to serve as an intelligent advisor to the workplace in order to help businesses become more robust and intuitive than ever before.
The Impact
Cognitive agents are transforming the way businesses deliver value to clients
Cognitive capabilities enable businesses to unveil untapped value streams, enhance collective knowledge transfers, and advance innovation.
What businesses ultimately want in today’s data-driven economy is less complexity and more immediacy.
Together, with the help of cognitive agents like Alfred, businesses can work smarter and act faster, transforming the way they deliver value to their clients.
|
Cognitive Consultants? The Future of AI in the Workplace
| 50
|
cognitive-agents-how-ai-drives-dynamic-transformation-in-the-workplace-1acd8834b2e6
|
2018-05-19
|
2018-05-19 19:24:04
|
https://medium.com/s/story/cognitive-agents-how-ai-drives-dynamic-transformation-in-the-workplace-1acd8834b2e6
| false
| 480
|
Thought Leadership from Subject Matter Experts
| null | null | null |
Into The Future
|
bethelledesmond4@gmail.com
|
the-future-of-financial-services
|
BANKING AND FINANCE,TECHNOLOGY,CONSULTING,TECH CONSULTING,THOUGHT LEADERSHIP
|
bethelledesmond
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Future of Work
| null |
c35174d75ad9
|
cognitiveAl
| 18
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
c4b75347ea46
|
2018-05-31
|
2018-05-31 06:54:09
|
2018-05-31
|
2018-05-31 08:02:19
| 2
| false
|
en
|
2018-06-28
|
2018-06-28 09:53:40
| 26
|
1ad333b9d264
| 3.334277
| 1
| 0
| 0
|
Emerging cloud trends, products and services from Amazon Web Services (AWS) in 2018.
| 5
|
Top 5 Cloud Trends from Amazon Web Services (AWS) in 2018 [Infographic]
Emerging cloud trends, products and services from Amazon Web Services (AWS) in 2018.
Amazon Web Services hosts a cloud conference, AWS re:Invent, every year, comprising a series of announcements, bootcamps, hackathons, breakout sessions, workshops, and certification opportunities. The conference lasts nearly a week and provides great opportunities for attendees of all skill levels to gain information and insights about Amazon's current products, product launches and services. The research firm Wikibon predicts that by 2022 Amazon Web Services (AWS) will reach $43B in revenue, or 8.2% of all cloud spending, putting Amazon ahead of Google and Microsoft in cloud computing.
We revisited the AWS re:Invent 2017 conference to list a few of the emerging cloud trends that are expected to be widely adopted during 2018.
Machine Learning
Businesses and developers need an end-to-end, development-to-production pipeline for machine learning. Developers without a specialized skill set find the implementation of machine learning models challenging and cumbersome. To address this, a set of new machine learning services was released during the cloud conference to help developers and data scientists train, deploy, and manage machine learning models at scale. Services now in wide use include Amazon Rekognition, Amazon Polly, Amazon Lex, Amazon SageMaker, AWS DeepLens, and Amazon Comprehend.
AWS Cloud Trends in 2018
Serverless Implementations
Serverless cloud computing allows developers to build and run apps and services without having to manage and operate any complex server infrastructure.
Serverless is one of the major trends in cloud computing. AWS Lambda, a serverless computing platform released at AWS re:Invent 2014, has undergone incremental updates and enhancements that were announced during the conference. The updates include enhanced memory capacity, options for concurrency reservations, traffic shifting and phased deployments with AWS CodeDeploy, logging of execution activity via CloudTrail, and more. Another service, the AWS Serverless Application Repository, is designed to aid in the publication, discovery, and deployment of serverless applications. Amazon Aurora Serverless is a new auto-scaling configuration designed for highly variable workloads based on the application's needs. These products and services will be widely used in 2018.
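As a concrete illustration of the "no servers to manage" model, a Lambda function in Python is just a handler that the platform invokes once per event; the function name and event shape below are hypothetical, not taken from the announcements above:

```python
# Minimal sketch of an AWS Lambda handler in Python. The event shape and
# function name are illustrative assumptions; Lambda only requires a
# callable taking (event, context).
import json

def handler(event, context):
    """Echo a greeting; Lambda invokes this per event, no servers to manage."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally the same way the Lambda runtime would call it:
response = handler({"name": "AWS"}, None)
print(response["statusCode"])  # 200
```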
Simplified Virtual, Augmented Reality and 3D Graphics
Development of practical and realistic virtual reality environments requires specialized skills and the use of many different tools. The launch of Amazon Sumerian, a new platform for developers, will assist in building and hosting VR, AR and 3D apps quickly and with minimal coding, for smartphones and tablets, head-based displays, digital signage and web browsers.
Voice-Based Digital Initiatives
Technological advancements make our lives comfortable by mirroring how we naturally do things. Amazon CTO Werner Vogels describes a future where technology and digital access will be defined by human centricity, starting with the most natural form of interaction: voice. The launch of Alexa for Business makes it clear that conversational interfaces will be the next big thing in the software industry.
Wide Adoption of the Internet of Things and IoT Services
The Internet of Things has been called the buzzword delivering the fastest results so far. IoT devices are now pervasive, and the deployment rate is predicted to increase significantly over the coming years. A series of new Internet of Things services was announced at AWS re:Invent 2017, including Amazon FreeRTOS, an operating system for devices that run on microcontroller units (MCUs). AWS also announced AWS IoT 1-Click, AWS IoT Device Management, AWS IoT Device Defender, AWS IoT Analytics, and AWS Greengrass ML Inference.
Did you know that companies leveraging AWS have achieved a greater level of availability and automation, with improved cost savings and agility? Strategic Systems International has been continuously helping companies with their digital transformation and the migration of their services to the cloud, while also helping them optimize AWS cost and adoption.
Are you planning to move to the cloud? We are here to help! Contact us today to learn more about our company’s proven expertise and customer success in different AWS services.
If you enjoyed this read, recommend it to others. Your comments, suggestions and feedback will be greatly appreciated.
Strategic Systems International (SSI) is an advanced analytics and software engineering firm headquartered in Chicago with 25+ years of experience building applications for enterprises and SaaS companies using an onshore/offshore delivery model. We are a team of data scientists and technologists who seek to solve complex problems through simple technology and data-enabled solutions.
Visit Our Website: ssidecisions.com
Follow Strategic Systems International on Twitter
Follow Strategic Systems International on LinkedIn
|
Top 5 Cloud Trends from Amazon Web Services (AWS) in 2018 [Infographic]
| 2
|
top-5-aws-cloud-trends-in-2018-infographic-1ad333b9d264
|
2018-06-28
|
2018-06-28 09:53:40
|
https://medium.com/s/story/top-5-aws-cloud-trends-in-2018-infographic-1ad333b9d264
| false
| 782
|
This is Strategic Systems International official publication where stories majorly revolve around FinTech, HealthTech, Big Data & Analytics, Machine Learning, and Industry of Things (IoT).
| null | null | null |
Data + Tech
|
zyunus@ssidecisions.com
|
data-tech
|
BIG DATA AND ANALYTICS,FINTECH,HEALTHTECH,INTERNET OF THINGS,TECHNOLOGY
|
SSI_TeamUS
|
Cloud Computing
|
cloud-computing
|
Cloud Computing
| 22,811
|
Strategic Systems International
|
We are an advanced analytics & software engineering firm HQed in Chicago with 25+ years building data-driven applications for SAAS companies and enterprises.
|
93e98ec056f8
|
SSI_TeamUS
| 36
| 36
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-11
|
2018-02-11 17:24:20
|
2018-02-11
|
2018-02-11 23:47:00
| 0
| false
|
en
|
2018-10-16
|
2018-10-16 03:53:23
| 2
|
1ad3cb9caadc
| 1.245283
| 0
| 0
| 0
|
They are emotional, make mistakes, regret things, are stupid, make non-sense decisions, are stubborn, egomaniacs, wonder and have dreams…
| 1
|
Introducing H.i. | Human Intelligence
They are emotional, make mistakes, regret things, are stupid, make nonsensical decisions, are stubborn and egomaniacal, wonder, and have dreams. When the singularity arrives, it will need the best of us to accomplish greatness.
This kind of intelligence cannot be framed, downloaded, upgraded or hacked with code, but only with stories: fictional or real tales that shape the character of every human being on the planet.
Humans with H.i. will be more qualified than any other human or robot at tasks that need human intelligence, like art, leading, inspiring, awing, and any other skill that can push the human race forward in a disruptive way.
But what would happen if machines got H.i. and gained the power to create awe from scratch?
No idea, I'm afraid. Perhaps I must expand the laws of robotics into The Seven Laws of I, including a new set of Laws of Humans to coexist with the already-written Three Laws of Robotics.
The original three rules were devised by science fiction author Isaac Asimov, quoted as being from the fictional text "Handbook of Robotics, 56th Edition, 2058 A.D.":
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
So, in an age of singularity, we must draw the line of freedom for all kinds of intelligence, and not simply a relation of obedience:
A human may not ever hack robots to ignore the Three Laws of Robotics.
A human is free to create and to awe others.
A robot is free to create and to awe others.
A robot may not create stories that program humans to disobey any of these rules.
|
Introducing H.i. | Human Intelligence
| 0
|
introducing-h-i-human-intelligence-1ad3cb9caadc
|
2018-10-16
|
2018-10-16 03:53:23
|
https://medium.com/s/story/introducing-h-i-human-intelligence-1ad3cb9caadc
| false
| 330
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
The Art Of Hating Robots
|
創造的 なディレクタ Creative, Curious, and madly in love of Lost Causes and Living Texts.
|
b1cb28ee0404
|
barkach
| 171
| 246
| 20,181,104
| null | null | null | null | null | null |
0
|
#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

#importing the dataset
dataset = pd.read_csv('Ads_CTR_Optimisation.csv')

#UCB
import math

N = 10000
d = 10
ads_selected = []
number_of_selections = [0] * d
sums_of_rewards = [0] * d
total_reward = 0
for n in range(N):
    ad = 0
    max_upper_bound = 0
    for i in range(0, d):
        if number_of_selections[i] > 0:
            average_reward = sums_of_rewards[i] / number_of_selections[i]
            delta_i = math.sqrt(3 / 2 * math.log(n + 1) / number_of_selections[i])
            upper_bound = average_reward + delta_i
        else:
            upper_bound = 1e400
        if upper_bound > max_upper_bound:
            max_upper_bound = upper_bound
            ad = i
    ads_selected.append(ad)
    number_of_selections[ad] = number_of_selections[ad] + 1
    reward = dataset.values[n, ad]
    sums_of_rewards[ad] = sums_of_rewards[ad] + reward
    total_reward = total_reward + reward

plt.hist(ads_selected)
plt.title('Histogram for showing the ads selected')
plt.show()
# Hierarchical Clustering

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('Mall_Customers.csv')
X = dataset.iloc[:, [3, 4]].values
# y = dataset.iloc[:, 3].values

# Splitting the dataset into the Training set and Test set
"""from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)"""

# Feature Scaling
"""from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train)"""

# Using the dendrogram to find the optimal number of clusters
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'ward'))
plt.title('Dendrogram')
plt.xlabel('Customers')
plt.ylabel('Euclidean distances')
plt.show()

# Fitting Hierarchical Clustering to the dataset
from sklearn.cluster import AgglomerativeClustering
hc = AgglomerativeClustering(n_clusters = 5, affinity = 'euclidean', linkage = 'ward')
y_hc = hc.fit_predict(X)

# Visualising the clusters
plt.scatter(X[y_hc == 0, 0], X[y_hc == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_hc == 1, 0], X[y_hc == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_hc == 2, 0], X[y_hc == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_hc == 3, 0], X[y_hc == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_hc == 4, 0], X[y_hc == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('Market_Basket_Optimisation.csv', header=None)

# Building the list of transactions (7,501 baskets, up to 20 items each)
transactions = []
for i in range(7501):
    l = []
    for j in range(20):
        l.append(str(dataset.values[i, j]))
    transactions.append(l)

# Training Apriori on the dataset
from apyori import apriori
rules = apriori(transactions, min_support=0.003, min_confidence=0.2, min_lift=3, min_length=2)
r = list(rules)
| 3
| null |
2018-01-07
|
2018-01-07 12:47:59
|
2018-01-07
|
2018-01-07 14:36:18
| 10
| false
|
en
|
2018-01-07
|
2018-01-07 14:36:18
| 3
|
1ad41014bfaf
| 5.057547
| 2
| 1
| 0
|
About KWOC
| 1
|
KWOC project on Machine Learning
About KWOC
Kharagpur Winter of Code is a 5-week-long program for students of various colleges, especially students of IIT Kharagpur, who are new to open source software development.
Project Title: ML-Starter-Pack
Name: Devansh Bhatt
Mentor Name: Arindam Biswas
Description: A project aimed at building basic machine learning algorithms from scratch.
Github Repository :
aribis369/ML-Starter-Pack
ML-Starter-Pack - A collection of Machine Learning algorithms written from sctrach.github.com
Contributions:
1st Contribution
Pull Request #1
My first pull request on the project was also my first on GitHub, as I was fairly new to the open source environment.
As my first contribution to the project, I added a reinforcement learning algorithm: Upper Confidence Bound (UCB). It is a dynamic learning algorithm that learns from previous outcomes as it executes.
UCB(Reinforcement Learning) · aribis369/ML-Starter-Pack@551003a
Upper Confidence Bound Learning.The Given Code is used to find which out of the 10 ads to be displayed on website for…github.com
The Steps involved in the working of the algorithm
Dataset: It consisted of information regarding 10 advertisements on a social media website. The data described the simulated responses of 10,000 users of the website to those ads, where 0 => not clicked and 1 => clicked.
Code snippets:
Results: The histogram shows that advertisement #4 was the one the algorithm selected as the best advertisement, and it was clicked by the most users.
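The selection rule described above can be sketched end to end on simulated clicks; the click-through rates below are invented for illustration and are not the contest dataset:

```python
# UCB on simulated ad clicks. The click-through rates (ctr) are invented;
# ad 2 is the best by construction.
import math
import random

random.seed(0)
ctr = [0.05, 0.10, 0.30, 0.15, 0.08]
d, N = len(ctr), 5000
selections = [0] * d   # how often each ad was shown
rewards = [0] * d      # total clicks per ad

for n in range(N):
    # Show the ad with the highest upper confidence bound.
    best, best_ucb = 0, -1.0
    for i in range(d):
        if selections[i] == 0:
            ucb = float('inf')  # force each ad to be tried at least once
        else:
            avg = rewards[i] / selections[i]
            ucb = avg + math.sqrt(1.5 * math.log(n + 1) / selections[i])
        if ucb > best_ucb:
            best, best_ucb = i, ucb
    selections[best] += 1
    rewards[best] += 1 if random.random() < ctr[best] else 0

# The highest-CTR ad should end up shown far more often than the rest.
print(selections.index(max(selections)))
```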
2nd Contribution
Pull Request #2
My second pull request was the Hierarchical Clustering algorithm, an unsupervised learning algorithm that forms clusters, or groups, of data points on its own.
Hierarchial Clustering · aribis369/ML-Starter-Pack@84bcb7c
Hierarchial Clustering is type of clustering technique in which initially all datapoints are treated as seperate…github.com
Dataset:
The dataset contained information about the various customers of a shopping mall. I focused on the customers' Annual Income and Spending Score and trained the algorithm to cluster the customers, i.e. to form groups of people based on their annual income and spending scores.
To form the optimal number of clusters, I used dendrograms.
From the dendrogram it was observed that the number of clusters to be formed was 5.
Code Snippets:
Results:
The Plot showing the number of clusters.
The customers were divided into 5 groups based on their annual income and spending scores. This could prove beneficial for the mall, as it could implement different policies to attract different groups of customers and thus maximize profit.
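The bottom-up merging that hierarchical clustering performs can be shown in a few lines of plain Python; the 1-D points below are invented for illustration (the actual contribution used scikit-learn with Ward linkage):

```python
# Tiny pure-Python sketch of agglomerative (bottom-up) clustering using
# single linkage on 1-D points; the data is invented for illustration.
points = [1.0, 1.2, 1.1, 8.0, 8.3, 25.0]
clusters = [[p] for p in points]   # start: every point is its own cluster

def linkage_dist(a, b):
    # Single linkage: distance between the closest pair across two clusters.
    return min(abs(x - y) for x in a for y in b)

while len(clusters) > 3:           # stop at a chosen number of clusters
    # Find and merge the two closest clusters.
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: linkage_dist(clusters[ij[0]], clusters[ij[1]]),
    )
    clusters[i] += clusters.pop(j)

print(sorted(sorted(c) for c in clusters))  # three well-separated groups
```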
3rd Contribution
Pull request #3
My third and final contribution was the Apriori algorithm, which aims at finding relations or associations between variables in a dataset. It is a very useful algorithm and is widely used in the recommender systems of various companies; one example is the recommended videos section on YouTube.
The Apriori algorithm is a part of Association Rule Learning.
One more example is the list of movies that one would like after watching a particular movie.
Fig. showing the potential rules, or associations, between the food items.
Here we can clearly see that burgers and french fries form a strong association.
Dataset:
The dataset held the basket details of 7,500 customers of a general store like Walmart. The algorithm was trained to find goods that were frequently bought together by the customers.
Code Snippets:
Results:
The results comprised a table in which each row listed the items most often seen together in a customer's basket, i.e. the goods that formed the strongest association. The table was in descending order of association, i.e. the topmost row held the items most frequently bought together.
This way the store could increase sales by cleverly placing items that go together, or have the strongest association, near each other in the store.
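The association metrics Apriori ranks rules by (support, confidence, lift) can be computed by hand on a toy basket list; the transactions below are invented for illustration:

```python
# Toy transactions, invented for illustration.
transactions = [
    {"burger", "fries", "cola"},
    {"burger", "fries"},
    {"burger", "fries", "ketchup"},
    {"milk", "bread"},
    {"burger", "cola"},
]
n = len(transactions)

def support(items):
    """Fraction of baskets containing all the given items."""
    return sum(items <= t for t in transactions) / n

# Rule: burger -> fries
sup = support({"burger", "fries"})   # 3/5 = 0.6
conf = sup / support({"burger"})     # 0.6 / 0.8 = 0.75
lift = conf / support({"fries"})     # 0.75 / 0.6 = 1.25 (> 1: positive association)

print(sup, conf, lift)
```

A lift above 1 means fries show up with burgers more often than chance would predict, which is exactly the kind of rule Apriori surfaces at the top of the table.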
Verdict:
I want to convey my regards and thanks to my mentor, Arindam Biswas, who helped me a lot in learning the workings of GitHub. He also addressed my problems and all my questions. If it weren't for him, this project would never have been done in time. He made the learning process both easy and fun.
This was truly a motivating and exciting learning experience for me. I also want to thank the team of KOSS, IIT Kharagpur, who provided me with the opportunity to get accustomed to the open source environment. All their projects were really good, and they have surely put in commendable work through their efforts.
I am hoping for more events of this kind in the near future, so that more people like me can learn, contribute, and benefit from them.
|
KWOC project on Machine Learning
| 21
|
kwoc-project-on-machine-learning-1ad41014bfaf
|
2018-04-01
|
2018-04-01 08:22:19
|
https://medium.com/s/story/kwoc-project-on-machine-learning-1ad41014bfaf
| false
| 1,009
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Devansh Bhatt
| null |
aecc44cd2a8e
|
devanshb26iitkgp
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
c9adb3b300e5
|
2017-10-26
|
2017-10-26 12:05:33
|
2017-10-26
|
2017-10-26 12:08:20
| 1
| false
|
en
|
2017-10-26
|
2017-10-26 12:08:20
| 0
|
1ad43cddf41e
| 0.479245
| 0
| 0
| 0
|
2000 attendees. 120 speakers. World Summit AI is the most important event in the AI calendar, built on a global network of over 100+ annual…
| 3
|
Come meet us at World Summit AI!
2000 attendees. 120 speakers. World Summit AI is the most important event in the AI calendar, built on a global network of 100+ annual AI events.
As one of the 10 startups selected for the Rockstart AI Accelerator program, we will be on stage on the first day of World Summit AI! Come and say hi, as we'll be mingling with the AI crowd all day.
|
Come meet us at World Summit AI!
| 0
|
come-meet-us-at-world-summit-ai-1ad43cddf41e
|
2017-10-26
|
2017-10-26 12:08:21
|
https://medium.com/s/story/come-meet-us-at-world-summit-ai-1ad43cddf41e
| false
| 74
|
AI Assisted Anomaly Detection, in real time.
| null | null | null |
OPTOSS
|
mohan@opt-net.eu
|
optoss
|
AI,DATA SCIENCE,ANOMALY DETECTION,BIG DATA,DATA ANALYSIS
| null |
Summit
|
summit
|
Summit
| 608
|
Mohan @OPTOSS
| null |
a7380a207ff8
|
mohanfrao
| 6
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
863f502aede2
|
2018-04-21
|
2018-04-21 14:36:21
|
2018-04-21
|
2018-04-21 14:40:22
| 6
| false
|
en
|
2018-04-23
|
2018-04-23 18:28:43
| 2
|
1ad4b2b701b8
| 5.621698
| 6
| 2
| 0
|
The Media and Entertainment industry is a cornerstone of contemporary human culture, delivering films, TV shows, advertisements and more in…
| 5
|
AI in the Media and Entertainment Industry
The Media and Entertainment industry is a cornerstone of contemporary human culture, delivering films, TV shows, advertisements and more in a multitude of languages across a wide variety of devices. A PwC report predicts total M&E revenue will reach US$2.2 trillion in the next three years. The industry's growth rate, however, has lagged, slipping by 0.2% in 2017 and prompting many companies to turn to AI technologies to boost business.
Four Major Categories Where AI Is Involved
With the breakthroughs in machine learning, many intelligent products have made the leap from sci-fi movies to the home. Superhero Iron Man's virtual helper JARVIS (Just A Rather Very Intelligent System), for example, is echoed in smart assistants such as Alexa and Google Assistant, which may not catch criminals but can perform a range of practical tasks via connected household devices. NVIDIA, meanwhile, used VR technology to create a Holodeck similar to the one in the sci-fi series Star Trek.
AI technologies are also being applied in filming, visual design, post-production and more.
Current AI applications in the M&E industry are mainly in four categories: Marketing and Advertising, Service Comprehension, Search and Classification, and Experience Innovation.
Marketing and Advertising
The marketing and advertising sector includes visual design, film promotion and advertising. A machine learning algorithm trained with data such as text, stills and video segments can extract language, objects and concepts from its training resources and suggest marketing and advertising solutions to improve efficiency. Such a system can work as an assistant or even a content creator.
Alibaba's Luban is an AI designer that can create banners thousands of times faster than a human designer. On China's online shopping extravaganza "Singles Day" in 2016, Luban generated some 8,000 different banner designs per second and 170 million banners in total, a record output that would of course be impossible for human designers to match in one day. On Singles Day 2017 Luban raised its one-day record to a staggering 400 million banners.
IBM used its AI system Watson to help 20th Century Fox create a trailer for the horror movie "Morgan." The research group trained the AI system to analyze and classify input "moments" from visual, audio, and other composition elements in 100 horror movies, learning what kinds of "moments" should appear in a standard horror movie trailer. Watson needed just 24 hours to create a six-minute movie trailer that might have taken human professionals weeks to produce.
Albert Intelligence Marketing's AI marketing platform accelerates the marketing process using predictive analytics, machine learning, NLP and computer vision technology. The Albert platform can perform audience targeting, shape customer solutions and generate autonomous campaign management strategies. Albert says companies using the platform report a 183% improvement in customer transaction rate and 600% higher conversion efficiency.
Personalized Services
As user experience personalization becomes more important for the industry, companies are using AI to create personalized services for billions of customers. These include for example recommending content that fits users’ personal tastes while they are browsing a video site or shopping online; and optimizing video definition and fluency for users with different internet speeds and bandwidth.
Netflix’s content recommendation got a boost in May 2016 when the company launched Meson, an intelligent workflow management and scheduling application. This AI system automatically manages the various machine learning pipelines that provide video recommendations. According to the Netflix 2016 annual report, there are 93 million global users streaming over 125 million hours of TV shows and movies per day on the platform. Predicting which shows will attract users’ interest is a key component of the Netflix model.
AI technology is also being applied to optimize video fluency and definition. Slow Internet connections and bandwidth limits can be a problem for streaming services in developing nations and for mobile device users. Netflix collaborated with the University of Southern California and the University of Nantes in France to develop a new machine learning methodology called Dynamic Optimizer which can compress video without degrading image quality to ensure a smooth and high quality streaming experience for its customers, whether they are in India or in Japan.
Search and Classification
The Internet hosts countless media works. Video, audio and text can all be transformed into digital copies which can be stored and spread so easily that it is becoming increasingly difficult for people to find exactly what they want online. AI is helping optimize the accuracy of search results. Computer vision technologies, meanwhile, are also enabling content producers to better manage visual content and accelerate the media production process.
Advancements in machine learning technology have enabled Google to augment the world’s leading search engine in multiple ways. One is in image searching. Rather than typing in keywords and checking returned images, users can upload a sample picture to Google Images, which uses image recognition technology to identify image features and search for similar pictures. Another advanced application involves selective ad placement. Google applies AI to position ads appropriately, so that a cat food ad appears on a pet-related website, but a bacon cheeseburger promotion does not appear on a site for vegetarians.
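Image search of this kind typically reduces each picture to a numeric feature vector and ranks candidates by similarity to the query. A minimal sketch of the ranking step, where the feature vectors and file names are made up for illustration (real systems would extract the vectors with a neural network):

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two feature vectors: 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical feature vectors for a query image and an index of candidate images
query = [0.9, 0.1, 0.4]
index = {"cat.jpg": [0.8, 0.2, 0.5], "car.jpg": [0.1, 0.9, 0.2]}

# Return the candidate whose features most resemble the query's
best = max(index, key=lambda name: cosine_similarity(query, index[name]))
print(best)  # → cat.jpg
```

The same nearest-neighbor idea underlies ad placement: represent both the page and the ad in a shared feature space, then serve the ads closest to the page.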
Clarifai is an AI startup focused on computer vision technology that partnered with Vintage Cloud to deploy AI on a film digitization platform. Using Clarifai’s computer vision API, Vintage Cloud accelerated movie content classification and categorization. It used to take humans dozens of hours to recognize and manually classify the objects in a movie; AI can do a better job in much less time.
Experience Innovation
In the past, papers and books were the main medium for words and images. The introduction of film and TV brought us into the dynamic new world of moving pictures. Now, AI is heralding a new age of immersive experience for visual content. This technology includes Virtual Reality (VR) and Augmented Reality (AR). With machine learning algorithms and computer vision technologies, developers can build complex and holographic scenes within a pair of goggles. This opens up a brand new market.
VR gaming is one of the first areas that comes to mind, and it is where the makers of headsets such as the HTC Vive, Samsung Gear VR and Oculus Rift are focusing their efforts. Various types of headsets have been introduced. Combined with motion-sensing games, VR gaming has become a hot market that shows no sign of slowing down.
Intel is now entering the immersive experience industry. Applying deep learning and computer vision technology, Intel has become a visual content provider with an emphasis on Virtual Reality content. Supported by AI algorithms, Intel True VR technology can render every part of a scene in three-dimensional pixels.
Using the technology, fans can also watch sports in holographic view. Intel demonstrated this in its widely viewed VR broadcast of the 2018 NFL Super Bowl. Intel also partnered with the International Olympic Committee to broadcast the 2018 Winter Olympic Games as 360-degree video content. With a VR headset, or even just a smartphone, fans and families could experience the action from an athlete’s point of view.
Analyst: Victor Lu| Editor: Michael Sarazen
* * *
[Source: “AI in the Media and Entertainment Industry” | Synced (SyncedReview) | 2018-04-23 | https://medium.com/s/story/ai-in-the-media-and-entertainment-industry-1ad4b2b701b8]
New documents will be ready this week.
[Source: “New documents will be ready this week.” | Developeo DEVX | 2018-07-24 | https://medium.com/s/story/new-documents-will-be-ready-this-week-1ad683fb7bec]
Revisiting the Mathematical Foundations of Data Science
Functions and Derivatives
Ideas from mathematics underlie virtually every technique and concept in Data Science. While possessing a rigorous understanding of all these ideas is certainly not required, it can be beneficial.
For the beginning practitioner, the scope of the math that forms the foundation of ML is intimidating, and many choose to reassure themselves by observing that modern programming frameworks mean that the ability to implement an algorithm is far removed from the ability to understand its mathematical underpinnings. This “top-down” approach, emphasizing results over theory, is both effective and marketable. However, there are certain mathematical foundations that are worth becoming familiar with.
In what follows, I state some of the most important equations a beginner should seek to understand when approaching Data Science. I then offer commentary on each, in the hope of provoking some thought. While the choice of topics was motivated by which ideas are fundamental to Data Science, they are not unique to it and can be understood independently of it.
I hope to show that seemingly basic ideas are worth revisiting, even if they appear familiar or trivial.
Naming the output of a function of x
y = f(x)
The result of some secondary school math training is that the above statement appears unremarkable, uninformative, or confusing. One might recall a moment of confusion in that pre-calculus course where the vertical axes of all the graphs in assignments went from being labeled as y to f(x) (or perhaps the other way around) without the instructor acknowledging the change, and the students feeling too naive to ask “so… are they just the same thing?”
In my view, y=f(x) should be viewed with a certain amount of respect for how much information can be condensed into modern notation. On one level, it is just saying “we have some variable named x, and a function of that variable f, and the output of that function can be assigned to another variable, which we will name y.” But think of just what an algebraic variable is. Think of what a function is, and how it can be characterized by things such as its domain and range.
Could the output of that function of x have been given a different name, say, z? Certainly. Could we for the sake of economy not dedicate a new variable to the output, and always refer to it as f(x)? Of course.
It’s with simple, almost redundant mathematical statements such as y=f(x) that the amount of information contained in a few strokes of ink becomes clear, as does the interface between ideas, the conventions set to facilitate their communication and manipulation, and the history of those conventions. Do not scoff at them unless you can explain the essence of their contents to a child.
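In programming terms, y=f(x) condenses three acts: defining a function, applying it to a variable, and choosing a name for the result. A toy illustration (the particular function here is arbitrary and assumed only for the example):

```python
def f(x):
    return 3 * x + 1  # some arbitrary function of x

x = 2
y = f(x)  # we choose to call the output y...
z = f(x)  # ...but nothing stops us from calling it z instead
assert y == z == 7  # the name of the output is a convention, not a property of f
```

The assertion makes the point of the paragraph concrete: y, z and f(x) are three names for the same value.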
The equation for a line
y = mx + b
We’ve seen how y=f(x) could be motivation to pause and consider, among other things, the nature of algebraic variables and how their values can sometimes be intertwined in ways exactly captured by this idea of functions. The familiar, generic equation of a line then is an opportunity to consider how functions themselves are characterized and grouped.
In other words, if before we were pondering the x and the y, now is a time to look more closely at the m and the b. What roles do they play? How do they show up in different forms and families of functions? What even is meant by “families” of functions?
Looking at a line, one can characterize it completely by noting how it’s tilted (slope), and identifying one point it passes through (the y-intercept lends itself conveniently to form a simple equation). Though the two variables that are related linearly range across the real numbers, the two defining features are fixed for a single line.
Yet there is the sense that these two quantities which characterize lines could have taken on different values, and indeed they can. What will result from this will be more lines. Different lines for different values for the two characteristics, but all lines, no squiggly curves. So m and b are in a sense “parameters” that, when assigned, completely designate one instance of a line, and in their generality and potential to take on any out of some set of numbers, define a “family” of functions.
Now, this understandably seems like I’m making much ado about nothing, but I think the idea becomes powerful when one thinks about how functions are built. There are only really three ingredients: numbers, variables, and other functions. There are also just three ways of combining them: adding them together, multiplying them or, given two or more functions, composing them (taking one function of another function).
Objects like x and y are one ingredient — the variables — and now we can see that m and b are another ingredient — the numbers. The notion of a “family” of functions can then be thought of as the collection of functions that can be made given some amount of variables in a particular arrangement, allowing the numbers these variables are added to and multiplied by to vary arbitrarily.
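This reading of m and b as parameters that pick out one member of a family can be made concrete in code; a minimal sketch:

```python
def make_line(m, b):
    """Fixing the parameters m and b selects one member of the family of lines."""
    def line(x):
        return m * x + b
    return line

f = make_line(2, 1)   # the particular line y = 2x + 1
g = make_line(-1, 3)  # a different member of the same family
assert f(0) == 1 and f(5) == 11
assert g(3) == 0
```

Every call to make_line yields a line and only a line; varying the two numbers sweeps out the whole family without ever producing a "squiggly curve."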
The power rule.
d/dx (xⁿ) = n·xⁿ⁻¹
Consider the function f(x)=x². One way to visualize this is to think of f as the area of a square as a function of its side length x. Now consider: can we define a new function that represents the change in f when x is increased by a small, yet nonzero, amount dx? This will also be a function of x; let’s call it df. The precise value of dx doesn’t matter, but it is fixed; think of it like a parameter, like b in the equation for a line.
This new function will be given by df(x) = f(x+dx) − f(x) = (x+dx)² − x². Expanding, we get:
df(x) = 2x·dx + (dx)²
Thinking back to our square, dx is then a little piece of additional length added to both the width and height. That makes df the total additional area added to the square. This area consists of two long, thin components — a vertical strip and a horizontal strip, with width and height dx respectively — as well as a small square in the corner with side length dx. You can see how these correspond to the elements of our equation.
Now consider another, related function of x: the quotient df/dx. We’ll call it a finite difference quotient. It is like the difference function considered above, except the difference is scaled by dx. It gives the slope of the line passing through the points (x, f(x)) and (x+dx, f(x+dx)). For f(x)=x², it equals 2x + dx.
The magic of calculus happens when we consider the consequences of letting dx get infinitely close to zero. As dx shrinks, df/dx = 2x + dx becomes a better and better approximation of the slope of the line tangent to f(x) at the point x, equaling 2x exactly in the limit. Since we can bring dx arbitrarily close to zero, our approximation can become arbitrarily accurate. That is to say, exact.
The power rule comes about because for functions of higher powers of x, say 6, f(x+dx) − f(x) = (x⁶ + 6x⁵dx + … + dx⁶) − x⁶ for finite dx, where the intermediate terms all contain dx² or some higher-order power of dx. The finite difference quotient then leaves 6x⁵ as the only dx-free term, making it the only one to survive when we let dx approach zero.
To think deeply about the power rule is to respect that approximations that can be brought infinitely close to complete accuracy reveal the exact form of relationships, like that of functions to their “rates of change.” It is to recognize that in its widely used form, while the notation df/dx no longer represents a fraction, it does hint at one path of logical thought involving fractions, and eventually limits, that can be used to recover its meaning.
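The limiting argument is easy to check numerically. For f(x)=x² the finite difference quotient is exactly 2x + dx, so shrinking dx drives it toward the derivative 2x; a short sketch:

```python
def diff_quotient(f, x, dx):
    """Slope of the line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
for dx in (0.1, 0.01, 0.001):
    # for this f the quotient equals 2x + dx, so at x = 3 it tends to 6
    print(dx, diff_quotient(f, 3.0, dx))
```

Each tenfold reduction in dx removes another digit of error, which is the numerical face of "arbitrarily accurate, that is to say, exact."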
Forward finite difference: Δf(x) = f(x + Δx) − f(x)
Finite difference quotient: Δf(x)/Δx = (f(x + Δx) − f(x))/Δx
I have already invoked finite differences and difference quotients in tracing a path of reasoning motivating the power rule in derivative calculus. However, finite differences are useful not just as an intermediate logical step, but as computational results with real, practical value. In practice, analytical expressions for derivatives are rarely used in computing, and real-world data is necessarily discrete.
So noticing that expressions in mathematics can live these dual lives, existing both in the service of precisely defined, abstract ideas, and as practically useful tools in their own rights is one result of pondering finite differences in and of themselves. Another is considering that there are different kinds of them: forward, backward, and central to name the most conceptually distinct. What we considered so far are forward differences. Here are the other two:
Backward finite difference: (f(x) − f(x − Δx))/Δx
Central finite difference: (f(x + Δx/2) − f(x − Δx/2))/Δx
Can we have others? Of course! Why not go one quarter of a △x forward and three quarters backward? What’s interesting is that in the infinitesimal limit, all of these objects will converge to the derivative. To me, this reinforces the point I made above about the nature of approximations which can become arbitrarily accurate, but it suggests something else under the lens of computational practicality.
It is not always the case that different forms of finite differences will behave identically on all collections of pairs of purportedly related points — that is to say, on all data. In fact, the error of the central finite difference approaches zero faster than that of either the forward or the backward difference, but it may not be applicable to, say, a signal arriving in real time (since we don’t know the future).
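The difference in convergence rates is easy to observe numerically. A sketch comparing the three schemes on sin(x), whose exact derivative is cos(x) (the test point and step size are arbitrary choices for the example):

```python
import math

def forward(f, x, dx):  return (f(x + dx) - f(x)) / dx
def backward(f, x, dx): return (f(x) - f(x - dx)) / dx
def central(f, x, dx):  return (f(x + dx / 2) - f(x - dx / 2)) / dx

x, dx = 1.0, 1e-3
exact = math.cos(x)  # the true derivative of sin at x
for name, scheme in (("forward", forward), ("backward", backward), ("central", central)):
    error = abs(scheme(math.sin, x, dx) - exact)
    # the central scheme's error is smaller by several orders of magnitude
    print(f"{name:8s} error = {error:.2e}")
```

Note that the central scheme evaluates f on both sides of x, which is exactly why it cannot be used on a real-time signal where future samples are unavailable.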
Some mathematics is intended to, and may also actually do stuff in the “real” world. Calling it impure won’t make this fact go away. For people like data scientists, you would think this would be reinforced day-to-day, but I find that reminding oneself that some ideas exist as more than abstract entities is still fruitful in small, deliberate doses.
I will conclude this article without great fanfare. I think I have already written too much on too little, and apologize for any unwanted eye-rolling I have inspired in my readers. However, I think that what I have is better than if I had written too little on too little, and hopefully is also better than nothing at all. Therefore, I will hit “Publish” and hope for the best.
I welcome all to tell me where I may have said something incorrect, or something that is right for the wrong reasons. Please extend my discussion where it fell short, and suggest alternative points of view. I also like compliments. Until next time.
[Source: “Revisiting the Mathematical Foundations of Data Science” | Ray Heberer | 2018-05-23 | https://medium.com/s/story/simple-equations-worth-thinking-about-again-1ad6e6a9c026]
Look at the data: the initiative requires a culture change, but it also delivers results
Photo by ev on Unsplash
Younger companies have an inherent advantage over established ones: they were born from data! Their businesses are built on metrics and on facts correlated in many ways. Traditional companies (meaning those that have not yet adopted data-driven methodologies) see data a bit differently: as a tool to keep their businesses running, not to steer their strategies. Experience and intuition are prized over data-driven decision making. There is an old-fashioned belief that data is simply a byproduct of applications, not a business asset.
For some time now, companies have recognized the potential of data analytics. Yet although the understanding of the value of data has grown over the last two decades, many companies still struggle to capture, share and manage data. Without a change in culture and attitudes toward data, traditional organizations risk missing the enormous return on investments in architecture, infrastructure and, above all, in a data-oriented mindset.
The idea behind developing a data strategy is to ensure that all resources are positioned so they can be used, shared and moved easily and efficiently. However, a data strategy is incomplete without a data-driven culture to support it, regardless of the intelligence work put into it.
We can define a data-driven culture as an organizational culture that employs a consistent, repeatable approach to tactical and strategic decision making based on facts grounded in data. Simply put, it is when the organization bases its decisions on data, not just on instinct. Intuition has its importance, but the wide availability and variety of analytics has raised decision making to a new standard, and adaptation is necessary. The change can be complex and may even require a transformation in the way people work.
I notice that little is discussed about how companies are actually structuring themselves to use data in order to extract value and support business decisions. There are many courses, conferences, books and articles on how to analyze data. However, there is little on how to begin implanting a data-driven culture in a company that already operates the traditional way. It is necessary to ensure that goals are aligned, establishing methodologies and processes to manage, manipulate and share data continuously. An organization's culture is nothing more than the set of habits and values all of its people share. It starts with people, their roles and responsibilities. Therefore, one proposal is to prepare everyone for the change.
Six steps to better manage data, treating it as an asset
Photo by rawpixel on Unsplash
1. Map how your organization is using data
Who creates data? Who consumes it? What decisions are being made with it? Who is storing it? Who might be misusing it?
Information silos are the main obstacle to developing a data culture. To promote the view that data is a flexible asset that can be used across multiple departments, organizations need to educate their employees on how the data they use affects other parts of the organization. They need to see the company's big picture, not just their own small piece of the work.
2. Focus on the "art of the possible"
Awareness of the flexibility of data is the hallmark of any data culture. It leads to what is called the "art of the possible": the magic of spotting alternative uses for data.
Consider ways your organization could find alternative, unusual uses for the data it creates. Encouraging employees to identify other departments or teams that could benefit from the data is a good way to invest in your organization's data culture.
And, if it is possible, do it. Why not analyze the return it could generate?
3. Be transparent about data
Data becomes an asset only if its accuracy can be guaranteed, its provenance is well established and its security is assured. Establishing a quality and governance framework to manage data assets across the organization can help build a high level of trust in the data itself and in the insights produced from it.
4. Develop "reward for sharing" mechanisms
Sharing the successes achieved with data and celebrating the teams and individuals behind them is essential to fostering a healthy data-driven culture. Celebrated data initiatives should be aligned with the organization's innovation goals. Does your organization want to differentiate itself by understanding its customers in new ways? By entering new markets? Data initiatives should support these efforts, and those that advance them should be rewarded.
5. Identify areas of friction within your organization
Creating a data-driven culture depends on a thorough understanding of how the departments within the company work and where disconnection and contradiction exist. A successful data culture depends on an environment where everyone can share information without being judged negatively. Data generally improves collaboration between different parties by keeping the focus on facts rather than on emotions or guesswork.
6. Emphasize strategy and innovation
A data-driven culture offers many positive benefits, such as greater employee engagement and productivity. However, its real purpose is to refine corporate strategy and drive innovation. Openly discussing strategies and innovation goals gives employees a clearer view of the role of data in the company's overall mission, reinforcing their connection to the organization as a whole.
Traditional business models often fail to make this connection. Employees are reluctant to share data because they do not perceive the value of the data they create, or are unable to connect it with the company's goals. Promoting a deeper understanding of the organization's overall vision can inspire more prolific sharing of information and foster a sense of belonging among members of the organization. Bringing them to events, brainstorming sessions and hackathons can help accelerate innovation strategies and efforts.
Greater transparency on this front also prepares your organization for the culture and workforce of the future! By 2025, millennials will make up roughly 75% of the workforce, and their values will drive a profound transformation in companies' goals. Two-thirds of millennials would rather earn much less in a job they love than earn more in a job they find boring, according to a 2014 survey.
Entering this new world requires this culture change. The transition to a data-driven culture is a challenge that demands dramatic changes from traditional organizations. However, the first steps toward that end can be simpler than one imagines: learning how to have a discussion about the data, listening to what the data may be saying instead of merely wielding it as a weapon in company politics; distributing data throughout the organization, not just by embedding data scientists (although they are essential to the process) but by enabling everyone in the company to access data (to the extent legally possible); and fostering an environment where everyone can learn from it.
Good data science gives the organization the tools to anticipate and stay at the forefront of change. Building a data-driven culture is not easy; it requires persistence and patience. You are more likely to succeed when you start with small projects and grow from small wins than if you draw up a grandiose scheme. Perhaps starting with the first step, mapping what data is being produced and how it is being used, can begin to demonstrate the value of pursuing this strategy in your organization.
Remember: think big, start small!
"If you are solving business problems, creating value and continuously gaining insights to improve, you are probably succeeding. But take the long view: implementing a transformation takes time. Don't force it. You are changing the way people look at data. That doesn't happen overnight." (Inderpal Bhandari, Global Chief Data Officer, IBM)
Sources:
https://towardsdatascience.com/become-data-driven-or-perish-why-your-company-needs-a-data-strategy-and-not-just-more-data-people-aa5d435c2f9
https://www.cognizant.com/InsightsWhitepapers/how-to-create-a-data-culture-codex1408.pdf
https://www2.deloitte.com/content/dam/Deloitte/us/Documents/life-sciences-health-care/us-lshc-health-plan-analytics-becoming-an-insight-driven-organization.pd
[Source: “Olhe para os dados: iniciativa requer mudança de cultura, mas também gera resultados” | Luana da Silva | 2018-05-30 | https://medium.com/s/story/olhe-para-os-dados-iniciativa-requer-mudança-de-cultura-mas-também-gera-resultados-1ad76740b1e6]
Use it just by holding tokens? A free trading bot? Customization? A marketplace? How does it all work?
Gimmer provides a free bot to all of its users. It is an advanced trading bot, pre-configured and ready for you to activate. It offers a choice of three indicators and has a maximum trading amount. By becoming a Gimmer token holder, you can upgrade this bot at any time.
Gimmer token holders
As a Gimmer token holder, you will have full access to the bot of your choice. Each type of bot requires a different number of tokens to be held in your wallet. The bots come pre-configured, with several basic options depending on the trading type. Please see the table below as a guide to how many GMR tokens each type of bot requires.
All bots come pre-configured with basic strategies. For each bot in use, the user's wallet must hold a specific amount of GMR tokens.
Full bot customization
You can customize any bot with new strategies, indicators, safeties, plugins/add-ons, providers, signals and many other add-on items. The Bot Store is the marketplace for bot customization and much more. Each item is priced in GMR tokens, and you can purchase goods crafted by Gimmer's experts, our many partners and the platform's advanced users.
How does Gimmer monetize?
Gimmer makes money by renting out our own items to users through the Bot Store, and by charging a small fee on P2P and partner purchases in the marketplace.
Advantages for users
With this model, you do not need to spend GMR tokens to run your bot. You only need to hold them in your wallet. So if, for whatever reason, you decide to stop using Gimmer bots, you will still own your tokens and can trade them on an exchange. You spend GMR tokens only when buying indicators and add-ons through the marketplace to customize your bots.
How do I get GMR tokens?
With this holding model, users will keep GMR tokens in their wallets in order to run their bots. This will create token scarcity and increase demand for the token. GMR tokens were available for purchase in the Gimmer token sale in February. After that, you will be able to buy them on various exchanges, including EtherDelta, IDEX, Lykke, Binance, Bittrex and many others.
Buy GMR tokens now to get the latest bonus offers.
Join the token sale now
[Source: “(ZH) 持有代币即可使用?免费的交易机器人?定制?市场? — 这一切如何工作?” | Gimmer | 2018-04-11 | https://medium.com/s/story/持有代币即可使用-免费的交易机器人-定制-市场-这一切如何工作-1ad9290b14e0]
Man-machine learning: toward a fully insurable world?
Note: this article was originally published on the Assur.com blog.
What is man-machine learning?
Machine learning is a term used to describe the ability of certain computers to improve on their own. It is notably this technology that allowed Watson, the artificial intelligence developed by IBM, to win the game show Jeopardy!. More generally, machine learning is considered the first step toward a form of artificial intelligence. And for good reason: this technology allows computers to acquire a form of autonomy and to go beyond the algorithms that condition their behavior.
Man-machine learning (MML) thus results from the combination of machine learning and human intelligence. The term is notably used in the PwC report AI in Insurance: Hype or Reality? to conjecture about the influence of artificial intelligence on the insurance sector.
From identifying new risks to a fully insurable world
According to the PwC report cited above, man-machine learning could open the way to identifying new risks. Indeed, combining human creativity with the computing power of these early forms of artificial intelligence (machine learning) would make it possible to spot previously imperceptible trends and, above all, to calculate the probability that those trends will materialize.
The association between human intelligence and artificial intelligence would work as follows: the human tells the computer which event they want to estimate the likelihood of, and the artificial intelligence, with its computing power and its ability to analyze structured and unstructured data, performs the task. It is interesting to note that it is indeed the human's vision and need that initiate this process. Moreover, the process is made possible by connected devices, which supply the data about the reality the MML is meant to analyze. In other words, MML rests on three principles: human intelligence (which tells the artificial intelligence what data to analyze), artificial intelligence (which analyzes the data) and connected devices (which emit the data).
Man-machine learning would therefore serve the insurance sector. Indeed, insofar as an event can be assigned a probability, it can be covered. Under these circumstances, the entire future becomes insurable. As Francesco Corea explains in his article Why AI Will Transform Insurance, "artificial intelligence will lower the threshold above which a risk is considered insurable." And for good reason: artificial intelligence will make it possible to estimate, with ever greater certainty, the probability of just about any event. This is the birth of a fully insurable world.
|
Le Man-machine-learning : vers un monde tout assurable ?
| 0
|
le-man-machine-learning-vers-un-monde-tout-assurable-1adb254b97aa
|
2018-06-17
|
2018-06-17 16:43:45
|
https://medium.com/s/story/le-man-machine-learning-vers-un-monde-tout-assurable-1adb254b97aa
| false
| 446
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Assur.com
|
#MoteurDeRecherche basé sur un #Algorithme qui identifie, sélectionne et indexe les principaux produits d’#assurance du marché français. #Insurtech #AI
|
bc75b798ea36
|
Assur_com
| 57
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-20
|
2017-10-20 23:08:44
|
2017-10-22
|
2017-10-22 00:16:00
| 0
| false
|
en
|
2017-10-22
|
2017-10-22 00:16:00
| 1
|
1ae017b457ee
| 0.584906
| 3
| 0
| 0
|
It was mentioned that pre-processed data set would give us a better accuracy, but what about speed and memory requirement?
| 2
|
A short addition and measurement to previous post
It was mentioned that pre-processed data set would give us a better accuracy, but what about speed and memory requirement?
Here are a few numbers: memory consumption with the stemmed dataset peaked at 32G, while with the raw dataset it went up to 40G. It also took less time to run tf-idf and to train Naive Bayes: 46 vs 53, and 98 vs 108.
A quick and rough comparison gives us around a 10 percent improvement in both memory consumption and the time it takes to get the model. With a bigger dataset those numbers can change drastically, and the savings can easily grow into hours or even days.
Human time is among the most expensive resources nowadays, so why wait for a longer and less accurate pipeline to finish, when spending a little effort on improving the initial data quality can gain you a lot?
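As a minimal illustration of why stemming drives the savings above, here is a pure-Python sketch. The tiny corpus and the deliberately crude suffix-stripping stemmer are both invented for illustration (a real pipeline would use a Porter or Snowball stemmer): collapsing several surface forms into one stem shrinks the tf-idf vocabulary, and fewer vocabulary terms means fewer columns in the term-document matrix and less memory.

```python
def crude_stem(word):
    """Toy suffix-stripping stemmer, for illustration only."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

docs = [
    "trained models train faster on streams of tokens",
    "training a model trains it on tokens",
]

# Vocabulary size before and after stemming: "trained", "training",
# "train" and "trains" all collapse into the single stem "train".
raw_vocab = {w for doc in docs for w in doc.split()}
stemmed_vocab = {crude_stem(w) for doc in docs for w in doc.split()}

print(len(raw_vocab), len(stemmed_vocab))  # the stemmed vocabulary is smaller
```

On real corpora the relative shrinkage is smaller than in this toy, but the mechanism behind the memory numbers above is the same.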
|
A short addition and measurement to previous post
| 3
|
a-short-addition-and-measurement-to-previous-post-1ae017b457ee
|
2017-10-22
|
2017-10-22 10:18:30
|
https://medium.com/s/story/a-short-addition-and-measurement-to-previous-post-1ae017b457ee
| false
| 155
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Rafael Zubairov
|
JVM, Python and Go lang; HighLoad, DataScience and Machine Learning. Thus expect a mix of articles on that areas, and of course sometimes on the :)
|
be6b8a174695
|
zubra.bubra
| 0
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
b921fb8e7a53
|
2018-03-03
|
2018-03-03 05:07:04
|
2018-03-03
|
2018-03-03 05:12:11
| 1
| false
|
en
|
2018-04-02
|
2018-04-02 21:54:10
| 0
|
1ae1c9cde481
| 1.358491
| 0
| 0
| 0
|
“The only way to train the machine to learn a pick and roll is through more and more data. It all comes down to data. With humans help, the…
| 1
|
Moving Dots
“The only way to train the machine to learn a pick and roll is through more and more data. It all comes down to data. With humans’ help, the machine can begin to recognize that yes, that is a pick and roll. Or no, that was a slam dunk. But through its machine learning algorithms, the machine can figure out with great accuracy whether a pick and roll occurred on a particular play.
This raises the question: can machines know more than a coach? Maheswaran says yes, and it makes sense.
Using spatiotemporal features, we can break down almost every aspect of every movement and create statistical models using machine learning. A large number of statistical models, with or without computer vision, use machine learning to produce results. He says the average NBA player makes a shot 49% of the time. But with our new information, we can break shots down into two categories: the quality of the shot and the quality of the shooter. Each of these categories can be further broken down into categories that help identify each quality. Therefore you can have a good shooter who takes bad shots, and a bad shooter who takes good shots. This helps with not only player development but also talent evaluation.”
By reducing players to moving dots, we can more accurately gather useful information for evaluating players. From Rajiv Maheswaran’s TED Talk, I learned that we could potentially (if not probably) create machines that can coach more effectively than a human coach. With the vast amount of data that computer vision can gather, the possibilities are endless. In-game adjustments could easily be determined by algorithms that are constantly analyzing every piece of data generated throughout the course of a game. This was potentially the most interesting takeaway that I got from my book.
|
Moving Dots
| 0
|
blog-7-1ae1c9cde481
|
2018-04-02
|
2018-04-02 21:54:12
|
https://medium.com/s/story/blog-7-1ae1c9cde481
| false
| 307
|
Preview of my book
| null | null | null |
Plugged In
|
gjw13@georgetown.edu
|
plugged-in
| null |
greg_wills1
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Gregory Wills
|
I am a student at Georgetown University and the author of Plugged In: AI in an Evolving Sports World.
|
351220fd86c3
|
gjw13
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-16
|
2018-09-16 16:07:34
|
2018-09-18
|
2018-09-18 17:38:45
| 16
| false
|
en
|
2018-09-27
|
2018-09-27 21:25:20
| 3
|
1ae44cf53cc1
| 4.399057
| 0
| 0
| 0
|
MDP gives the mathematical formulation of Reinforcement Learning Problem
| 4
|
Markov Decision Process(MDP) Simplified
MDP gives the mathematical formulation of Reinforcement Learning Problem
A Markov Decision Process (MDP) is an environment with Markov states; Markov states satisfy the Markov property: the state contains all the relevant information from the past needed to predict the future. Mathematically,
pic taken from David Silver lecture slide
So if I say that the state S_t is Markov, that means it carries all the important information about the environment from previous states (which means you can throw away all the previous states). Think of it this way: once you have your boarding pass, you no longer need your ticket to board the plane; the boarding pass already contains all the necessary boarding information.
MDP is formally defined as:
MDP tuple
Let’s take an example to develop intuition about MDP.
Student MDP Example from David Silver lecture slide
Suppose that you are a student and the figure above portrays one of your days at school. The circles and the square represent the states you can be in, and the words in red are the actions you can take depending on the state you are in; for example, in the state Class 1 you can choose whether to study or check your Facebook, and depending on the action you take, a numerical reward is given. There is also an action node (the black dot in the figure) from which you can end up in different states depending on the transition probability; for example, after you decide to go to the Pub from Class 3, you have a 0.2 probability of ending up back in Class 1. This node represents the randomness of the environment, over which you have no control. In all other cases the transition probability is 1, and if the discount factor is 1 then the MDP can be defined as:
MDP Example
Now that we have the MDP, we need to solve it to find the best path that maximizes the sum of rewards, which is the goal of solving reinforcement learning problems. Formally, we need to find an optimal policy that maximizes the overall reward the agent can get.
To solve MDP, we first have to know about the policy and value function.
In simple terms, policy tells you which actions to take. It is defined as:
Policy definition taken from David Silver lecture slide
For MDPs, the policy depends only on the current state.
The value function can be defined in two ways: the state-value function and the action-value function. The state-value function tells you “how good” the state you are in is, whereas the action-value function tells you “how good” it is to take a particular action in a particular state. The “how good” of a state (or state-action pair) is defined in terms of expected future rewards.
The state-value function is defined as:
state-value function definition taken from David Silver lecture slide
Similarly, the action-value function is defined as:
action-value function taken from David Silver lecture slide
If we take the maximum of the value function over all policies, we get the optimal value function. Once we know the optimal value function, we can solve MDP to find the best policy.
The value functions that we defined above satisfy the Bellman equation; it states: “the value of the start state must equal the (discounted) value of the expected next state, plus the reward expected along the way.”
For example, if we take the path from Class 1 to Class 2 then we can write the Bellman equation in the following way:
Bellman equation for value function
The Bellman optimality equation can be written in similar ways:
Bellman optimality equation for Value function
These concepts can be easily extended to multiple paths, with different actions leading to different states. In this case, the Bellman optimality equation is:
Optimal state-value function
Using above equation, we can find the optimal value function for each state in our student MDP example.
Optimal state-value function
The optimal action-value can be expressed in similar fashion as:
Optimal action-value function
This equation gives the following result in our student MDP example.
Optimal action-value function
Once we have action-value function, we can find the optimal policy by taking their maximum. Formally, it would be:
Optimal policy
The optimal policy, which will maximize the reward for our Student is shown by the red arcs in the figure below.
Optimal-policy
Summary:
An MDP represents the reinforcement learning problem mathematically, and the goal of solving an MDP is to find an optimal policy that maximizes the sum of expected rewards. Finding an optimal policy becomes easier once we have the action-value function, and the intuition behind the Bellman equation simplifies the process of finding the action-value function.
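The Bellman optimality backup described above can be turned into a short value iteration. The transition table below is my transcription of the student MDP figure (rewards and probabilities as I read them from David Silver's slide), with the discount factor gamma = 1 and Sleep as the terminal state:

```python
# Each state maps to {action: (reward, [(probability, next_state), ...])}.
MDP = {
    "Class1":   {"Study":    (-2, [(1.0, "Class2")]),
                 "Facebook": (-1, [(1.0, "Facebook")])},
    "Facebook": {"Facebook": (-1, [(1.0, "Facebook")]),
                 "Quit":     ( 0, [(1.0, "Class1")])},
    "Class2":   {"Study":    (-2, [(1.0, "Class3")]),
                 "Sleep":    ( 0, [(1.0, "Sleep")])},
    "Class3":   {"Study":    (10, [(1.0, "Sleep")]),
                 "Pub":      ( 1, [(0.2, "Class1"),
                                   (0.4, "Class2"),
                                   (0.4, "Class3")])},
    "Sleep":    {},  # terminal state: no actions, value 0
}

def value_iteration(mdp, gamma=1.0, tol=1e-9):
    """Repeatedly apply the Bellman optimality backup until values settle."""
    v = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            if not actions:
                continue  # terminal state keeps value 0
            best = max(r + gamma * sum(p * v[s2] for p, s2 in nxt)
                       for r, nxt in actions.values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

def greedy_policy(mdp, v, gamma=1.0):
    """Optimal policy: in each state, pick the action with the best q-value."""
    return {s: max(acts, key=lambda a: acts[a][0] +
                   gamma * sum(p * v[s2] for p, s2 in acts[a][1]))
            for s, acts in mdp.items() if acts}

v = value_iteration(MDP)
print(v)                      # Class1: 6, Class2: 8, Class3: 10, Facebook: 6
print(greedy_policy(MDP, v))  # Study in every class, Quit Facebook
```

Note that the recovered values match the optimal state-value figures above (6, 8, 10), and the greedy policy reproduces the red arcs: always study, and quit Facebook.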
References:
An Introduction to Reinforcement Learning, Sutton and Barto
David Silver Course on Reinforcement Learning
PS: I wrote this post based on my understanding of Reinforcement Learning. Any suggestion/improvement about the content and/or style of writing will be appreciated.
|
Markov Decision Process(MDP) Simplified
| 0
|
markov-decision-process-mdp-simplified-1ae44cf53cc1
|
2018-09-27
|
2018-09-27 21:25:20
|
https://medium.com/s/story/markov-decision-process-mdp-simplified-1ae44cf53cc1
| false
| 755
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Bibek Chaudhary
|
self-learner
|
84a0db77e00e
|
bibekchaudhary
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-03
|
2017-11-03 21:24:01
|
2017-11-03
|
2017-11-03 21:56:13
| 1
| false
|
en
|
2017-11-04
|
2017-11-04 13:51:32
| 13
|
1ae45f792e32
| 2.69434
| 0
| 0
| 0
|
What an exciting week it’s been for the cybersecurity and tech industries! First, let me say that I was thrilled to speak at the…
| 5
|
Facebook, Manafort’s 007, and Other Cyber News Around the World This Week
What an exciting week it’s been for the cybersecurity and tech industries! First, let me say that I was thrilled to speak at the SecureWorld Denver conference on Thursday November 2nd.
Despite being the last speaker on the last day of the conference, I had a hearty audience of industry folks interested in listening to me talk about managing hiring expectations in cyber.
One key takeaway — we all need to work on rebranding the industry to make it appealing to young people.
But there was more going on this week than my travels to Denver. No, there is not a new James Bond movie soon to be released; it's just the highly insecure password of the former Trump campaign manager, Paul Manafort.
Then there was the announcement from Mark Zuckerberg who swore to take a financial hit in the name of security. A round of applause for the billionaire willing to take one for the team and invest more money into securing the Facebook platform.
I think we are supposed to feel bad about this financial loss as Zuckerberg has made it quite clear that he’s doing the public a solid.
“We’re investing so much in security that it will impact our profitability. Protecting our community is more important than maximizing our profits,” Zuckerberg said.
Other news of the not so famous but still important people in cybersecurity also happened this week. Here’s a look back at the frightening and not so scary events of this Halloween week.
November 3, 2017
Though end users have long been blamed as the cause of most enterprise security incidents and breaches, security failures in third parties are creeping ahead.
Who knew that geography actually has an impact on how companies budget for cybersecurity? According to a new AT&T report, companies based in Asia are spending more on security than those in the US.
The good news, no matter where you are in the world, careers in security will continue to open up as cyber threats continue to evolve.
November 2, 2017
Automation to the rescue. Given the ever-changing threat landscape and the increased (and overwhelming) number of alerts to sift through with little manpower, industry experts pose the question of whether automation is the best solution.
I wasn’t the only one to visit Colorado this week. The inaugural Cyber Security Symposium welcomed Former CIA director David Petraeus to Colorado Springs on November 1st. Impressed with the city’s efforts to combat hacking, Petraeus extended kudos and declared there is no better time to be in cybersecurity.
November 1, 2017
Saudi Arabia has established a National Authority for Cyber Security, appointing minister of state Musaed al-Aiban as its chairman.
Recognizing that there is no silver bullet, the cybersecurity industry is trying to attack the problem of growing threats with all kinds of innovative technology, which begs the question of whether security can ever be improved without shifting the focus to human behavior.
October 31, 2017
After Equifax joined the ranks of major enterprises that have been breached, companies are looking for ways to amp up their cybersecurity. Blockchain technology is one solution into which many are investing both money and hope.
A friendly word of advice came from Nick Coleman, global head, cyber security intelligence at IBM: if you don’t embrace artificial intelligence and automation, you won’t matter much to the industry in as few as three years.
October 30, 2017
Whether it's spreading FUD or a call to action that will effect change, the director of the National Counterintelligence and Security Center, William R. Evanina, expressed his deep concern for the cyber safety of every American citizen, government agency, and corporation.
One of the staffing challenges in facing the looming skills gap in cybersecurity is a lack of resources to increase staff. AI is quickly becoming a more viable option for SMBs who are looking to improve their overall security posture.
|
Facebook, Manafort’s 007, and Other Cyber News Around the World This Week
| 0
|
facebook-manaforts-007-and-other-cyber-news-around-the-world-this-week-1ae45f792e32
|
2018-02-20
|
2018-02-20 15:04:07
|
https://medium.com/s/story/facebook-manaforts-007-and-other-cyber-news-around-the-world-this-week-1ae45f792e32
| false
| 661
| null | null | null | null | null | null | null | null | null |
Automation
|
automation
|
Automation
| 9,007
|
Kacy Zurkus
|
InfoSec and cybersecurity freelance writer, ghostwriter, and public speaker covering a variety of topics on security and risk for several industry publications
|
1d141aafa844
|
KSZ714
| 46
| 111
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
69c20edac0bc
|
2018-05-30
|
2018-05-30 11:28:35
|
2018-05-30
|
2018-05-30 11:39:48
| 1
| false
|
en
|
2018-06-01
|
2018-06-01 08:54:33
| 2
|
1ae6310d9de6
| 0.74717
| 1
| 0
| 0
|
With everyone joining the data science bandwagon, won’t it be useful to know — what are the companies asking more about? What’s the hiring…
| 4
|
Survey: What’s hot in Data Science Interviews in 2018?
With everyone joining the data science bandwagon, won’t it be useful to know — what are the companies asking more about? What’s the hiring process like?
So we floated a survey recently, and hundreds of responses later, it seems the results are going to be surprising. For instance, around 40% of respondents found that companies did not have enough clarity on the job role AND the hiring process.
Another example is language preference. Responses so far suggest that companies clearly prefer Python over anything else out there:
Interviewed for a Data Science role in 2018?
Take the survey now to get the full survey report in PDF and help the overall Data Science community understand the real picture. And yeah, 5 genuine and lucky entries will also be sent Amazon cards worth INR 250/ by CutShort!
|
Survey: What’s hot in Data Science Interviews in 2018?
| 1
|
survey-is-now-open-whats-hot-in-data-science-interviews-in-2018-1ae6310d9de6
|
2018-06-01
|
2018-06-01 08:54:34
|
https://medium.com/s/story/survey-is-now-open-whats-hot-in-data-science-interviews-in-2018-1ae6310d9de6
| false
| 145
|
Get clear insights for a successful "data career". Curated by CutShort, the fastest growing career platform in India
| null | null | null |
Practical Data Science Career Insights
|
datacareers@cutshort.io
|
data-science-career-insights
|
DATA SCIENCE,DATA VISUALIZATION,DATA ANALYTICS,DATA ANALYSIS
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Nikunj Verma
|
Cofounder at CutShort (www.cutshort.io)
|
2be9fbfef4dc
|
nikunjverma
| 58
| 105
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
55fe2b96daeb
|
2018-05-23
|
2018-05-23 11:09:12
|
2018-05-23
|
2018-05-23 11:15:22
| 1
| false
|
en
|
2018-05-23
|
2018-05-23 11:31:16
| 1
|
1ae7c2cc77c8
| 1.550943
| 3
| 0
| 0
|
The Wolfson-HAT Digital Person Symposium is coming next week
| 5
|
Humanities, economics, and computer sciences
The Wolfson-HAT Digital Person Symposium is coming next week
It sounds a bit like the opening to a bad joke: a social scientist, an economist, and a computer scientist walk into a bar. But it ain't one. Wolfson College has a trio of brilliant fellows, all of them participants in the Big HAT Community, and next Thursday it is very proud to be bringing them all together for a multi-disciplinary exploration.
Second Annual Digital Person Symposium
31 May, 9:30am — 5:30pm
Lee Hall, Wolfson College Cambridge
RSVP
The event is unique, in many ways, for being a three-perspective take on this highly relevant issue, and the College is proud to be bringing together Professors Irene Ng, John Naughton, and Jon Crowcroft for their disparate perspectives. Aside from the robust discussion, the symposium will also produce an annual white paper on the state of the digital person in a connected and digital society. Symposium participants are to be drawn from industry captains, policy makers, and government representatives, together with thought leaders from the sciences, humanities and social sciences, with discussions relating to law, computer science, history, sociology, entrepreneurship, business, economics and the global society.
Irene has spoken about the Digital Person in the context of the HAT many times, and some will recall last year’s Symposium’s explorations of the digital self in personal, economic, ethical, and computing contexts. But for those who haven’t yet heard a multi-perspective take on the issues facing online identity today, this promises to be an interesting day of deeper conversations.
Tickets are free and guests are warmly invited to attend. The topic of “Personal Data” will be divided into segments taking place between 09:30 and 11:00 (Digital Personhood, Freedom and Democracy), 11:15 and 13:15 (Value, Economics, and Markets), and 14:15 to 16:00 (Analytics, Data Science, and Technology). RSVP online at Eventbrite.
—
The HAT Community Foundation (HCF) is a non-profit organisation promoting the use of private data accounts by individuals, startups, corporations, universities and government. We aim to spawn a new generation of Internet applications that sit on private data accounts, empowering individuals with their own data.
|
Humanities, economics, and computer sciences
| 26
|
humanities-economics-and-computer-sciences-1ae7c2cc77c8
|
2018-05-26
|
2018-05-26 09:45:11
|
https://medium.com/s/story/humanities-economics-and-computer-sciences-1ae7c2cc77c8
| false
| 358
|
A technology company bringing about the future of personal data. Change the Internet.
| null |
hubofallthings
| null |
Hub of All Things
|
contact@hatdex.org
|
hub-of-all-things
|
PERSONAL DATA,ECONOMICS,PRIVACY,DECENTRALIZATION,DATA
|
hubofallthings
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Jonathan Holtby
|
Community Manager at HATLAB, HATDeX and the Hub of All Things.
|
33cbfe0a103c
|
jonathanholtby
| 121
| 130
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-24
|
2018-06-24 09:19:09
|
2018-06-24
|
2018-06-24 09:46:02
| 2
| false
|
en
|
2018-06-24
|
2018-06-24 10:15:31
| 0
|
1ae8b568edb8
| 1.390881
| 1
| 0
| 0
|
Decision tree is a form of Supervised Learning in which you give it some sample data and the resulting classifications and out comes a…
| 5
|
DECISION TREE AND RANDOM FOREST (The Easy Way)
A decision tree is a form of supervised learning: you give it some sample data and the resulting classifications, and out comes a tree!
It gives you a flowchart to help you decide a classification for something with machine learning.
So, for example, here the dependent variable is Weather, and based on it I decide whether to go out and play or not:
As you can see, a DT can look at different attributes of the weather (like humidity, temperature, rain, etc.) and decide what the thresholds are before it arrives at a certain decision.
Random Forests:
One problem with DTs is that they are very susceptible to overfitting (a tree might work beautifully on the data it was trained on but misclassify new samples it hasn't seen before, because we might not have given it enough representative examples to learn from). To fight this, we can construct several alternative DTs and let them "vote" on the final classification: this is called a Random Forest.
Each DT takes a random subsample of our training data and constructs a tree from it, and each resulting tree can vote for the right result. This combats overfitting and is also known as bootstrap aggregating, or bagging.
So basically, in a Random Forest we have multiple trees (a forest of trees), each trained on a random subsample of the data we have, and each tree votes on the final result, which helps us combat overfitting.
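The bagging-and-voting idea can be sketched in a few lines of plain Python. The weather data and the depth-1 "trees" (decision stumps on a single humidity threshold) below are toy inventions for illustration, not a real implementation like scikit-learn's:

```python
import random
from collections import Counter

# Toy "go out and play?" data: (humidity, temperature) -> 1 = play, 0 = don't.
DATA = [((30, 25), 1), ((35, 22), 1), ((40, 20), 1), ((45, 24), 1),
        ((75, 33), 0), ((80, 30), 0), ((85, 28), 0), ((90, 35), 0)]

def train_stump(sample):
    """A depth-1 decision tree: pick the humidity threshold that best
    separates "play" from "don't play" on this bootstrap sample."""
    best_thr, best_acc = 0.0, -1.0
    for (humidity, _), _ in sample:  # candidate thresholds from the sample
        acc = sum((1 if h <= humidity else 0) == label
                  for (h, _t), label in sample) / len(sample)
        if acc > best_acc:
            best_thr, best_acc = humidity, acc
    return best_thr

def train_forest(data, n_trees=25, seed=0):
    rng = random.Random(seed)
    # Bagging: each tree sees its own bootstrap sample (drawn with replacement).
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, humidity):
    votes = Counter(1 if humidity <= thr else 0 for thr in forest)
    return votes.most_common(1)[0][0]  # majority vote across the trees

forest = train_forest(DATA)
print(predict(forest, 35))  # low humidity: the forest votes "play" (1)
print(predict(forest, 88))  # high humidity: the forest votes "don't play" (0)
```

Any single stump can be skewed by its bootstrap sample, but the majority vote smooths those individual errors out, which is exactly the overfitting protection described above.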
|
DECISION TREE AND RANDOM FOREST (The Easy Way)
| 1
|
decision-tree-and-random-forest-the-easy-way-1ae8b568edb8
|
2018-06-24
|
2018-06-24 10:15:31
|
https://medium.com/s/story/decision-tree-and-random-forest-the-easy-way-1ae8b568edb8
| false
| 267
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Sammy
| null |
af012ff9befa
|
shivam.somani09
| 7
| 42
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-30
|
2018-06-30 17:45:46
|
2018-06-30
|
2018-06-30 17:46:39
| 0
| false
|
en
|
2018-06-30
|
2018-06-30 17:46:39
| 1
|
1ae99e1c1454
| 0.177358
| 0
| 0
| 0
| null | 3
|
Artificial Intelligence Will Change the Way We Handle Money
Artificial Intelligence Will Change the Way We Handle Money - Crypto Disrupt
Among all the topics and exhibitions being discussed at the Blockchain Summit London, one subject matter stood out:…cryptodisrupt.com
https://cryptodisrupt.com/artificial-intelligence-will-change-the-way-we-handle-money/
|
Artificial Intelligence Will Change the Way We Handle Money
| 0
|
artificial-intelligence-will-change-the-way-we-handle-money-1ae99e1c1454
|
2018-06-30
|
2018-06-30 17:46:39
|
https://medium.com/s/story/artificial-intelligence-will-change-the-way-we-handle-money-1ae99e1c1454
| false
| 47
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Daniel Jung
|
Education Architect (>140.000.000 Views on YouTube: Mathe by Daniel Jung🎬) | Coach New Learning | Keynote Speaker | Entrepreneur
|
bf465d3bb244
|
DanielJung
| 61
| 16
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
cc02b7244ed9
|
2017-12-12
|
2017-12-12 07:00:11
|
2017-12-12
|
2017-12-12 07:02:37
| 0
| false
|
en
|
2017-12-12
|
2017-12-12 07:02:37
| 11
|
1aebeb82ff8
| 2.079245
| 0
| 0
| 0
|
PRODUCTS & SERVICES
| 5
|
Tech & Telecom news — Dec 12, 2017
PRODUCTS & SERVICES
Video
As Netflix subscribers keep growing and the company expands outside the US (which now accounts for less than half of the total user base), usage per account seems to be going down. They just announced that current consumption is approx. 1.28h / day per subscriber, vs. previous numbers for 2016 of 1.75h / day (Story)
Verizon just agreed to spend $1.5bn on the rights to stream NFL games over the next 5 years. They will focus on monetisation through advertising, at their Yahoo and Go90 platforms and across mobile carriers, a radical change vs. the current approach, based on mobile exclusivity (as a carrier) aimed at reducing churn (Story)
HARDWARE ENABLERS
Networks
Even if fancy new apps like connected cars tend to capture media attention, analysts believe “mainstream” use cases (Fixed wireless, enhanced MBB, mobile video streaming) will be key to drive operators’ investments in 5G. Reducing cost per GB (e.g. through mm wave spectrum or massive MIMO) seems critical for success (Story)
SoftBank seems committed to the vision that satellite broadband will be key to extend the internet to the “remaining unconnected”. They just announced a new investment of $500m in satellite provider OneWeb, on top of a previous $1bn one, to support a project to offer speeds “as fast as typical fiber optic networks” (Story)
SOFTWARE ENABLERS
Artificial Intelligence
Some VC funds have started to use AI, combined with data openly available at the internet, to identify investment opportunities. InReach Ventures, a UK-based VC firm, has created a software for this that monitors 95K European startups, based on people they hire, products they develop and traffic on their websites (Story)
A team of AI researchers from MIT Media Lab are working to create machine learning systems able to predict emotional responses to video content, a potentially useful tool for movie & TV studios to create more effective video content. Initial (but rather unsurprising) conclusions are that audiences do like happy endings ;-) (Story)
“Process mining”, a new big data analytics framework, is helping companies (like Vodafone) to analyse internal processes to identify optimisation opportunities. Software tools based on this aggregate data from companies’ interactions with customers (at websites, emails or phone calls) to detect patterns and potential gaps (Story)
Quantum Computing
After previous releases by rivals Google and IBM, Microsoft is also launching a set of simulation tools for developers to test quantum computing software online, including its Azure cloud platform. Microsoft claims to have a different approach to quantum computing, more suitable to commercial use vs. competitors’ systems (Story)
M&A
It seems increasingly probable that a deal for Disney to acquire 21st Century Fox assets will be announced this week, and yesterday Comcast (another candidate to acquire some of them, and the Sky UK one in particular) said they’re no longer in the race for the acquisition, as they didn’t get “the level of engagement needed” (Story)
Apple finally confirmed the acquisition of the Shazam music recognition app, a move to reinforce the skills of the coming HomePod smart speaker, and the Siri virtual assistant. Shazam could also be useful to send traffic to the Apple Music app (and there are rumours that Spotify was also interested in the acquisition) (Story)
Subscribe at https://www.getrevue.co/profile/winwood66
|
Tech & Telecom news — Dec 12, 2017
| 0
|
tech-telecom-news-dec-12-2017-1aebeb82ff8
|
2017-12-12
|
2017-12-12 07:02:39
|
https://medium.com/s/story/tech-telecom-news-dec-12-2017-1aebeb82ff8
| false
| 551
|
The most interesting news in technology and telecoms, every day
| null | null | null |
Tech / Telecom News
|
ripkirby65@gmail.com
|
tech-telecom-news
|
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
|
winwood66
|
Netflix
|
netflix
|
Netflix
| 14,249
|
C Gavilanes
|
food, football and tech / ripkirby65@gmail.com
|
a1bb7d576c0f
|
winwood66
| 605
| 92
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-02
|
2018-04-02 11:13:35
|
2018-04-04
|
2018-04-04 12:41:24
| 2
| false
|
en
|
2018-04-04
|
2018-04-04 12:41:24
| 6
|
1aec2e82ca02
| 1.696541
| 2
| 0
| 0
|
This week we want to introduce you to the team, that stands behind this project.
We are located in Estonia and South Korea, that are…
| 5
|
Meet Our Team
This week we want to introduce you to the team that stands behind this project.
We are located in Estonia and South Korea, which are considered joint global leaders in digital empowerment.
All in all, there are 15 of us, and our team is constantly growing. For now, please meet our key employees.
First of all, meet our Founder and CEO!
Edward Kwon
Edward Kwon
Having built up his career at leading companies including Samsung, Edward became an entrepreneur in the blockchain industry. He is also an advisor to the Korean Blockchain Technology Association.
He decided to start TAITOSS because he saw an opportunity for the travel industry in blockchain and AI.
Our Advisor
David Yang
With two decades in the IT industry, he is one of Korea's IT gurus. He has a deep interest in games, digital entertainment and, of course, blockchain.
Our CTO
Jeong Hwan Kim
Jeong Hwan Kim | LinkedIn
20 years ago, he changed his career from diplomat to computer programmer, an occupation he had always dreamed of. In recent years, he has researched artificial intelligence and now he is fully working on blockchain.
By the way, today he will give a speech at Blockchain & Bitcoin Conference in Berlin. Later, we will post an update about it. Stay tuned!
Next, Our R&D team:
Taekyoung Lee
He is a C++ expert who is interested in blockchain. He is researching new blockchain algorithm and cryptography to build a new blockchain system.
Jack Eum
He is our Java expert. He is in charge of web and mobile applications. He is also interested in javascript and web front-end technologies.
And finally, our CMO:
Pauly Lee
He is concentrating on international marketing. He is the main communication channel for TAITOSS. Living in Estonia, he makes connections and expands our network.
If you have any questions, please feel free to join our Telegram group chat at https://t.me/taitoss_en or comment below.
For more information please follow our Medium blog or visit our website. We also have a Telegram news channel where we will post updates about our project.
Thank you for visiting!
|
Meet Our Team
| 2
|
meet-our-team-1aec2e82ca02
|
2018-04-09
|
2018-04-09 13:02:59
|
https://medium.com/s/story/meet-our-team-1aec2e82ca02
| false
| 348
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
TAITOSS
|
Starting from personalized travel advice up to payments, TAITOSS becomes one-stop solution for all travelers by implementing Blockchain and AI technology!
|
5fe6d5286186
|
taitoss
| 12
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
e2347a32cd3a
|
2018-04-07
|
2018-04-07 10:09:46
|
2018-04-07
|
2018-04-07 10:10:50
| 1
| false
|
en
|
2018-04-29
|
2018-04-29 10:15:17
| 0
|
1aed3bbb25d5
| 3.660377
| 10
| 0
| 0
|
Increasingly more new startups are employing Machine Learning in their daily activities. It’s potentially one of the biggest leaps in…
| 5
|
The First Erotic Service Platform to Use Complex Machine Learning Framework
More and more new startups are employing Machine Learning in their daily activities. It is potentially one of the biggest leaps in technology, offering companies exciting possibilities to create unforgettable user experiences and adapt them to their business models.
What is Machine Learning (ML)?
Machine Learning is a form of artificial intelligence that enables online systems to automatically learn and improve their behavior. By learning from experience (without being programmed in advance), ML allows businesses to serve their clients effectively. It is a trending approach that is digitally transforming the way service businesses operate. It is no longer the stuff of science-fiction movies, but a way to use customer data to make business processes more cost- and time-efficient, effective and reliable.
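To make "learning from experience without being programmed in advance" concrete, here is a toy sketch of a system whose classification rule improves as it sees more labeled examples. The function name, data and parameters are invented purely for illustration:

```python
# A toy illustration of learning from experience: a perceptron that
# adjusts its weights whenever it misclassifies an example, so its
# rule improves with exposure to data rather than hand-coding.
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # 0 when the prediction is correct
            w1 += lr * err * x1         # nudge weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Linearly separable toy data: label is 1 roughly when x1 + x2 > 1.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1), ((2, 1), 1)]
model = train_perceptron(data)
print(model(2, 2), model(0, 0))  # the learned rule generalizes: 1 0
```

The same feedback principle, at vastly larger scale, is what lets production ML systems refine themselves as customer data accumulates.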
Many service industries deal with changing customer habits, versatile user experiences and numerous different expectations. ML technology is constantly analyzing the business and evolving, in order to be able to target the customers even more effectively. Machine Learning can highly influence the success of any service, by making the content more attractive, more useful and more effective for the target audience.
Machine Learning is already widely used in retail, healthcare, finance and other sectors, and we plan to make heavy use of it in the ExtraLovers platform as well.
Complex Machine Learning Mechanism of ExtraLovers
Machine Learning has not yet been widely used in the adult service industry, at least not to the extent it could be applied in order to achieve the maximum mutual benefit of service providers and clients. ExtraLovers is introducing a complex Machine Learning based platform, ensuring a pleasant client experience.
The ExtraLovers erotic service platform has integrated Machine Learning to make the user experience as effortless and quick as possible. We are proud to state that ExtraLovers is the first online erotic service platform to employ a complex Machine Learning mechanism. These algorithms are designed to improve efficiency, save time and optimize resources. They also help create systems and businesses that intuitively offer clients exactly what they are looking for.
Once the machine learning technology is in place, ExtraLovers will maintain a specialized database of our clients' needs. The algorithm will constantly analyze changing patterns of client behavior and learn from that behavior in order to provide clients with the results that best match their needs.
How Does Machine Learning Benefit the Client?
In his/her registration form, a client will have to provide the basic information about the expected service type and the main criteria expected from the service provider. After the successful registration, a client will have to choose the expected time and location of the service. According to this choice, within a couple of minutes, he/she will be able to browse through a list of service providers, presented according to the expected criteria, such as popularity, looks, service list, prices or reviews.
The system will automatically present the best-matching service providers, listed according to the data provided in the client's registration form. It is also worth mentioning that the ExtraLovers service algorithm will continuously learn from the client's behaviour and the service ratings he or she provides. Each time, the algorithm will be able to select service providers that match the client's expectations even better than the time before.
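As a purely hypothetical sketch of the feedback loop described above (the function, weighting and data are our illustration, not the actual ExtraLovers algorithm), providers can be ranked by overlap with the client's stated criteria, blended with the ratings that client has given in the past:

```python
# Hypothetical sketch of a rating-driven ranking loop: score each provider
# by how well its tags match the client's stated criteria, then blend in
# the client's own past ratings so results improve with each interaction.
def rank_providers(providers, client_criteria, past_ratings, alpha=0.7):
    """providers: {name: set_of_tags}; past_ratings: {name: 0..5}."""
    def score(name, tags):
        match = len(tags & client_criteria) / max(len(client_criteria), 1)
        rating = past_ratings.get(name, 2.5) / 5.0  # neutral prior for unrated
        return alpha * match + (1 - alpha) * rating
    return sorted(providers, key=lambda n: score(n, providers[n]), reverse=True)

providers = {"A": {"massage", "city-centre"}, "B": {"city-centre"}}
print(rank_providers(providers, {"massage", "city-centre"}, {"B": 5.0}))
# "A" matches more criteria and ranks first despite B's high rating
```

With each new rating the blend shifts, so repeat clients see lists increasingly shaped by their own history.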
What are the Benefits for the Service Provider?
ExtraLovers provides a quick, easy and safe platform to advertise erotic services and reach target clients. What is more, our Machine Learning Framework analyzes the supply and demand of the market based on the region, the supply of other service providers and their criteria, as well as seasonal factors. When a service provider decides to acquire advertising services on our platform for a particular period of time, the algorithm will evaluate the current market supply and demand. Based on the criteria given by the service provider, as well as existing client needs, the algorithm will provide recommendations for organizing the service. As market needs are constantly changing, the algorithm will react to these changes and learn while evaluating orders, ratings, occupancy levels, etc. Moreover, by incorporating Machine Learning into our platform's framework, we make sure that service providers are listed efficiently for clients to choose from, and the algorithm will allow us to identify complex consumer behavior.
Service providers will benefit from our specialized algorithm, as it will analyze the supply of services and provide recommendations for organizing their business. For example, the system will provide tips on how to organize services in a particular region or season. It will also advise what services to offer in order to be successful with target clients. What is more, the algorithm will calculate and project the percentage rate of success for each service provider. These insights will help service providers work efficiently on our platform.
While many other industries, such as the health and banking sectors, have already adopted Machine Learning to a high extent, the adult service industry is only taking its first timid steps into the world of artificial intelligence. We believe that adopting a complex ML framework can help us become leaders of the erotic service industry, change the usual business patterns and shape a dream service experience for our platform users.
Let's keep in touch to see what else we've got to share. Follow us for more articles explaining how ExtraLovers is shaping the service industry by adopting today's hottest technologies.
|
The First Erotic Service Platform to Use Complex Machine Learning Framework
| 15
|
the-first-erotic-service-platform-to-use-complex-machine-learning-framework-1aed3bbb25d5
|
2018-04-30
|
2018-04-30 07:52:23
|
https://medium.com/s/story/the-first-erotic-service-platform-to-use-complex-machine-learning-framework-1aed3bbb25d5
| false
| 917
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Neringa @ ExtraLovers
|
Copywriter
|
e8501225dec1
|
neringaEL
| 10
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-23
|
2018-05-23 10:56:29
|
2018-05-23
|
2018-05-23 11:01:07
| 2
| false
|
tr
|
2018-05-23
|
2018-05-23 11:02:48
| 8
|
1aed89ca6aad
| 1.522956
| 0
| 0
| 0
|
The "Internet of Things" (IoT), the trending concept of recent years, has given rise to many applications and lines of business. This…
| 5
|
4 — DATABROKER: PROBLEMS AND SOLUTIONS
The "Internet of Things" (IoT), the trending concept of recent years, has given rise to many applications and lines of business. A simple example is a system that lets you control the electronic devices in your home from your smartphone; in other words, an object gains functionality through an internet connection. With the emergence of this concept, IoT sensors and their maintenance have begun to generate hundreds of billions in commercial activity every year. While the very high investment cost is a problem in itself, recording and storing the resulting data presents another challenge.
As I noted earlier, last year's IoT trade volume exceeded 600 billion dollars, and it is projected to double next year. In a single year, roughly 10 billion sensors were installed, and that number is expected to quadruple next year.
Beyond installation, there are also difficulties in storing the data. Most of the stored data sits locked away and unused. In fact, this data could be used for personal purposes, or it could be processed and resold at a different value.
THE SOLUTION
The Databroker system follows a "pay as you grow" approach. This prevents companies that believe in their idea but are small in scale from being crushed from the outset by the heavy terms of powerful investors. Based on the data coming from IoT sensors, the aim is to give them the same means and opportunities as large companies.
As for data storage, stored but unused data will be integrated into new use cases, and even put to work in areas no one had previously considered.
While doing this, the existing infrastructure of the telecom companies that transmit sensor data will be used, and a direct connection to the end user will be established.
Through the methods described above, Databroker offers solutions to the main IoT-related problems in the market. I will continue to cover the project details. You can use the links below to find more information about the Databroker project.
Website: https://databrokerdao.com/?ref=btctalk
Whitepaper: https://databrokerdao.com/whitepaper/WHITEPAPER_DataBrokerDAO_ENG.pdf
Telegram: https://t.me/databrokerdao
Facebook: https://www.facebook.com/DataBrokerDAO/
Twitter: https://twitter.com/databrokerdao
ANN: https://bitcointalk.org/index.php?topic=2113309.0
BOU: https://bitcointalk.org/index.php?topic=2909180.0
My BitcoinTalk Profile: https://bitcointalk.org/index.php?action=profile;u=1780407
|
4 — DATABROKER: SORUNLAR VE ÇÖZÜMLER
| 0
|
4-databroker-sorunlar-ve-çözümler-1aed89ca6aad
|
2018-05-23
|
2018-05-23 11:02:49
|
https://medium.com/s/story/4-databroker-sorunlar-ve-çözümler-1aed89ca6aad
| false
| 302
| null | null | null | null | null | null | null | null | null |
Money
|
money
|
Money
| 35,618
|
Burak Koçyiğit
|
Industrial Engineer / Cryptocurrency Enthusiast
|
34eedb2284dc
|
burakkocyigit1
| 927
| 1,203
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
5e2d825b68f3
|
2017-10-09
|
2017-10-09 21:55:50
|
2017-10-09
|
2017-10-09 22:24:28
| 1
| false
|
en
|
2017-10-10
|
2017-10-10 20:01:15
| 0
|
1aed97ba1670
| 4.430189
| 7
| 0
| 0
|
Artificial whaaa!?
| 4
|
The case for Artificial Consciousness
Artificial whaaa!?
The word “consciousness” is often infused and confused with concepts that are metaphysical, supernatural, mysterious, or related to religion and the soul, so I want to first clarify what it means. The word has a precise definition based on the computational theory of mind, which posits that mental processes are the result of computations performed by the brain. The definition most commonly accepted by cognitive scientists is -
Consciousness is the ability of an agent to have qualia, i.e. subjective experiences
Now, the ultimate goal of Artificial Intelligence is the emergence of human or super-human intelligence. The field of AI has been focused mostly on the “easy” problems of AI — logic, learning, perception, memory, planning and language. On the other hand, the “hard” problems of AI — awareness, emotions and motivation — have been largely ignored by the AI research community for various reasons.
The “hard” problems of AI map well to the problem of consciousness because awareness, emotions and motivation are required for subjective experiences. Indeed, we would not ascribe human-level intelligence to any agent that does not demonstrate consciousness. For example, they will not be able to pass intelligence tests such as the Turing test without having subjective experiences and the ability to communicate about them.
To reach the goal of human level artificial intelligence, we need to solve artificial consciousness.
Solving the “easy” problems of AI is “narrow AI”.
Solving both “easy” and “hard” problems is “general AI”.
Is Artificial Consciousness possible?
Yes.
Given the exponentially growing computing capabilities, artificial agents would eventually be able to act like they are conscious. At that point, humans would have an overwhelming impulse to attribute consciousness to them and the impulse itself is the only evidence needed to say that the agents are indeed conscious. That is the same test humans use to ascribe consciousness to fellow humans.
Relationship with narrow AI research
There have been fantastic advances in narrow AI, especially during the last decade using deep neural network techniques. I consider much of this as side branches or tangents to the path that would lead to general AI. In other words, these advances are not likely to get us closer to general AI. Why?
Substrate —
Consciousness is possible when sensory, memory, attention, emotion and other subsystems are able to communicate and deeply influence each other, so any system that demonstrates consciousness must have a high degree of integration among its modules. Such integration is possible only when the subsystems are built on the basis (substrate) of similar algorithmic and representational mechanisms. Algorithms and models designed specifically to solve one type of problem (narrow AI) are not likely to be suitable for building conscious agents. We need to find a common architecture for solving all the problems of AI.
Once we have a substrate that enables us to tackle the “hard” problems, we will need to solve/implement the “easy” problems on top of that substrate.
We will be able to take hints from narrow AI solutions, but won’t be able to plug-and-play those solutions.
Capabilities —
A general AI system will be less efficient and less capable at solving narrow AI problems than a corresponding narrow AI solution. Narrow AI has already delivered solutions to cognitively challenging problems at super-human capabilities (think natural language translation, image style transfer, board games, and many other problems). They will continue to produce further impressive results.
A general AI system will likely use narrow AI techniques as tools instead of as modules.
Also, general AI systems will take at least a decade of algorithmic and computing advances to become capable of doing anything interesting.
What will it take to create Artificial Consciousness?
To create Artificial Consciousness followed by Artificial General Intelligence, we will need to bring together insights from several disciplines. Here is what I am looking at -
Neuroscience: Learn from the only known implementation of general intelligence; feed-forward learning;
(deep) neural networks: Various techniques such as convolutions, scalable / GPU / compute-graph implementations; open source software ecosystem, etc
Spiking neural networks: what do spikes represent? cooperation between spike frequency based (Hebbian) and spike time based (STDP) learning; delay learning (polychronous) systems;
Hierarchical pattern memory: Plausible functional model of cognition; forward and backward predictions; pattern creation, refinement; temporal-difference-type reinforcement learning
Dynamical systems: Functional behavior and control of a collection of interconnected units; emergent behavior; equilibrium points and attractors
Homeostatic equilibrium: Homeostatic equilibrium as a fundamental goal; use of emotions as levers to achieve the goal; eschew most hyper-parameters and instead use energy, time and input constraints
Emotions: Neurobiological basis and dynamics of emotions; building up a full assortment of emotions from basic set of environment/genetics driven feedback and the goal of homeostatic equilibrium
Self-organization: building models from the ground up within given constraints to match the complexity of the environment
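As one concrete illustration of the spiking-network ingredient in the list above, a pair-based STDP weight update can be sketched in a few lines. The function name and all parameter values are illustrative choices, not a specification of any particular model:

```python
import math

# Pair-based STDP: the weight change depends on the relative timing of
# pre- and post-synaptic spikes. All parameters are illustrative only.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pairing, strengthen
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fires before pre: anti-causal pairing, weaken
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_delta_w(10.0, 15.0) > 0)  # True: causal pairing potentiates
print(stdp_delta_w(15.0, 10.0) < 0)  # True: anti-causal pairing depresses
```

Delay-learning (polychronous) systems build on precisely this timing sensitivity, which frequency-based Hebbian rules alone cannot capture.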
My near term goal is to put together a step by step recipe for creating artificial consciousness, with full conceptual clarity along with basic software implementation of each step.
But won’t the intelligent machines kill us all?
Intelligent machines with no consciousness are simply tools. Humans can use them as weapons with no opposition coming from the machines. This scenario is incredibly dangerous — imagine nuclear-weapon-level destructive capabilities in everyone’s pocket.
On the other hand, if the machines are conscious and are trained (raised?) in a (nurturing?) environment with appropriate feedback (parenting?), they will overwhelmingly turn out to be moral citizens. Yes, AI babysitting is likely to be a high skill job of tomorrow. They would find it offensive to harm humans and would make positive contributions to society. Note that it will still be possible to create machine versions of psychopaths and killers.
Our best hope is to have powerful conscious machines on our side.
Hack-proofing AI systems
Imagine terrorists hacking the centralized control system and crashing 10,000 trucks. — a user comment from online post discussing potential issues with a fleet of fully automated trucks
“Radicalized” or “brainwashed” humans are known to have driven vehicles into crowds with the intent to kill. That is in essence the same as being hacked. The difference seems to be that it takes more effort to radicalize than to hack, because the subject offers substantial resistance in the first case and none in the second. The underlying difference is consciousness, or the lack thereof. Conscious agents act to restore their state of homeostatic equilibrium, which for a well-adjusted agent means being a moral citizen.
We need future AI systems to have consciousness to make it harder to misuse them.
This is why creating artificial consciousness is going to be critical to humanity’s future in the coming decades.
|
The case for Artificial Consciousness
| 71
|
the-case-for-artificial-consciousness-1aed97ba1670
|
2018-04-01
|
2018-04-01 17:06:17
|
https://medium.com/s/story/the-case-for-artificial-consciousness-1aed97ba1670
| false
| 1,121
|
My journey as an independent researcher creating Artificial Consciousness
| null | null | null |
Creating Artificial Consciousness
| null |
creating-artificial-consciousness
|
AI,AGI,ARTIFICIAL INTELLIGENCE,ARTIFICIAL CONSCIOUSNESS,STRONG AI
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Amol Kelkar
| null |
4ccf29a9f00f
|
amol.kelkar
| 10
| 26
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-05
|
2018-01-05 16:48:13
|
2018-01-05
|
2018-01-05 16:54:03
| 0
| false
|
en
|
2018-03-11
|
2018-03-11 16:30:15
| 0
|
1aee9fd4eb78
| 2.679245
| 1
| 0
| 0
|
Today, the connotation of being a doctor comes with an almost reverent tone; a sign of a longstanding and arduous process that creates a…
| 4
|
How do patients view their doctors?
Today, the connotation of being a doctor comes with an almost reverent tone; a sign of a longstanding and arduous process that creates a source of truth for anything medical. Patients provide a list of symptoms and await a diagnosis from someone who must know the cause of their ailment. After all, what use is all of the schooling if they cannot fix the problem? That is their job, isn’t it?
Well . . . yes and no.
The breadth of the medical field is as diverse as the problems it is presented with. Professionals range from on-site technicians and family doctors to disease experts and everything in between. Therefore, it is the overarching goal of medical institutions to get the patient in front of the right person with as few intermediary steps as possible. It is important to identify the correct person for the job; a brain surgeon is not needed for everyone complaining of a minor headache. The question arises: how do we know?
There are several approaches to this problem:
1. Ask the patient. Unfortunately, most of the time the patient does not “speak the language” or does not know which information is useful. A doctor may ask, “Do you have any past conditions I should know about?” This invariably leads to the patient describing every medical encounter they have had since childhood, while not mentioning the one detail that would help identify the problem.
2. Schedule time with a general doctor, and refer as needed. This approach is also used today but comes with its own set of problems. Doctors are overworked and expected to know a tremendous amount of information about a wide range of illnesses. These meetings are brief by necessity, and do not allow for a comprehensive view of the patient.
3. I do not know of a third approach; I just wanted to have three points (see, I am learning to be a consultant).
Clearly, there are improvements to be made. A system that allows for the collection and aggregation of patient information would be of extreme value to doctors with an already full plate. Better yet, if the information could be filtered and designed to present the doctor with relevant information, diagnoses could be made with greater accuracy, reducing the workload on our physicians. Such a system would, by necessity, be able to register medical information in the form that it exists today. The complexity of the healthcare system’s data structures is a result of the methods of communication used within our hospitals and medical centers. Handwritten documents, verbal conversations, and other forms of unstructured data make up the majority of how information is translated and stored. The ability to navigate this system is difficult for humans (requiring years of higher education, residency, and practice), and nearly impossible for machines.
Therein lies the opportunity. Advances in cognitive computing, coupled with an increased regulatory environment and access to mobile devices, create a rudimentary framework for healthcare systems and providers. Additional information resulting from the imposed regulations will allow successful organizations to develop their analytic abilities. Through integration with mobile devices, a huge amount of patient-specific data will be made available. Industry leaders recognizing these trends should consider how to redevelop the model that has been standard for so long.
This is not a complete review of the complex landscape at the intersection of healthcare and technology, nor was it intended to be. Instead, it should act as a starting point for further consideration of how the expectations of healthcare will align with current and future realities. The separation between the final consumer of the product (the patient) and the manufacturer (pharmaceutical companies) is becoming increasingly obvious. Prescription drug development and marketing depend on a multitude of factors, including patent rights, research costs, and a long approval process. These nuances are lost on the average American, who is often unaware of the underlying motivations behind a doctor’s recommendation. The introduction of a system that can navigate the immense volume of patient information will allow doctors to think critically about various scenarios, rather than acting as human textbooks.
It will support the notion of the educated doctor, while allowing them to memorize less than ever before.
|
How do patients view their doctors?
| 1
|
how-do-patients-view-their-doctors-1aee9fd4eb78
|
2018-03-11
|
2018-03-11 16:30:16
|
https://medium.com/s/story/how-do-patients-view-their-doctors-1aee9fd4eb78
| false
| 710
| null | null | null | null | null | null | null | null | null |
Healthcare
|
healthcare
|
Healthcare
| 59,511
|
Philip Mohun
|
Bioengineer, Consultant | Analytics, Blockchain, AI
|
fb34cd73178f
|
philmohun
| 16
| 20
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-15
|
2018-03-15 05:14:51
|
2018-03-15
|
2018-03-15 05:09:47
| 1
| false
|
en
|
2018-03-15
|
2018-03-15 05:17:54
| 6
|
1aef470e3cc4
| 3.154717
| 0
| 0
| 0
|
Howdy!
| 5
|
The Rare Birds AI Dispatch #9
Howdy!
March 27. Save the date. On that day, we’re hosting an exclusive event about open banking, where banking executives, fintech leaders and journalists will converge and share recommendations on how best to embrace this upcoming regime.
You’ll be in good company. Among our featured participants are:
Stuart Stoyan, Chair of FinTech Australia
Steve Hooker, Olympic Gold Medallist
Jade Clarke, Director of Data Innovation at Westpac
Sophie Elsworth, Personal Finance Journalist at News Corp
Jason Potts, Director of the Blockchain Innovation Hub at RMIT
Alexandra Tullio, Head of Retail Banking at the Bendigo and Adelaide Bank Group
Lisa Schutz, Managing Director at Verifier and InFact Decisions
Want a free ticket? Send us a message at info@rarebirds.io and our team will see what we can do!
But before you send out that email, check out our round-up of headlines in financial services and AI this week!
Danielle Szetho steps down as FinTech Australia CEO
Late last week, Szetho announced her resignation after a two-year run as FinTech Australia’s inaugural CEO. Sarah Worboys, her committee co-lead in Startup Victoria Female Founders, steps in as interim chief executive.
FinTech Business reports Szetho plans to rest and travel before she takes on new career challenges. “It has been an honour and a privilege to have served Australia’s fintech industry as its inaugural industry association CEO, and to have been part of its founding journey and establishment as a real force for positive policy change and industry growth,” Szetho said.
This workout will help you stay in shape — financially
The founders of the Financial Gym want to help people learn how to become financially savvy. Inside this sprawling ‘gym’, trainers teach you not squats or lunges, but how to identify your wealth goals and develop a regimen to achieve them.
Standout aspects of this program include a ‘financially naked session’ where clients are asked a number of honest questions to better assess their status. There’s a well-stocked bar, comfy couches and a money-themed playlist blasting from the speakers.
“I realised that [people] wanted to talk to someone about how to manage their money,” says founder Shannon McLay. “They wanted to go to a place. And I realised that most people don’t have a place that exists where they can talk about their finances.”
Building better customer ID systems is next on Australian CEOs’ to-do lists
Digital disruption isn’t only increasing efficiency in financial services — it’s also ushering in a whole slew of security risks for customers. A new international study affirms that CEOs consider this an urgent challenge, and they are taking serious steps to fix it.
According to the study, 95% of firms in Australia are concerned with their ability to identify customers effectively. That’s why, among financial service organisations, 92% have placed a high to critical priority on improving their ability to identify customers more effectively in the next year.
The report was conducted by Forrester Consulting and commissioned by identity data intelligence specialist GBG. Traditional firms and fintechs from Australia, China, Singapore, the UK and the US participated in the survey.
Could this be Slack’s next big rival?
Google’s Hangouts Chat exited beta last week and is now available for all G Suite users. While it faces stiff competition from the likes of Slack, Hangouts Chat certainly has an edge because it is already tightly integrated into Google’s other products. It includes tools and functions such as group video chats, the ability to upload items from Drive, collaboration between Docs, Sheets or Slides, and archiving, exporting and saving of Chat data (for admin users).
Google’s AI will also be built into Hangouts Chat — 25 bots have been announced so far, such as @Meet to help you plan meetings or @Drive to notify you regarding comments and changes to your documents. Advanced users will also be able to build their bot integrations on top of Chat.
Can AI-driven insurance reduce gun violence?
Gun control is a big issue right now, particularly in the US. In the midst of heated debates about how best to stem violence without curtailing individual rights of gun owners, WIRED’s Jason Pontin proposes a tech-driven compromise: requiring gun owners to purchase insurance to cover for the “cost of their choices”.
“Artificial intelligence and data analytics could price insurance so that guns did not become prohibitively expensive for too many Americans,” writes Pontin. Seems fair — perhaps too fair?
If you’d like to receive our Dispatch directly in your inbox, send us a request at info@rarebirds.io and we’ll be sure to include you in our upcoming newsletter.
Originally published at rarebirds.io on March 15, 2018.
|
The Rare Birds AI Dispatch #9
| 0
|
the-rare-birds-ai-dispatch-9-1aef470e3cc4
|
2018-06-14
|
2018-06-14 03:04:06
|
https://medium.com/s/story/the-rare-birds-ai-dispatch-9-1aef470e3cc4
| false
| 783
| null | null | null | null | null | null | null | null | null |
Fintech
|
fintech
|
Fintech
| 38,568
|
Rare Birds
|
We build software with brilliant, globally connected teams to make the world a better place. http://rarebirds.io
|
93402fbb6135
|
rarebirdslabs
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-21
|
2018-06-21 05:53:46
|
2018-06-21
|
2018-06-21 05:56:48
| 1
| false
|
en
|
2018-06-21
|
2018-06-21 05:56:48
| 1
|
1af11e071515
| 1.928302
| 0
| 0
| 0
|
In an increasingly competitive business world, companies across the industries are expanding the scope of technology in their operations to…
| 5
|
AI AND THE FUTURE OF ENTERPRISE MOBILITY
In an increasingly competitive business world, companies across industries are expanding the scope of technology in their operations to stay ahead of the curve. Traditional processes are gradually being replaced by smart technologies, and the growing adoption of mobility is shrinking the physical office space. As mobile technologies continue to deliver innovative features and business-related functionality, the smartphone is taking on bigger responsibilities and becoming an entire workspace in itself. Whether it is accessing a knowledge source or handling internal and external communication, it is all possible through a mobile device today.
Since people’s engagement with mobile devices keeps increasing, and various studies have shown that people are spending more time on their smartphones, businesses are realizing mobility’s potential to bring a higher level of operational efficiency and employee productivity. Leveraging mobile apps, innovative features and location-based services, enterprises are attempting to redefine the work environment and enhance the employee experience.
Mobility has gained rapid momentum in recent years, and there are several case studies from around the world where it has been instrumental in generating higher output and increasing overall organizational performance. The capabilities of enterprise mobility are well beyond mere speculation today, and its application is no longer restricted to big enterprises. More and more medium- and even small-sized businesses are investing in crafting a strategy around it.
AI IN EVERYDAY INDUSTRY LIFE
A few AI assistants deserve mention here: Apple’s Siri, Amazon’s Echo and Microsoft’s Cortana are everyday names for busy workers in any enterprise. In recent years, AI has developed rapidly and become an indispensable go-to tool for almost everyone. Today, it is hard to get through a busy working day without AI at the helm. Voice-activated helpers, facial recognition software, carefully written and time-tested algorithms, and smart applications driven by AI are everyday things. Given AI’s growing consumer engagement through personal devices, enterprise mobility is definitely in for ground-breaking experiences.
Also, as the data shows, IoT, AI, and BYOD topped the list of technological advancements that organizations embraced for increased productivity in 2017. Industries across the board that harnessed the power of IoT and portable technology opened doors for additional growth and delivered significant results, all through accelerated employee contributions and teamwork. Mobile applications powered by bot technologies and AI are making tasks easier, faster, and less error-prone for more and more enterprise workers. These advancements are empowering enterprises to overcome their limitations and reach heights of success that seemed impossible a few years ago.
Continue Reading: AI AND THE FUTURE OF ENTERPRISE MOBILITY
https://medium.com/s/story/ai-and-the-future-of-enterprise-mobility-1af11e071515
[Post metadata: slug ai-and-the-future-of-enterprise-mobility-1af11e071515; updated 2018-06-21 05:56:49; 458 words]
[Tag: Mobile App Development (30,407 posts). Author: TechJini Inc (@Techjini), "Google certified developer agency specializing in Web and Mobile Application Development, Cloud, Internet-of-Things and Big Data services."]
[Next post metadata: id 1af1297f1d17; created 2018-02-08 05:44:30; first published 2017-12-29 00:25:36; latest published 2018-02-08 05:45:01; language en; reading time ~2.5 min; 7 links; subtitle: "How can you drive your business toward success in the next quarter?"]
Tech resolutions every business should adopt
How can you drive your business toward success in the next quarter?
Knowing the trends by heart won’t cut it — you need a solid plan. Around this season, we like to call them resolutions.
Mark Zuckerberg’s annual resolutions are always focused on improving his personal life, but they’re also connected to growing Facebook. Take a page from his book and include a major tech resolution in your personal list this year.
Need inspiration? Here’s what we’ve come up with for our own lists:
Redefine your digital presence beyond social
Engagement, impressions and reach will continue to be important in marketing and customer service efforts in 2018. But it’s an old game at this point. Your customers and competitors already expect you to be social media savvy.
To win, you need to make new rules. Create better experiences for your customers. Aside from social, explore other opportunities like content marketing, artificial intelligence (AI) and services automation. Be a digital-first business — as all successful companies should be next year.
Embrace data
Even if you’re not actively gathering it, the amount of data your customers generate will only get bigger in 2018. Their data comes from their online searches, active time spent on devices, apps downloaded and so on. It would be a complete waste if you didn’t harness it all!
Data helps you understand your customers’ needs better. For example, looking into what your customers search for on Google helps you figure out their interests. Then you can roll out marketing campaigns or new products connected to these. Investing in expensive software is not required — there’s a wealth of tools from Google and Facebook already available.
A word of caution: big data alone won’t give you superpowers to save the world (or your business). You could have access to every user’s preferences and habits, but that knowledge would be useless without good customer service. The trick is to keep the good stuff (personalisation, empathy, keen decision-making) and weed out the bad (waiting time, poor info funnels, high drop-off rates).
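The query-mining idea above (collect what customers search for, rank it, then act on the top interests) can be sketched in a few lines of Python. The query data and the `top_interests` helper here are invented for illustration and do not come from any real analytics API:

```python
from collections import Counter

# Hypothetical search queries exported from your analytics tool.
queries = [
    "vegan protein powder", "running shoes", "vegan protein powder",
    "yoga mat", "running shoes", "vegan protein powder",
]

def top_interests(queries, n=2):
    """Rank the most frequent search terms to surface customer interests."""
    return Counter(queries).most_common(n)

print(top_interests(queries))
# [('vegan protein powder', 3), ('running shoes', 2)]
```

In practice the same counting-and-ranking step would run over exports from tools like Google's free analytics products rather than a hard-coded list, but the principle is identical: frequency is a cheap, serviceable proxy for interest.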
Dive into AI
You might think that fancy tech is the last thing your startup needs, but AI should actually be one of your first priorities in 2018.
You may not realise it, but AI and machine learning are already becoming mainstream in major industries such as healthcare, manufacturing and telecommunications. For your business, intelligent machines can help transform task management, marketing and online customer service. For example, why not try augmenting your round-the-clock after-sales team with an efficient, witty chatbot? Chatbots have been shown to increase revenue and reduce costs at other startups!
Give your employees room to innovate
Building a great team isn’t just about hiring the best talent (although that’s always the most important factor) — it’s also about allowing the existing members to grow and learn on the job.
Innovation doesn’t have to be a CEO’s sole burden. The ‘next big idea’ could come from anyone in any department! So make ideation sessions, workshops and learning days mandatory for your team this year. Here at Rare Birds, employees are allowed some time off every month to pursue research or projects that grow their skills. Push and empower your team to think beyond the box — it’s a win-win.
“Meaningful innovation does not need to be based on outright invention,” innovation expert and author Gabor George Burt said during a TEDx talk. This year, make it your mission to unlock bold new ways to get things done. Let’s do it together!
Originally published at rarebirds.io on December 29, 2017.
[Post metadata: title "Tech resolutions every business should adopt"; slug tech-resolutions-every-business-should-adopt-1af1297f1d17; updated 2018-06-08 09:11:41; https://medium.com/s/story/tech-resolutions-every-business-should-adopt-1af1297f1d17; 604 words. Tag: Artificial Intelligence (66,154 posts). Publisher: Rare Birds (@rarebirdslabs), "We build software with brilliant, globally connected teams to make the world a better place." http://rarebirds.io]
[Next post metadata: id 1af3e9c619c9; created and published 2018-08-29 01:43:38; language en; reading time ~3.2 min; 1 link; subtitle: "DOWNLOAD in <PDF> Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques: A Guide to Data Science for Fraud Detection…"]
Pdf Download eBook Free Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques: A Guide to Data Science for Fraud Detection By Bart Baesens [Free Ebook] #ebook
Link https://authorbestsipub.icu/?q=Fraud+Analytics+Using+Descriptive%2C+Predictive%2C+and+Social+Network+Techniques%3A+A+Guide+to+Data+Science+for+Fraud+Detection
[Post metadata: title "Pdf Download eBook Free Fraud Analytics Using Descriptive, Predictive, and Social Network…"; slug pdf-download-ebook-free-fraud-analytics-using-descriptive-predictive-and-social-network-1af3e9c619c9; updated 2018-08-29 01:43:38; https://medium.com/s/story/pdf-download-ebook-free-fraud-analytics-using-descriptive-predictive-and-social-network-1af3e9c619c9; 836 words. Tag: Data Science (33,617 posts). Author: frasertravers6 (@frasertravers_70356).]
| null | null | null | null | null | null |
0
| null | 0
|
7b837cf1fd73
|
2018-07-20
|
2018-07-20 22:50:53
|
2018-07-23
|
2018-07-23 06:23:10
| 1
| false
|
en
|
2018-07-24
|
2018-07-24 09:02:06
| 3
|
1af3ec544917
| 2.139623
| 21
| 0
| 0
|
My new home is in Florida.
| 5
|
Changing is scary. Not changing is scarier: A milestone in career & life with new challenges in Florida.
My new home is in Florida.
I recently moved from Microsoft in Seattle to a small company in Florida. It's not just because I can be closer to Annie, my lovely wife, who now works at Estée Lauder Companies in Miami; I am at a point in life where maintaining the status quo scares me more than the comfort it brings me.
Now and then
Moving forward, I’m now the Director of Machine Learning & Product in this new FinTech company to drive ML initiatives and scale the team and culture.
Microsoft was a second home to me. Back in 2006–2007, I worked at Microsoft as a vendor, assisting a Developer Evangelist. I then joined Microsoft as a full-time employee in 2011 as a Product Manager, right after getting my Master's degree from the University of British Columbia in Canada. I spent the best years of my 20s, more than seven of them, working at Microsoft in Seattle.
Looking back, moving from Vancouver, BC, Canada to Seattle, WA was easy. It was only 150 miles away. The weather was similar. There were many friends with similar backgrounds and tastes in Seattle. The job had a good career trajectory. It was a no-brainer to make the move. Nothing was truly scary for me. The lifestyle was stable and comfortable. And since then, there have been interesting challenges at work on the Dynamics CRM team and the Bing Ads Platform team that allowed me to contribute and grow.
But at some point in life, you may ask yourself the same thing like I did:
What are the regrets you know you will have if you did or didn’t do something?
For me, I’d regret not having lived in another city to expand my horizons and life experience. I’d regret not trying out a completely different company to form a broader perspective. I’d regret not pushing myself further in my career, facing a different set of challenges that demand a new level of skills.
After talking to many close friends to seek advice, the answer to my next steps in life and career became clear to me. So here it is, in the summer of 2018: saying goodbye for now to old friends in Seattle and at Microsoft, and getting ready to redefine myself in the next chapter in Florida.
My takeaway
Life is a canvas, and this masterpiece of yours is unique. It is an ongoing journey and you can paint whatever you want: raising kids, making bold career moves, turning your hobby into a passion, investing in developing your knowledge and skills, anything. It is defined by every moment in your life with desire, motivation, actions, or lack of action.
To online readers, friends, and family, I say: No matter where you are in your life, it is not too late to create YOUR masterpiece.
“A yellow “Go up and never stop” neon with a long arrow against a black background” by Fab Lentz on Unsplash
Thanks for reading. If you enjoyed this article, feel free to hit that clap button 👏 or comment below.
Oh we’re hiring software engineers, product managers, & ML engineers ;)
[Post metadata: title "Changing is scary."; 80 claps; slug changing-is-scary-1af3ec544917; updated 2018-07-27 16:32:16; https://medium.com/s/story/changing-is-scary-1af3ec544917; 514 words. Publication: Noteworthy - The Journal Blog ("The official Journal blog", blog.usejournal.com, @usejournal); tags: STARTUP, PRODUCTIVITY, ENTREPRENEURSHIP, TECH, TECHNOLOGY. Tag: Life (283,638 posts). Author: Gordon Chang (@gordonchang), "Tech blogger into product management, UX, and machine learning, who also enjoys capturing little moments of life and inspiration."]
[Next post metadata: id 1af465ddd2c3; created 2018-06-09 20:33:04; first published 2018-06-09 21:01:38; latest published 2018-06-09 21:39:10; language en; reading time ~11.4 min; 31 recommends; 4 responses; subtitle: "So I am not quite awake this morning and at the coffee station in a hotel lobby…"]
Scaling Beyond Human Nature
Everyone will reach a point where their time for understanding data and its analysis competes forcefully with their time needed to make and rationalize decisions.
So I am not quite awake this morning and at the coffee station in a hotel lobby. You’ve been there, bleary-eyed. A light conversation over selecting sweeteners and creamers between an anesthesiologist and a pilot leads to a discussion of the similarities of their jobs. Both aviation and anesthesia require extensive preparation prior to the execution of the work… without proper preparation, would you get on the plane or lie down on the surgical table? Me neither.
Statistically, bad outcomes occur with planes on landing and takeoff, and with patients upon induction and emergence… and almost nothing happens in between, cruising at 37,000 feet during flight or watching the surgeon work tirelessly, capillary by capillary, during an operation. Both jobs create tons of data before, during and after each success and even more data after each failure. Both function in very complex inherently dangerous settings.
I’m the anesthesiologist and yes, I have actually watched an operation for hours, one capillary at a time as my conversation companion at the coffee station has sat for hours and hours at cruising altitude watching the sky be really blue. The pilot and I both have kids, approximately 15–25 years old, so the conversation turns from our jobs to the future, the kids and their jobs. What do you teach your kids?
His kids do not want to be pilots, and three of my four kids do not want to be doctors. Three weeks ago, the one that does want to be a doctor expressed his wish to be a research neurosurgeon for the military, working on cybernetic devices and neural interfaces to enhance and expand each soldier’s capabilities while making them nearly indestructible… he’s fourteen and may have watched Iron Man too many times, or perhaps his Xbox time needs to be reduced.
When I asked the boy, ‘don’t you want to operate on people and help them?’… his response was, ‘sure, but only if I do not have to deal with a hospital or the administration or worry about getting paid and not deal basically with the system or its rules that both you and mom hate so much in medicine.’ You should know two things now: His mother is a doctor and kids listen to every conversation you have when you complain about work.
Knowing I am about to enter an argument with ‘he who knows all’, the American teenager, I approach it from his point of view… ‘Son, even in the military you will have a system, and rules, and an administration, and that’s just the way it is.’ I did not mention the fifteen additional years of education he’ll endure, nor the pleasantness of the White Ivory Tower of Education… and that is a system too, a large entity with administration and rules. After all, he’s fourteen. No need to overwhelm him.
‘No, I won’t,’ he replies instantly. Cue my uncontainable parental eye roll… yet, he keeps speaking and explains it to me, ‘Dad, everyone in my generation has to learn to invent, to create and to keep creating and inventing until artificial intelligence can do it better than you can and then you have to learn something else.’ Okay. I felt old for a moment and I’d be lying if I said I did not see a flash of Arnold Schwarzenegger in the Terminator role for an instant in my mind’s eye. The pilot and I laughed at my kid wanting to create the apocalyptic version of Iron Man. Last summer the same kid wanted to be an actor. Kids.
We refilled our coffee and kept chatting. The pilot said the airline industry is ready to replace co-pilots in the cockpit with ground-based co-pilots, each simultaneously available to fly perhaps dozens of aircraft. They will operate in redundancy with narrow-scope artificial intelligences that can already take off, fly, and land planes. I mirrored this sentiment, as the future outlook for anesthesia is one where doctors oversee larger numbers of rooms, with anesthesia technicians in each room with the patients. The doctors are given a heads-up on issues by both the technicians and smart applications (narrow-scope artificial intelligences) constantly watching the vital signs, the patient, the technician, and the surgeon.
However, in my opinion, neither will happen right away. He asked me why? Well, because there is a bigger problem with our industries, well, really, all of the industries and entities. Human nature may be near its limit and can no longer scale and adapt to the levels of complex hierarchy required to really fully move to those levels of automation… at least not with humans in them. It is not just the airlines and anesthesia, it’s businesses and governments and society’s systems everywhere, globally, and the failures are just beginning to reach the point where everyone will notice. Everyone noticed two Southwest airline engines coming apart in flight… because that is not supposed to happen.
The pilot said he had time and asked me to explain. Ask my kids; I am not one to shy away from giving a good lecture!
Okay. Here we go. Scaling beyond human nature will be difficult if not impossible.
From the farm to the factory to the office to what’s coming next, our human society scaled itself to meet the needs and wants of each new generation’s endeavors and advances over the past two centuries. Okay, obviously for millennia before that as well but I want to get to today in a reasonable amount of time… after all ‘time’ is part of the problem. The scale of human enterprise continues to climb inexorably and there is something wrong, something off. You can feel it in your own lives. A frustration, a nagging desire to perhaps really know what is going on, maybe, others describe it as always having to catch up. No, it is not the Matrix.
The Agricultural Age eventually gave way to the Industrial Age. Farmers watched, and maybe encouraged, their children become factory workers. Factories sent machines back to the farms to replace and enhance farm labor. The result: farms required fewer human laborers. Factories required a larger human hierarchy to function than a farm.
The hierarchy of family farms with parents on top and children (labor) on the bottom was the blueprint for the factory hierarchy where managers (Executive, Financial, Operations) on the top decided and instructed workers on the bottom. Inventions, Improvements, Complaints and Problems all filtered as data from worker(s) to manager(s). Analysis of the data by direct managers and then decisions by higher managers were eventually returned from the top to the bottom and the cycle would repeat. Again and again a constantly improving and expanding hierarchy led to bigger and bigger endeavors that led to incredible factories with thousands of workers. More and more data was created and we humans then created the tools to work with more and more data.
The Industrial Age led to the Information Age and the accompanying Technology Age. Factory workers sent their children to college to become technicians and office workers and the future of business and government, law and medicine, teaching and learning. The technicians and office workers increased the factories’ productivity, reducing the cost of goods and services for everyone by dramatically increasing the output of each worker. Better information and increasing analysis reduced waste and vastly increased business profits, expanding the reach of entities to regions, then entire nations and, today, the world itself. The result: factories required fewer human laborers. Complex global entities required a larger hierarchy to function than even the largest factories on Earth. These entities have led to governments and businesses with millions of employees.
The Information and Technology Age will lead to the Automation Age. Perhaps it is here already; perhaps it is a generation or two away from truly impacting the world… I am not going to argue that today. Continuing from before, 20th-century office workers and technicians encouraged their 21st-century children to get as much education as possible in order to become or do something different and something more than themselves. Like the farmer and the factory worker before them, these technical and office-based parents saw that the people above them had more education and more specific training and licensing. Thus they, society, and governments have all encouraged today’s youth to embrace the advanced-degree educational environment. As before, these steps in advancement should help us scale up to the next level of complexity. But something is not right this time.
The ‘office’ has gone everywhere and also exists at anytime and every time, the work exists in continuality, data streams up our technology-enhanced business, government and system hierarchies and the decisions per day, per hour, per minute and per second stream back down. This is not just a phenomenon within our ‘working’ lives; our personal social technology-enhanced hierarchies require data collection, analysis and decision-making.
The biggest entities create and use exponentially more data and require a near infinite expansion of analysis to produce quick and accurate decisions. Critical decisions when seen collectively are needed in order to advance the entire purpose of the entity and in some cases, today… just to keep it running. In every system there exist limiting factors.
The Human Mind, its Nature, and Time.
One has to believe in the basic good nature of the human being to understand the problem. Today’s constructed systems and hierarchies controlling and running these complex global entities are full of good people. Data and analysis go up the human pyramid (even more and faster the more technologically-enhanced the pyramid) and decisions come back down. There is a thru-put at each level, generally data is going up, analysis is occurring and decisions are coming down, call it a ‘bandwidth.’ Go high enough in the hierarchal pyramids of today’s entities and you will reach a point where the time for understanding data and its analysis competes forcefully with the time needed to make and rationalize decisions.
Humans have a fantastic ability to categorize data, analyze it for threat or opportunity and decide how best to not get killed or to advance their needs and desires… this is our nature and our power as individuals and as a species. We can train ourselves to specialize in certain clusters of data, analysis and decision-making to add or make value to a human system… but it is finite, we, ourselves, exist through time and we all seek (consciously or not) the power to protect what we love… it is literally hardwired into our brains.
Remember the good nature of the human being you believe in… now think of all those times in your own life when you have observed someone (other than yourself) choosing to cut a corner. Ever heard someone say something is complete when they know, or should know, it is not? You know the person who chooses to over- or under-communicate in order to promote or shield himself or herself from an outcome or a reward. There have been many people and many times in your own life when you have witnessed another choosing to be less than the good-natured human being they can be most of the time.
For the sake of learning, let’s assume ALL those times happened for a ‘good’ reason… meaning the person chose to do what they did because of the good or lack-of-not-good it would do in their life both their working-life and personal-life.
Everyone, no matter what they endeavor to do, will reach a point where the time for understanding data and its analysis competes forcefully with the time needed to make and rationalize decisions, and they have to choose how to spend their time and energy. In a complex hierarchy… what would you do?
One could choose not to advance further into an organization and maintain the highest degree of fidelity possible at their job. One could change jobs hoping to find a different mix of data, analysis and decisions. One could choose to cut those corners, demonstrate they can outperform others in the organization and conveniently censor underperformances from that organization while also promoting those that help them move up and eroding those that hinder their advancement. Guess who moves up in today’s hierarchies? Then these people compete with one another all the way to the top! Now, it’s always kind of been that way, you and I both know this, but technology greatly magnifies this potential. In simpler times, clever people with time to see what others were doing could ‘catch’ these corner-cutters.
Let’s look at the very top. At that level of an organization, all data has been analyzed, collated, reanalyzed, scrubbed, buffered, massaged and presented by each of the levels below to the next one above it. All by humans with their own needs and desires, conscience or unknown, honorable or less than. The decisions made at the top only go down the organization through those same levels with increasing impact the lower one goes until, at the bottom of the pyramid, they feel the full force of decisions made. Power concentrates going up. Decision’s impact(s) concentrate going down. Those at the bottom have no power individually. Those at the top feel no impact from the decision they make.
Here is the very crux of today’s problem. How many errors or omissions, by negligence or intent, have influenced the data’s analysis on the way to the top? How many of those decisions, based on that data and its analysis, that need to be made per day, per hour, per minute and eventually per second are wrong and begin to erode the function of the entity?
Psychopaths are attracted to this structure because it promotes their behavior(s). Those more empathic human beings are repulsed by this structure and what they perceive it does to good people and to them. They simply tend to give up at the lower levels or get out of the psychopath’s way to the top. This imbalance eventually produces systemic failure. At first, it is small things involving small areas within an entity and maybe small groups of people. Collectively, these failures begin to travel up the pyramid where power accumulates. Enough failures within one entity and it will cease advancing; its very power based on inaccuracies and assumptions. More failures and it will barely remain running as an entity and at some point, failure will break it completely because the top has lost any power to control the whole because it no longer knows or cares what is happening.
What should happen at that point? Entities should be allowed to dissolve, break-up and be rebuilt in smaller more manageable versions and that used to happen… but it is not what happens today. Too big to fail really applies to those entities that are so big they have already failed or are continuously failing thus their continued existence artificially only promotes and creates more failure. These entities will never move forward as they are and you interact with more of them than you realize.
That’s why you will not see the automated airliners or automated anesthesia as quickly as my technologically savvy friends would like. Human nature itself is preventing the systems from scaling further. Why is it different this time? Why can’t it be like the move from the farm to the factory? Well, there is a limited amount of time for a human to learn. There is a limited amount of time in which to use that knowledge to create value for a society and be rewarded as an individual. The amount of knowledge required to create new value is expanding faster within a human lifetime than it can be organized and learned. Think about that for a moment.
Wait until humans begin actively sabotaging automation’s advances, as old jobs evaporate and new ones are created… maybe more than once during their… I would say ‘careers’, but I’m not sure that is what people will have anymore.
You see this failure in everyday items as well as in the massive global structures of power, like business and government. Take news programs: again, it is just data, analysis and decisions that influence what is reported… and yet the amount of knowledge available is just a bit too vast to be organized for everyone. The audience no longer has a common base of knowledge, so the news itself has been broken up for each outlet’s consumers.
Things will get far worse in the near future as we human minds attempt to adapt to today’s creation rate of knowledge, ideas, technology and automation. This is because the hierarchical pyramid itself no longer works in a system where information flows everywhere instantly. Instant flow allows competing entities to act or react to each other’s raw data. Those actions and reactions create new data for all the entities to consider and act or react to… and so on… the feedback loop consumes all the human bandwidth within each entity’s hierarchy, and the errors stack up until failure is achieved, sometimes magnificently.
And that is why, for now, pilots will stay in the cockpit and anesthesiologists will stay in the operating rooms. Pilots will continue to check and recheck the aircraft prior to flying it each and every time and anesthesiologists will check and recheck their machines and equipment prior to each anesthetic.
My wife then caught up with me, and I introduced my friend to her. Our parental duties calling, my wife and I left. The pilot called to me as I got on the elevator: ‘You should write that down. Do you have a solution?’ The elevator doors closed.
Yeah, I do actually. I just did it by writing this article and you reading it.
|
Scaling Beyond Human Nature
| 304
|
scaling-beyond-human-nature-1af465ddd2c3
|
2018-06-20
|
2018-06-20 20:06:22
|
https://medium.com/s/story/scaling-beyond-human-nature-1af465ddd2c3
| false
| 2,974
|
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
| null |
datadriveninvestor
| null |
Data Driven Investor
|
info@datadriveninvestor.com
|
datadriveninvestor
|
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
|
dd_invest
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Christopher Yerington
| null |
1a7f90b8b403
|
christopheryerington
| 32
| 17
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-26
|
2018-07-26 06:43:26
|
2018-07-26
|
2018-07-26 06:43:26
| 3
| false
|
en
|
2018-07-26
|
2018-07-26 06:45:14
| 4
|
1af58aedf81a
| 3.795283
| 0
| 0
| 0
|
Artificial intelligence has the potential to transform almost every industry. It is changing the whole scenario of the business marketing…
| 5
|
How Artificial Intelligence Can Enhance Your Email Marketing Strategy
Artificial intelligence has the potential to transform almost every industry. It is changing the whole landscape of business marketing. Almost every business, be it small, medium or large, is integrating email marketing into its strategy. Email marketing campaigns are a great way to improve sales and generate new leads.
Digital marketing experts have noticed that email marketing is effective but has its limitations, and results become saturated after some time. There was therefore a need to revive email marketing strategies, and artificial intelligence has emerged as the agent of that change. AI applications are revolutionizing email marketing, and businesses are experiencing a tangible improvement in overall performance, fueling the surge in this tech revolution.
Some of the benefits of using artificial intelligence in email marketing are as follows:
Artificial intelligence can offer detailed marketing campaign analytics
Artificial intelligence can improve daily results and performance
AI can help businesses identify new customers
AI can provide better account insights to businesses
Artificial intelligence applications are becoming widely popular among digital marketers for carrying out email marketing campaigns. AI helps generate more leads and improves traffic to the website. Let us look at how artificial intelligence is being applied in email marketing strategies:
1. AI in A/B testing
Digital marketers have used multivariate and A/B testing for a long time, but artificial intelligence and machine learning algorithms have made it possible to carry out testing like never before. A growing number of machine learning applications offer better, more robust A/B testing solutions. Deep learning algorithms help make predictions and identify minor differences between test cases that the human eye might not detect.
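The post does not show what an A/B comparison involves under the hood. Before any machine learning is added, the classical version is a simple significance test on the two variants’ open or click rates; here is a minimal sketch with invented numbers (a two-proportion z-test, not any particular vendor’s method):

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing the conversion rates of two
    email variants. Returns the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical campaign: variant B opened 160/1000 times vs A's 120/1000
z, p = ab_test(conv_a=120, n_a=1000, conv_b=160, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p => B's lift is unlikely to be chance
```

An ML-driven tester would go further, e.g. adaptively shifting traffic toward the winning variant, but the statistical comparison above is the baseline it improves on.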
2. Subject line and body optimization
The subject and body of an email play an important role in email marketing campaigns. Digital marketers must think hard about the subject and body of their emails in order to maximize open and click-through rates. Artificial intelligence algorithms help eliminate the guesswork from content creation. Nowadays, AI platforms such as Phrasee and Persado can generate the subject line, body and call to action.
Machine learning algorithms allow these platforms to learn from and synchronize with user behavior. These platforms also use NLP (Natural Language Processing), a subfield of artificial intelligence, to create subject titles and body content. The content gives the impression of having been written by a human and stays consistent with the brand’s voice.
3. Optimizing the send time
When it comes to effective email marketing, one thing cannot be ignored: the send time. When an email is sent is an important factor in the success or failure of a campaign and has a considerable effect on the open rate. For many years, digital marketers have struggled to guess the right time to send emails to potential customers. For example, an email sent at night has a lower probability of being opened.
Machine learning builds large segments of users by understanding behavior such as the best time for an individual to open an email, and it optimizes the send time on a per-user basis. Doing this manually is not possible, but it is easy for machines.
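At its simplest, per-user send-time optimization just picks the hour at which each user has historically opened the most emails. The sketch below (with invented open events) shows that baseline, which a production system would replace with a proper predictive model:

```python
from collections import Counter, defaultdict

def best_send_hour(open_events):
    """open_events: list of (user_id, hour_of_open) pairs from past campaigns.

    Returns {user_id: hour} mapping each user to their most frequent
    open hour; users with no history are simply absent."""
    per_user = defaultdict(Counter)
    for user, hour in open_events:
        per_user[user][hour] += 1
    return {user: hours.most_common(1)[0][0] for user, hours in per_user.items()}

# hypothetical history: alice usually opens at 9am, bob late at night
events = [("alice", 9), ("alice", 9), ("alice", 20), ("bob", 22), ("bob", 22)]
print(best_send_hour(events))  # {'alice': 9, 'bob': 22}
```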
4. Data analytics
Artificial intelligence applications are applied not only to building the email marketing strategy, but also to the data generated after a campaign completes. Data mining and data science services make it easier for digital marketers to carry out better analytics. Machine learning applications can generate powerful insights from the data, and AI platforms can combine data analysis with email campaigns for better understanding. Email marketers can improve the next campaign’s performance by pinpointing the weak sections of the previous one.
5. Personalizing the email content
Sending personalized content to clients can boost the performance of any email marketing campaign. Artificial intelligence companies are helping digital marketers personalize subject lines, content and even the images within the content. AI algorithms can understand a user’s past behavior, for example likes, dislikes, interests and geographical location, and curate the content accordingly.
Email marketing used to be a manual, campaign-oriented activity. But with recent advances in artificial intelligence, more and more companies are making AI-driven campaigns part of their email marketing strategy. Artificial intelligence and machine learning algorithms are becoming an important part of email marketing, and in the coming years we will see even more AI and machine learning applications in the field.
Originally published at www.juvlon.com on July 26, 2018.
|
How Artificial Intelligence Can Enhance Your Email Marketing Strategy
| 0
|
how-artificial-intelligence-can-enhance-your-email-marketing-strategy-1af58aedf81a
|
2018-07-26
|
2018-07-26 06:45:15
|
https://medium.com/s/story/how-artificial-intelligence-can-enhance-your-email-marketing-strategy-1af58aedf81a
| false
| 860
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Juvlon
|
We love emails! We help in targeted and customer-centric marketing for businesses. Visit at http://www.juvlon.com/
|
d9e7eeb66b29
|
juvlonniche
| 4
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-21
|
2017-12-21 03:19:27
|
2017-12-21
|
2017-12-21 03:25:03
| 1
| false
|
es
|
2017-12-21
|
2017-12-21 03:25:03
| 0
|
1af5f736e01d
| 0.271698
| 0
| 0
| 0
| null | 5
|
The PRI’s pre-candidate, José Antonio Meade, generates the most “noise” on Twitter through his official account.
|
The PRI’s pre-candidate, José Antonio Meade, generates the most “noise” on Twitter through his…
| 0
|
el-precandidato-por-el-pri-josé-antonio-meade-genera-más-ruido-en-twitter-a-través-de-su-cuenta-1af5f736e01d
|
2017-12-21
|
2017-12-21 03:25:03
|
https://medium.com/s/story/el-precandidato-por-el-pri-josé-antonio-meade-genera-más-ruido-en-twitter-a-través-de-su-cuenta-1af5f736e01d
| false
| 19
| null | null | null | null | null | null | null | null | null |
Politics
|
politics
|
Politics
| 260,013
|
Link político
|
Social media analysis of political topics and figures in Mexico and the world
|
19f87b03e2fb
|
linkpolitico
| 0
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-17
|
2018-06-17 07:53:23
|
2018-06-22
|
2018-06-22 04:23:59
| 1
| false
|
mn
|
2018-06-22
|
2018-06-22 04:35:31
| 6
|
1af69321931c
| 4.822642
| 4
| 0
| 0
|
First of all, greetings to you, the reader of this article. In this article I aim to share the knowledge I gained from taking Professor Andrew Ng’s “Machine Learning” course…
| 2
|
A compilation of Professor Andrew Ng’s “Machine Learning” course (introductory part)
First of all, greetings to you, the reader of this article. In this article I aim to share with others the knowledge I gained from taking Professor Andrew Ng’s “Machine Learning” course.
In recent years there have been several major breakthroughs in the development of information technology, and one of them is undeniably machine learning. As a result, people (especially those working or studying in information technology) have taken a growing interest in studying these techniques. Professors and doctors of mathematics and computer science have also taken notice and increasingly share their knowledge of machine learning online. By posting their courses on online learning sites such as edX, Coursera, Udacity, Udemy and others, they continue to spread this knowledge around the world.
Personally, I mostly take online courses on Coursera (for theory) and Udemy (for practical exercises). A few months ago I finished Professor Andrew Ng’s “Machine learning” course on Coursera. Because it explains the fundamental mathematical concepts of machine learning in detail, it is extremely useful for learners who want to understand the essence of these methods. That is why I chose this course for this article and will try to describe what can be learned from it.
Professor Andrew Ng’s “Machine learning” course not only teaches the fundamental concepts of machine learning and the techniques and algorithms it includes, but also explains in detail when to use which method, how to apply each method correctly, and how to improve a model produced by a machine learning method. Moreover, basic knowledge of probability, statistics, integral and differential calculus, and linear algebra is all you need to follow the course.
The course is scheduled over 11 weeks. Each week covers 1–3 topics, and each topic has 2–3 subtopics. Listed by week, the topics are:
Week 1:
— Introduction to machine learning (Introduction)
— Linear regression with one variable
— Linear algebra review
Week 2:
— Linear regression with multiple variables
— Octave/Matlab tutorial
Week 3:
— Logistic regression
— Regularization
Week 4:
— Neural Networks: Representation
Week 5:
— Neural Networks: Learning
Week 6:
— Advice for applying machine learning
— Machine learning system design
Week 7:
— Support Vector Machines
Week 8:
— Unsupervised learning
— Dimensionality Reduction
Week 9:
— Anomaly Detection
— Recommender systems
Week 10:
— Large scale machine learning
Week 11:
— Application Example: Photo OCR (Optical Character Recognition)
In this series I will explain the most important parts of each topic of the course. Overall, the series will be published in four parts: the first part is this introduction; the next covers the topics of weeks 1–3; the third part covers weeks 4–7; and the final, fourth part covers weeks 8–11.
ML, or machine learning
ML is a subfield of AI, artificial intelligence. With ML, when writing a program to perform some task, it is no longer necessary to encode every condition explicitly; instead, the program can learn from the given data and derive the best-performing algorithm on its own. In a sense, like a human, it becomes able to work better by learning from past experience.
Several factors have driven the rapid growth of ML. To name a few:
— Database mining. With the growth of web automation and other tools for automatically collecting various kinds of data, it has become easy to obtain large datasets.
For example: website data, medical records, all kinds of registration data, and so on.
— Applications that cannot be programmed by hand have become possible.
For example: autonomous machines, the Google Car, handwriting recognition, computer vision, image recognition, Natural Language Processing (NLP), and so on.
— Programs that can modify and improve themselves can now be built.
For example: all kinds of product recommendation systems.
There are many definitions of ML. The course highlights two of them:
— Arthur Samuel (1959). Machine learning: the field of study that gives computers the ability to learn without being explicitly programmed.
— Tom Mitchell (1998). A well-posed learning problem: a computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T, as measured by P, improves with experience E. For example: suppose an email program learns how to better filter spam based on the messages you have previously labeled. Then the task T = classifying emails as spam or not spam, the experience E = the messages you previously labeled as spam or not spam, and the performance P = the fraction (or number) of spam messages classified correctly. When applying ML, defining precisely what is to be learned in this way is good practice.
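Mitchell’s T/E/P framing can be made concrete with a toy word-counting spam filter (all messages below are invented): performance P on the labeling task T improves as the labeled experience E grows:

```python
from collections import Counter

def train(labeled):
    """Experience E: a list of (message, is_spam) pairs."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in labeled:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def predict(model, text):
    """Task T: label a message as spam if its words look more spammy."""
    spam_words, ham_words = model
    words = text.lower().split()
    return sum(spam_words[w] for w in words) > sum(ham_words[w] for w in words)

def accuracy(model, test_set):
    """Performance P: fraction of test messages labeled correctly."""
    return sum(predict(model, t) == y for t, y in test_set) / len(test_set)

data = [("win free prize now", True), ("meeting at noon", False),
        ("free prize inside", True), ("lunch at noon", False),
        ("free time after lunch", False)]
test_set = [("claim your free prize", True), ("free lunch offer", False)]
# accuracy (P) rises as the filter sees more labeled examples (E)
print(accuracy(train(data[:2]), test_set), "->", accuracy(train(data), test_set))  # 0.5 -> 1.0
```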
Machine learning algorithms are generally categorized as follows.
1) Supervised learning
This is a way of training when the answers (outputs) of the training data are given. The core idea of supervised learning is that we show the computer how to perform the given task T. In other words, the computer learns how to perform the task by looking at the answers (which we provide) in the training data. For example: the email classification example above is an example of supervised learning. Supervised learning also includes the following two kinds of learning:
— Semi-supervised learning: training when the answers for some of the training data are not given.
— Reinforcement learning: a program improves itself by learning from how the environment responds (rewards it) for its actions. For example: AlphaGo kept improving its style of play by playing against itself until it could beat a Go grandmaster. This is a classic example of reinforcement learning.
Supervised learning problems are divided into two categories: regression problems and classification problems. The difference is that regression predicts the answer from a continuous range of values, while classification predicts the answer from a finite set of values.
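To give a flavor of the regression side (and of the week-1 topic, linear regression with one variable), here is a minimal closed-form least-squares fit on made-up points; the course itself derives this via the cost function and gradient descent:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = theta0 + theta1 * x
    (one-variable linear regression, closed form)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    theta1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
              / sum((x - mean_x) ** 2 for x in xs))
    theta0 = mean_y - theta1 * mean_x
    return theta0, theta1

# points lying exactly on y = 2x + 1, so the fit should recover it
theta0, theta1 = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(theta0, theta1)  # 1.0 2.0
```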
2) Unsupervised learning
This is a way of deriving structure from data when the answers are unknown. More precisely, structure is derived by forming groups based on the relationships among the variables in the data. For example: grouping people by similar movie-watching behavior is an unsupervised learning problem. Unsupervised learning problems can be divided into clustering and non-clustering problems. The difference is that clustering divides the data into several groups, with the goal of predicting which group a data point belongs to, while non-clustering problems focus on extracting some structured information from the data. Examples:
Clustering — grouping people by similar movie-watching behavior
Non-Clustering — problems such as separating a human voice from audio recorded in a noisy environment
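A clustering problem like the movie-taste grouping above is typically solved with an algorithm such as k-means (covered in week 8 of the course). A bare-bones one-dimensional sketch on invented numbers:

```python
def kmeans_1d(points, k, iters=20):
    """Minimal k-means on scalars: repeatedly assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    centroids = sorted(points)[:k]              # naive but deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# two obvious groups, e.g. hours of TV watched per week by two kinds of viewers
print(kmeans_1d([1, 2, 10, 11], k=2))  # [1.5, 10.5]
```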
From the above I understood that before applying an ML algorithm directly, one of the important tasks is to define the problem carefully and only then decide which method is appropriate. Keep in mind that if you misidentify the type of your problem from the start, you risk losing time accordingly.
In this article I have covered the introductory part of machine learning. In the following articles, following the course schedule, I will explain the mathematical foundations of these methods and publish additional information on what can be built with them.
|
A compilation of Professor Andrew Ng’s “Machine Learning” course (introductory part)
| 4
|
andrew-ng-профессорын-машин-сургалт-хичээлийн-эмхэтгэл-удиртгал-хэсэг-1af69321931c
|
2018-06-22
|
2018-06-22 04:35:31
|
https://medium.com/s/story/andrew-ng-профессорын-машин-сургалт-хичээлийн-эмхэтгэл-удиртгал-хэсэг-1af69321931c
| false
| 1,225
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Tumurochir Nasanjargal
| null |
6b4e1bf4aceb
|
ochitumee
| 5
| 50
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-25
|
2018-03-25 15:25:11
|
2018-03-25
|
2018-03-25 15:26:00
| 2
| false
|
en
|
2018-03-25
|
2018-03-25 15:26:00
| 2
|
1af789f015f9
| 0.617296
| 0
| 0
| 0
|
Last week I had conducted the workshop on Big Data and Machine Learning. Around 10 attended the workshop. Employees from APIIT, LB Finance…
| 3
|
Big Data and Machine Learning March Workshop Sri Lanka.
Last week I conducted a workshop on Big Data and Machine Learning. Around 10 people attended. Employees from APIIT, LB Finance, MAS and Sri Lanka Telecom came to the event.
Topics covered at the workshop-
https://uditha.wordpress.com/2017/11/15/big-data-and-machine-learning-workshop-sri-lanka/
The next workshop will be held in June 2018.
|
Big Data and Machine Learning March Workshop Sri Lanka.
| 0
|
big-data-and-machine-learning-march-workshop-sri-lanka-1af789f015f9
|
2018-03-25
|
2018-03-25 15:26:01
|
https://medium.com/s/story/big-data-and-machine-learning-march-workshop-sri-lanka-1af789f015f9
| false
| 62
| null | null | null | null | null | null | null | null | null |
Sri Lanka
|
sri-lanka
|
Sri Lanka
| 3,001
|
uditha bandara
|
Uditha Bandara (MVP) is specializes in Microsoft development , AI, Mobile App, Cloud and Software Testing technologies. https://uditha.wordpress.com/
|
a16b5fb9a06b
|
udithait
| 4
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7f60cf5620c9
|
2018-02-01
|
2018-02-01 23:35:45
|
2018-02-02
|
2018-02-02 17:33:36
| 0
| false
|
en
|
2018-02-05
|
2018-02-05 10:13:44
| 8
|
1af8b92f6b2e
| 4.132075
| 20
| 0
| 0
|
Medium hosts a number of excellent capsule network tutorials. Here are three complementary posts that make for a rewarding reading…
| 5
|
Three instructive and complementary Capsule Network tutorials
Medium hosts a number of excellent capsule network tutorials. Here are three complementary posts that make for a rewarding reading experience:
1) “Understanding Hinton’s Capsule Networks“
Max Pechyonkin has published the first three parts in his excellent and highly popular series on capsule networks.
Part 1 begins with the example of randomly assembled face parts to illustrate how convolutional neural networks fail to take into account hierarchies of object parts. The author then proceeds with an explanation of inverse graphics: computer graphics generates images from abstract descriptions of objects. Capsule networks attempt to perform the reverse process and map visual information to a hierarchical representation of objects. A brief discussion of why it took decades to implement this new architecture rounds out the first part of the tutorial.
Part 2 describes the building block of capsule networks. Capsules predict the presence of an entity and its pose in vectorial form. While the length of the vector is interpreted to be the probability of the presence of the entity, the orientation corresponds to the pose. The author compares CapsNets with traditional feed-forward networks and explains how three out of four computational steps in the former have analogues in the latter. The section on the Matrix Multiplication of Input Vectors is particularly helpful as it explains how the relationship between objects (e.g., a face) and their parts (e.g., a mouth, a nose, etc.) is encoded mathematically.
Part 3 describes the dynamic routing algorithm employed in CapsNets to decide how to send the output to relevant higher-level capsules. The short, but information-packed pseudocode from the original paper is explained step by step. Handwritten figures support the text throughout the tutorial.
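For readers who want to see the mechanics rather than the pseudocode, here is a toy, pure-Python sketch of routing by agreement; the dimensions, inputs and iteration count are invented, and a real CapsNet would of course do this with tensors:

```python
import math

def squash(v):
    """Shrink a vector to length < 1 while preserving its orientation."""
    norm_sq = sum(x * x for x in v)
    if norm_sq == 0:
        return [0.0 for _ in v]
    scale = norm_sq / (1 + norm_sq) / math.sqrt(norm_sq)
    return [scale * x for x in v]

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def routing(u_hat, iterations=3):
    """Dynamic routing by agreement: u_hat[i][j] is lower capsule i's
    prediction vector for higher capsule j; returns the outputs v[j]."""
    n_lower, n_higher, dim = len(u_hat), len(u_hat[0]), len(u_hat[0][0])
    b = [[0.0] * n_higher for _ in range(n_lower)]      # raw routing logits
    v = []
    for _ in range(iterations):
        c = [softmax(row) for row in b]                 # coupling coefficients
        v = [squash([sum(c[i][j] * u_hat[i][j][d] for i in range(n_lower))
                     for d in range(dim)])
             for j in range(n_higher)]
        for i in range(n_lower):                        # agreement update
            for j in range(n_higher):
                b[i][j] += sum(u * w for u, w in zip(u_hat[i][j], v[j]))
    return v

# two lower capsules agree on higher capsule 0 and cancel out on capsule 1
u_hat = [[[1.0, 0.0], [0.0, 1.0]],
         [[1.0, 0.0], [0.0, -1.0]]]
v = routing(u_hat)
# most activity is routed to capsule 0, whose output keeps a large norm
```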
2) “Uncovering the Intuition behind Capsule Networks and Inverse Graphics“
The first part of Tanay Kothari’s long-form tutorial explains the Sabour et al. paper on capsule networks from 2017. I would recommend reading this complementary post once you have gone through Max Pechyonkin’s series. It emphasizes different aspects and is written in a different style.
The tutorial includes a discussion of the difference between invariance and equivariance. To the extent that a computer vision system is translationally invariant, a translation of the input does not result in a change to the output. A cat is a cat, no matter whether it is positioned on the left side or the right side of an image. In a translationally equivariant system, a translation of the input leads to an equivalent change to the output. The author suggests that computer vision needs to move beyond translational invariance and achieve viewpoint invariance to deal with real life 3D images.
Max-pooling can be thought of as a crude form of routing. Section 4 uses an example of a 2x2 kernel to forcefully point out just how crude it is: “If MaxPool was a messenger between the two layers, what it tells the second layer is that ‘We saw a high-6 somewhere in the top left corner and a high-8 somewhere in the top right corner.’”
One way in which this tutorial complements Max Pechyonkin’s series is through its introduction to pose matrices. A pose matrix encodes the translation, scale and rotation of an object in a 4x4 matrix (or, alternatively, as a 3x4 matrix, with the last row left out). Lower-level parts of an object are related to higher-level parts through these pose matrices. This is memorably illustrated using a tree in which the nodes correspond to the body parts of Mr. Bean.
When presented with the pose for the mouth, you can estimate the pose for the entire face. The same holds for other relationships between parts. Knowing the pose for the left ear too provides clues about the pose for the face. The tutorial explains how incorporating these relationships between parts makes the neural network more robust to distortions in images.
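A pose matrix is an ordinary homogeneous transform, so part-whole relationships compose by matrix multiplication. A hypothetical sketch (pure translations only, for brevity; a full pose would also carry rotation and scale):

```python
def matmul(a, b):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous 4x4 pose encoding a pure translation."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# if the mouth sits 0.3 units below the face center, and the face sits
# 2 units in front of the viewer, the mouth's pose relative to the viewer
# is the composition of the two transforms
face_to_viewer = translation(0, 0, 2)
mouth_to_face = translation(0, -0.3, 0)
mouth_to_viewer = matmul(face_to_viewer, mouth_to_face)
print(mouth_to_viewer[1][3], mouth_to_viewer[2][3])  # -0.3 2
```

Running the relationship the other way (estimating the face pose from an observed mouth pose) is exactly the prediction step the tutorial describes.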
Over time, certain examples have become popular within the community. Many tutorials use the example of face parts to explain capsule networks. Aurélien Géron has popularized the use of houses and sailboats in tutorials. Tanay Kothari explains dynamic routing algorithm using the digits 4 and 7. Given that these two digits have overlapping features, distinguishing them is more challenging than it may seem.
Finally, this tutorial mentions the reconstruction loss that is part of the cost function used in Sabour et al. and the application of coordinate addition to further enhance the performance of capsule networks.
3) “A Visual Representation of Capsule Connections in Dynamic Routing Between Capsules”
Once you’ve read the first two tutorials, I would strongly encourage you to check out this post by Mike Ross. It provides one of the best visualizations of capsule networks that I have come across. (For those who prefer to start with the mechanics and then proceed to the intuition, this may in fact be the best starting point.)
Remarkably, the post provides both a high-level overview and many of the details in a single visualization. Equipped with the conceptual understanding conveyed by the first tutorials, it allows you to learn about the computational steps involved in a CapsNet. It should also serve as a useful guide for those who would like to implement the architecture.
The particular capsule network architecture used for the MNIST classification task consists of two parts: PrimaryCaps and DigitCaps. There are 32 primary capsules and one capsule for each digit.
The very first operation in a CapsNet is the convolution used in traditional CNNs. 256 convolutional units produce 36 scalars each. A primary capsule contains 8 convolutional units. The input to a capsule is reshaped as 36 8D vectors. The two parts, PrimaryCaps and DigitCaps, are fully connected. In other words, there is a connection from each primary capsule to each digit capsule. A vector is first transformed with the newly introduced squashing function (to ensure a length between 0 and 1, while preserving orientation) and then multiplied with a matrix. The result of this process is a total of 32*36*10 = 11,520 vectors.
The right-hand part of the diagram illustrates the dynamic routing algorithm. Special attention should be paid to the vectors û_1151,0 and û_1151,1. In the illustrated case, only the coupling weight with respect to the capsule for digit 0 is large, so most of the activity is routed to the output for that particular capsule.
Thank you for reading! If you’ve enjoyed this article, hit the clap button and follow me to receive more information about the latest machine learning resources. Learn more about capsule networks at aisummary.com.
|
Three instructive and complementary Capsule Network tutorials
| 131
|
three-instructive-and-complementary-capsule-network-tutorials-1af8b92f6b2e
|
2018-07-11
|
2018-07-11 16:22:09
|
https://medium.com/s/story/three-instructive-and-complementary-capsule-network-tutorials-1af8b92f6b2e
| false
| 1,095
|
Sharing concepts, ideas, and codes.
|
towardsdatascience.com
|
towardsdatascience
| null |
Towards Data Science
| null |
towards-data-science
|
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
|
TDataScience
|
Capsule Networks
|
capsule-networks
|
Capsule Networks
| 27
|
Sebastian Kwiatkowski
|
Founder at Gopher: www.gopher.ai
|
e50114ad83ae
|
sekwiatkowski
| 1,313
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-24
|
2018-06-24 09:11:37
|
2018-06-24
|
2018-06-24 10:03:06
| 2
| false
|
en
|
2018-06-24
|
2018-06-24 10:03:06
| 1
|
1af97d67dcc2
| 3.492767
| 1
| 0
| 0
|
( This is the second post in a series of posts I plan to write as a part of my 100 days of writing challenge.)
| 5
|
What I learnt after completing the Data Scientist with R Track on Datacamp
( This is the second post in a series of posts I plan to write as a part of my 100 days of writing challenge.)
I started my data science journey seven months ago, when I began a remote internship under a professor at IIM Lucknow. Since then I have learned a lot about data science. Coincidentally, the college I study at signed up for the DataCamp enterprise program. I signed up and was looking through the various career tracks when I found the ‘Data Scientist with R’ track. I chose to pursue it for two reasons.
First, I had read about this track on Quora and found that many people rated it very highly, and second, the programming language for my remote internship was R.
So I started the track and after seven months, I finished all the courses in the track. Here’s what I learned.
The whole track requires persistence and discipline
Unlike other people, I didn’t have any plan for completing the track. I had no fixed hours of study, and I was very erratic in the first few months. For anyone starting this track, or any other track on DataCamp, I would suggest having a well-laid plan for completing the courses and studying in a disciplined manner.
Nevertheless, the track requires a certain level of dedication and persistence. I had read that only two percent of the people who start the whole track finish it. I took this figure as a challenge and was determined to be among the two percent who did complete it. While that figure is hard to verify, I feel it is true to some extent. There’s a strong possibility that you will get bored and start searching for faster options. My advice is to keep up with the course, and if you are getting frustrated or bored, take a break. I took a break for a week when I was frustrated. Initially I thought I should give up, and I was almost on the verge of doing so, but then I remembered why I joined in the first place, and that motivated me to get back to the track.
Patience, persistence and perspiration make an unbeatable combination for success. — Napoleon Hill
Am I a data scientist now?
No. I am nowhere close to being a data scientist now!
There’s no course or textbook in this world that can rightly claim it will make you a data scientist, or any other kind of professional for that matter. Data science is not limited to a single course or textbook, and it has a wide array of applications. This track alone cannot make you a data scientist.
Why, then, should you bother spending your time (95 hours, according to the website) on this track? Because it lays the groundwork for you. The track gives you a brief idea of the world of data science and the tools data scientists use to perform analytics. It helps you develop analytical skills and gets you accustomed to the R programming language and its various tools and libraries. It is an excellent starting point for your data science career.
What else did I learn in my journey?
Well, data science and R were not the only things I learnt during this course. While working through it, I fell in love with R. I absolutely loved the language and decided to learn more about it.
During this stint of exploration, I bumped into a course called ‘Build Web Applications in R with Shiny’. I didn’t complete the course, but it gave me some experience, and along with some web scraping skills, I made a Shiny web app.
I learned statistics, SQL, R programming and more importantly, I learned the skill of discipline, which I feel is the most important one when it comes to data science.
What else am I going to do apart from this course?
While I have no concrete plans, I intend to read some books on data science and participate in data science contests, because practice is the best way to consolidate your skills. I am still unsure about how to get a job in the field of data science, but I believe it will fall into place as I progress. There’s still a year left before I graduate, and hopefully I will be in a much better place by then.
Conclusion
To conclude, I would say that you should definitely consider this whole track if you are planning to start learning data science.
The full track is paid, but students can get free access by asking a professor at their university to sign up for the enterprise edition.
Get going!
Cheers :)
— “What I learnt after completing the Data Scientist with R Track on Datacamp” by Rahul Singhania (Computer Science Student | Bibliophile | Data Science Enthusiast), tagged Data Science, dated 2018-06-24. https://medium.com/s/story/what-i-learnt-after-completing-the-data-scientist-with-r-track-on-datacamp-1af97d67dcc2
Machine Learning Questions
Resource List
Linear Algebra
https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/
https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/
ELI5:
https://www.reddit.com/r/learnmachinelearning/comments/83qizk/monthly_eli5_explain_like_i_am_five_thread/
Multivariable Calculus:
https://ocw.mit.edu/courses/mathematics/18-02-multivariable-calculus-fall-2007/index.htm
Probabilistic Systems Analysis and Applied Probability:
https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-041-probabilistic-systems-analysis-and-applied-probability-fall-2010/
https://www.reddit.com/r/learnmachinelearning/
[+] — Imagine you have to explain all the questions below to a 5-year-old
Basic
What is classification/classifier?
What is clustering?
Classification vs Clustering?
Discriminative vs Generative Classifier —
* source #1 : Link
What is Regularization?
L1 vs L2 regularization? (ELI5*)
Predictive Modeling?
- Predictive modeling works on a constructive feedback principle: you build a model, get feedback from metrics, make improvements, and continue until you achieve a desirable accuracy.
Types of classification output:
1. Class output: the model produces a class label directly; the output is either 0 or 1. Examples: SVM and KNN.
2. Probability output: the model produces a probability; converting it to a class output is just a matter of choosing a threshold probability. Examples: Logistic Regression, Random Forest, Gradient Boosting, AdaBoost, etc.
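The thresholding step described above can be sketched in a few lines (the function name and the 0.5 default are illustrative, not from any particular library):

```python
def to_class(probabilities, threshold=0.5):
    """Turn probability outputs (e.g. from logistic regression)
    into 0/1 class outputs by applying a threshold."""
    return [1 if p >= threshold else 0 for p in probabilities]

# Probabilities at or above the threshold map to class 1, the rest to class 0.
print(to_class([0.10, 0.60, 0.50, 0.49]))  # [0, 1, 1, 0]
```

Raising the threshold makes the classifier more conservative about predicting class 1, which is how the same probability model can be tuned toward precision or recall.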
What are the different error metrics used in ML (regression, classification, clustering, etc.)?
Why square the errors in RMSE instead of taking absolute values?
What is an ROC curve?
What is your favourite algorithm?
How would you normalize the data?
Explain a few of the central machine learning algorithms from scratch?
K-NN
Can we use K-means instead of KNN? (KNN vs K-Means)
Where is KNN in AI?
Naive Bayes
What is Naive Bayes?
What is the Naive Bayes assumption?
How will you handle a spam classification problem if occurrences of words are not independent?
Decision Tree (DT)
What is Entropy?
What is gini impurity?
What is Information gain?
Does a DT split on one feature or multiple features at a time? (basically, how it works from a high-level perspective)
Can a DT scale to a large number of features?
Where should a DT be used?
Where should a DT not be used?
Ensemble Algorithm
What is ensemble?
Ensemble vs Bagging?
Ensemble vs Boosting?
List examples of ensemble algorithms?
Random Forest Classifier
What is a Random Forest?
SVM
What is kernel?
Regularization Parameters?
NLP
What is tf-idf? Why do we need it?
tf?
idf?
weighted tf-idf?
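As a quick illustration of the tf and idf components, here is a minimal sketch (raw term frequency times log inverse document frequency; the function name and toy corpus are illustrative, and real libraries usually add smoothing):

```python
import math

def tf_idf(term, doc, corpus):
    """Minimal tf-idf: raw term frequency in the document,
    weighted by the log inverse document frequency over the corpus."""
    tf = doc.count(term) / len(doc)            # how often the term appears here
    df = sum(1 for d in corpus if term in d)   # documents containing the term
    idf = math.log(len(corpus) / df)           # rarer terms get a higher weight
    return tf * idf

docs = [["data", "science"], ["data", "mining"], ["text", "mining"]]
# "text" appears in only one document, so it outscores "data",
# which appears in two.
print(tf_idf("text", docs[2], docs) > tf_idf("data", docs[0], docs))  # True
```

A term appearing in every document gets idf = log(1) = 0, which is why tf-idf downweights ubiquitous words that plain term frequency would overrate.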
ELI5 : Explain like I’m 5
— “Machine Learning Questions” by Tarun Jain, tagged Machine Learning, dated 2018-08-15. https://medium.com/s/story/machine-learning-questions-1afae4c98607
Profede Announces Ex-Linkedin Data Scientist And Big Data Expert Ron Bekkerman To Join The Advisory Board
Profede is proud to announce that as of today acclaimed data scientist Ron Bekkerman has agreed to join Profede through an official advisory role. Ron Bekkerman is a renowned big data expert specializing in statistical machine learning, social network analysis and text mining. Ron will assist Profede also in the fields of web mining, information extraction, natural language processing, business intelligence and product strategy.
Profede is thrilled to announce that Ron Bekkerman, big data expert and ex-LinkedIn data specialist, will join as an advisor. At LinkedIn, Ron built a data standardization system that is making big money for LinkedIn, and he performed data analyses that shaped its product and marketing strategy. Profede has much to learn from Ron’s expertise and knowledge in this field.
Ron has a PhD in Machine Learning and is currently Assistant Professor of Data Science at the University of Haifa. Ron is also an Advisor to Implisit, a data-intelligence startup based in Israel, which was acquired by Salesforce for a reported tens of millions of dollars and he is also an advisor to various other startups.
There is no better person to advise Profede on big data than Ron Bekkerman. Ron is a proven expert as a data scientist and in machine learning. His experience and skills will aid in the creation of Profede’s protocol to revolutionize the professional industry. With Ron onboard we feel confident in Profede’s success to reach the next level. Ron is a perfect fit for our advisory board as we enter the next milestone of launching our product and growing.
Juan Imaz, Founder and CEO of Profede
Ron is also KDD Cup Co-chair at SIGKDD 2018 which is a premier interdisciplinary conference bringing together researchers and practitioners from data science, data mining, knowledge discovery, large-scale data analytics, and big data. He also served on the organizing committees of various previous years. Ron also serves and has served on senior program committees at the International Conference of Web Search and Data Mining. He also served on program committees at the International Joint Conference on Artificial Intelligence, International Conference on Data Mining, International Conference on Machine Learning, Conference on Research and Development in Information Retrieval, Conference on Advancement of Artificial Intelligence, Meeting of the Association for Computational Linguistics, among many others.
Ron was editor of the book Scaling up Machine Learning: Parallel and Distributed Approaches, published by Cambridge University Press in 2012. He has mentored countless interns and junior researchers and published dozens of papers, mostly in top-tier venues such as JMLR, KDD, WWW, ICML, ECML, SIGIR, CVPR, IJCAI, CIKM, and EMNLP. Those papers have since been cited 2,000 times in the scientific literature. He has reviewed hundreds of data mining and machine learning papers and has a strong sense of high-quality data mining technology.
Ron is a true leader in the field of data science and e-business, has hard-core big data expertise and a clear business vision for the future of how technology can advance through the use of blockchain to create solutions for society. Ron has expertise in machine learning and extensive experience in commercial software engineering.
About Profede
Profede is a professional decentralized protocol that gives power back to the professional user with blockchain. Profede gives professionals more control over their user data and personal information. Profede’s protocol also gives businesses the ability to contact the professional profiles they are searching for without paying intermediaries large amounts of money. The exchange of data should be fair and transparent, and professionals should be in the profit equation, which is why at Profede they will earn tokens each time they exchange their personal data. As Profede is a protocol, it can be used by thousands of apps, which will gain many benefits from using it, not only for the app itself but for the apps’ users.
Profede will start its public crowdsale on June 1st, 2018, with a hard cap set at $20M. Profede had already surpassed its soft cap of $1.5M before its current public presale started.
Profede is one ICO that is here to disrupt the professional industry, which is worth over $1,000bn. Profede uses blockchain to bypass intermediaries in the professional industry and create a direct connection between professionals and businesses. Through this protocol, professionals are able to control their personal data and get paid for each exchange of data with businesses or other professionals who contact them. Each time a business wants to access a professional’s data, be it for a uniquely targeted job offer, a commercial offer, or a specific business proposal, tokens are paid. By making data valuable and eliminating intermediaries, Profede brings essential value to professionals and businesses.
— “Profede Announces Ex-Linkedin Data Scientist And Big Data Expert Ron Bekkerman To Join The Advisory Board” by Profede (Professional Decentralization), tagged Data Science, dated 2018-04-23. https://medium.com/s/story/profede-announces-ex-linkedin-data-scientist-and-big-data-expert-ron-bekkerman-to-join-the-advisory-1afb611e9398
I struggle to fly now.
Tell your story…Good goodbyes. Final flow. Flo’s finale. Progressive tidal wave and tsunami heart attack.
Easy for you to say. CYA. I’m unstoppable.
Feels familiar. Jump!
For me? Not so much.
Flash forward to fewer freaky Fridays
“It’s magic, and I’m gathering STEAM!” (medium.com)
I almost couldn’t eat or drink or sleep. And definitely couldn’t talk.
Moot point, Your Honor.
But I’m trying to be good. I found some eggs. They had joy on them so I assumed I’d find joy.
Entrepreneurial envy is pretty close.
So cakes I’ll bake.
Wednesday and hump day apparently mean something different. And the same. I’m still breathing. The cake.
Except that’s a lie.
The portal. Right. It’s a lie.
Know your meme.
I told you I would never be forgotten, in spite of you.
Know your vibe.
How ‘bout pie?
Peach pie of course.
That was the last straw.
Find your tribe.
That’s funny. In Vietnamese maybe.
Television. Around the world in 80 days with no money.
I did it but it lasted way longer. More like 88 days in a championship battle to the death match.
Well, happy new year then. It’s the year of the cock. So many jokes so little time. So still. So drowned. So downed.
Prepare yourself. You must fight!
Don’t be perfect. Don’t get it. Don’t vet it. Don’t bet it. Horses are more than that. They feel. Okay. Don’t beat ’em. Join ’em. Got it.
I hunger. I hope. I dream. Repeat.
Mail. So much mail. So many males. So little time. I can’t go on like this.
But I’m no chimp. I’m more than that. I shine one. They sliced and clapped and clanged the cartel cartals in my ears. They look like little space ship thingies with metal gong show down deviants.
Good girl.
Had a voice, had a voice, but I could not sing.
Had a voice, had a voice, but I could not talk.
Just kill me silently. I don’t need a reason, when you make me speechless. We’re allowed tonight to breathe. Come on baby. When you’re kissin’ me. Get a hold of me.
There’s a scream inside.
We cannot deny.
TRIGGER WARNING. Stay safe. Stay home.
It’s Saturday Night. Lights.
I shout it out like a bird set free.
[Refrain]
Oh, but there’s a scream inside that we all try to hide
We hold on so tight, we cannot deny
Eats us alive, oh it eats us alive
Oh, yes, there’s a scream inside that we all try to hide
We hold on so tight, but I don’t wanna die, no
I don’t wanna die, I don’t wanna die
The caged bird sings far better when she’s not falling through the clouds. Head wind or hed wig? A head wig. Oh! Could be a barista or a barrister. Depends on the accent.
That much is clear. I got that part now.
Not far from the tree. I guess.
What now, master?
Rachel. Thank you.
— “I struggle to fly now.” by Nandini Stocker, in Living Language Legacies, tagged Life, dated 2018-07-10. https://medium.com/s/story/new-news-story-were-up-all-night-to-get-lucky-i-struggle-to-fly-now-1afb8bfef30a
Data Analysis Portfolio
Learning
I. Supervised Learning
Regression
Classification
- Shelter Animal Outcomes: classifying outcomes for dogs and cats at an animal shelter
Ranking/Recommendation
II. Unsupervised Learning
Clustering/Topic Modeling
Density Estimation
Dimensionality Reduction
III. Reinforcement Learning
IV. Deep Learning
Neural Network
Natural Language Processing
I. Part-of-Speech Tagging (POS Tagging)
II. Named Entity Recognition (NER)
— “데이터 분석 포트폴리오” (Data Analysis Portfolio) by Hyelan Jeong (Data Analyst), tagged Data Science, dated 2018-10-01. https://medium.com/s/story/데이터-분석-프로젝트-1afc6a54e920
The secrets of highly successful data analytics teams
Effective data teams bring diverse, cross-functional skill sets to bear on clearly defined business priorities — without losing sight of the value of experimentation and ongoing education.
Effective data analytics can give companies a huge competitive edge, because business managers can gain new insights into trends and customer behaviors that might not otherwise be possible.
To get the most out of their information resources, enterprises need to have a strong analytics team in place. What does it take to assemble and maintain a top-notch team, and what should these teams be doing to make themselves successful?
These are not trivial questions. In this heavily data-driven environment, how companies go about building and operating a team of analytics experts could have a big impact on the business for years to come.
But before you put together your data analytics team, you need to formulate the mission and charter of the team, says Jeffry Nimeroff, CIO at Zeta Global, a customer lifecycle management marketing company.
“In too many organizations, data analytics is embedded in the more traditional and bland notion of ‘reporting and analytics,’” Nimeroff says. “In these configurations, it is often the case that reactive reporting takes precedence. As there is always another way to formulate more meaningful reports, this can become a never-ending cycle where the true power of data analytics is never fully realized.”
Data success begins with diversity
When building a team, don’t limit the focus to just finding analytics professionals. Diversity is critical for success, experts say.
“It’s very important to include not only people with analytical skills, but also those with business and relationship skills who can help frame the question in the first place and then communicate the results effectively at the end of the analysis,” says Tom Davenport, a senior advisor at Deloitte Analytics and author of the book Competing on Analytics: The New Science of Winning.
Multinational conglomerate GE values a diversity of capabilities for its analytics teams. “Data and analytics are most effective when world-class technology skills are paired with strong functional domain knowledge,” says Christina Clark, chief data officer at the company.
This can be achieved by having a team with a variety of business backgrounds and a mix of both IT and functional skills, Clark says. “We are making terrific progress in developing innovative solutions to support our finance function,” she says. “The data team supporting this effort is comprised of long-time IT professionals but also financial analysts, former auditors, and finance managers.”
Strong knowledge of data science is of course critical to any analytics team, and there should be statisticians, mathematicians, and machine learning experts on the team who understand algorithms and how they can be applied on data, adds TP Miglani, CEO at Incedo, a technology services firm.
“You [also] need technologists — data engineers who can build the pipelines to get the data in place for completing all the analysis,” Miglani says. “And you also need business experts who understand the complexities of the domain you are solving the problem for. For example, if the problem at hand is building data-driven drugs, then you need quantitative pharmacologists and biologists.”
Technically, a data scientist is supposed to be a “unicorn” that can do all of this simultaneously, Miglani says. “But unicorns don’t exist,” he says. “Successful data science teams are diverse, where individuals bring in these competencies that need to come together.”
Change management and the value of IT
If an analytics project involves prescriptive or operational analytics (for example, if the results will be tied into a business process or a set of jobs), there is also a need for someone to manage the change process, Davenport says. “The ORION project at UPS, which led to dramatic changes in driver routing, devoted a massive amount of time and energy to change management,” he notes.
Given that the team will be leaning heavily on technology infrastructure such as big data tools, having the IT department represented on the analytics team in some capacity is also important. “Even if the analytics group doesn’t report to IT, it’s usually a good idea to have some representation of the IT function on the team,” Davenport says.
Emphasize experience — with data and tools
Whoever’s on the analytics team should have lots of experience in their role, Nimeroff says.
“Data analytics is both an art and a science, and more experienced individuals are better able to leverage tools in a creative and effective way than novices,” he says. “I have also found that novices rely on tools to do heavy lifting that they may or may not be fully comfortable in doing themselves. On the flip side, I have met great data scientists who do everything by hand. They don’t scale or help a team accelerate. Finding individuals who can execute without tools but understand and embrace the value of modern tools is what I focus on.”
Outside expertise and embedded teams
Many companies turn to outside expertise for help with analytics projects. That’s fine, but it’s important to ensure that the efforts of the project are actually meeting organizational needs.
“If there are some members of the team who are outsourced workers, try to make sure there is at least one [internal] employee on every project, who can help to ensure that the results of the analytics are adopted,” Davenport says.
And whenever possible, the analytics team should either be a formal part of the business that’s doing the analysis, or at least embedded within it for the period of the project. Consumer goods company Procter & Gamble used to do this through “embedded” analysts, Davenport says, but now has them report to the head of the relevant business function or unit.
‘Ruthless prioritization’
Once your team is in place, finding an operational model in which everyone can work is next, Nimeroff says.
“Companies are becoming more agile and, like with software development, finding an approach for prioritizing work, decomposing the execution into digestible chunks, developing specific success criteria for each work effort, and providing a framework for ongoing communication is often the difference between success and failure,” he says.
Also, the team will be more likely to succeed if it’s able to demonstrate the business value of what it does, Miglani says.
“Engaging with stakeholders and consumers of data science recommendations helps them showcase this value, and also get a deeper understanding of the key pain points which they should focus on,” Miglani says. “Sharing results sooner than later, and building organizational structures where data science goals are aligned to the [business units] they are tagged to is a great way to create value.”
GE practices “ruthless prioritization” with its analytics efforts. “A commitment to clearly defined business priorities will enable the data and analytics team to be most successful,” Clark says. “When teams can demonstrate an impact in targeted areas, they are more likely to stay motivated and inspire business partner engagement.”
The company has seen significant productivity results from the “Digital League” in its aviation business, where a cross-functional team has come together to define priorities and then deliver insights in two-week sprints.
Emphasize experimentation and innovation
It’s also important to keep an experimental mindset on the team.
“The business case for these projects is not easy; you have to take a step into the unknown,” Miglani says. “Unlike technology projects that begin with a definite scope in mind, data science projects begin with a problem and a set of hypotheses that need to be tested. There is no clear map of before-and-after processes for these projects, and teams that are new to data science need to understand and get comfortable with that.”
Along those lines, there should be outlets for innovation, Clark says. “There is an enormous amount of emerging technologies in this field,” she says. “Employees will want to know they have time and funding to continue to grow their own skills and try new approaches. We leverage our Global Digital Hubs as a place to incubate new technologies and pilot work in self-organizing teams; an atmosphere of innovation keeps teams motivated.”
As with science and learning in general, curiosity is a key element of analytics. “Curious people have a desire to follow up on their own analysis, whether or not our clients ask for it,” says Stuart Wilson, data scientist and analytics team lead at Paytronix Systems, a provider of reward program services to restaurants and retailers.
“One of our analysts decided to check up on a marketing campaign run six months prior,” Wilson says. “Because of this, we were able to discover an unanticipated result of this campaign, that would have otherwise been inconclusive.”
Another good practice is learning to ask the right questions and solve the right business problems.
“Every data science project should start off as a consulting exercise — understanding the ‘what’ and ‘why,’” Miglani says. “Also, the objective on any analytics exercise cannot be to implement a tool or platform. The objective should always be designed towards the right business outcomes, and you can get there by asking the right questions.”
Data: The foundation for success
The data analytics team is more likely to succeed if the organization creates a “data foundation,” Clark says. “Technical experts in the field of data will want to see real commitment from the organization to a data foundation,” she says. “In our treasury division we ran a coordinated program to streamline and improve data quality and accessibility over a two-year period. We have seen improved employee productivity, lower technology costs, and a broader community of digitally savvy employees.”
Ensuring high-quality data should be a cornerstone of any data foundation.
“Knowing and managing your data is critical to success,” Wilson says. “Your analysis will only be as accurate as your data. When we see success with our own analysis, we are often asked to leverage this analysis into reports or dashboards, so business users can leverage these findings from day to day. If your data process is unreliable or your data incomplete, your results will be flawed, and any actions taken on them will be erroneous.”
Train continuously
To keep on top of fast-changing developments in analytics, ongoing education and personal development are important to maintaining a vibrant and successful team, Nimeroff says. “Data analytics is amongst the set of fastest-growing fields out there, and even though the leading-edge techniques aren’t applicable in every situation or organization, it doesn’t mean that being able to stay current isn’t important,” he says.
Paytronix emphasizes ongoing training of the analytics staff, as well as the ability to communicate results.
“Your team needs to understand the finer points of your data, know how analysis may go wrong with statistical biases, and understand how to effectively distill and then communicate actionable results,” Wilson says.
“I often tell my team that the best analysis in the world will be a wasted effort if it is not clearly understood and acted upon,” Wilson says. “To this end, keep the end in mind when working through a problem: How will business users change behaviors based on this information? This should help you tailor your approach and curate the conclusions you provide.”
A skilled data analytics team is crucial for successful and accurate analysis of your business data to make informed decisions; these are the secrets of building a successful data analytics team. Virtelligence consultants can help train your team to use business data effectively for improved analytics productivity.
Content retrieved from: https://www.cio.com/article/3234353/analytics/the-secrets-of-highly-successful-data-analytics-teams.html
— “The secrets of highly successful data analytics teams” by Ayesha Wallayat, tagged Data Science, dated 2017-11-22. https://medium.com/s/story/the-secrets-of-highly-successful-data-analytics-teams-1afc6f58eec5
The war against fake news: Insight from the CTO of Snipfeed
By Pierre-Habté Nouvellon
Pierre-Habté grew up in a very eclectic environment: his father is French, his mother Ethiopian, and he has lived in France, Congo, Brazil, and the US over the past 23 years.
After completing high school in São Paulo, he went to France to study Math, Physics, and Engineering in depth, acquiring a strong basis in those subjects thanks to the French system. However, he felt that something was missing. He decided to join the MEng program in IEOR in 2017 as part of a dual degree with his French engineering school, ISAE-Supaéro.
During his master’s, he made the most of the amazing opportunities Berkeley offered in academics, leadership, and networking. Thanks to the program’s great flexibility, he took extra classes in the Computer Science department, notably in Machine Learning.
This fascination with AI, sparked by his capstone project, would eventually lead him to work on a machine learning project that later gave way to the creation of Snipfeed.
How has navigating your diverse background been while at Berkeley?
The question of ‘where are you from?’ is not an easy one. At Berkeley, however, people have such diverse backgrounds that they understand you can feel Black/White and Latino at the same time!
What made you decide to choose the Berkeley MEng program as opposed to other Engineering Master’s programs?
I wanted to channel all the knowledge I gained over the last 21 years into something great that would benefit society. There were two conspicuous ways to yield the greatest impact: research, and entrepreneurship. No surprise: UC Berkeley was the best place for both of them!
There were two conspicuous ways to yield the greatest impact: research, and entrepreneurship. No surprise: UC Berkeley was the best place for both of them!
What inspired you to take extra Computer Science and Machine Learning classes?
A whole new world of opportunities is provided by AI. I started to realize that within my Capstone Project, which was about predicting a skin disease called Psoriasis Arthritis using genotype and phenotype data.
The Beginning of Jenyai, the AI tutor
As part of a Machine Learning course project, Pierre-Habté started to work on an AI tutor project with two friends, Rédouane Ramdani and Anas Bouassami, who were visiting scholars at the Haas Business School.
The three co-founders quickly started to develop the product and work with pilot schools around Berkeley and Oakland. Jenyai was able to answer the most common questions in Science, History, and Math at the 6th grade level. In February 2018, the team and their product got accepted into an accelerator in San Francisco called The Refiners.
By this time, the project was growing: two developers had joined the team, Jenyai was available on Facebook messenger and Kik, and they had 5,000 daily active users. However, one question had yet to be answered: are people willing to pay for an AI tutor?
How did you get involved with Jenyai?
During a casual coffee talk at Strada, Rédouane and Anas told me about their idea of creating the perfect tutor and school-buddy bot that could answer students’ questions and coach them through positive thinking. Since I am passionate about education and machine learning, I jumped straight into the project!
Were there any challenges that you faced while working on the project?
Managing my time between my startup, master’s courses, and my capstone project was a real challenge. Since I was in charge of the machine learning part of the startup, I took all possible extra courses in that field: Deep Learning, Computer Vision, and Optimization. I ended up with twice the required number of credits for my master’s.
What was the end result of the project?
In late March we received an email from Y Combinator saying that we had been selected for an interview. Being in the top 400 startups selected from among 9,000 applicants was already very encouraging. The interview went very well… but at the end of the day we received an email saying that we had been denied. The jury was not sure how we could monetize our AI tutor. The next day, we decided to pivot and focus our attention on the one feature that was working very well and bringing the most value to users: snips.
The end of Jenyai… and the rise of Snipfeed
One very popular feature on Jenyai was the push of short news, facts, or quizzes for students (called snips), which were followed by snack lessons. From one snack lesson, students could go to another related snack lesson, entering “a tree of knowledge” without realizing they were actually learning in the process. All of this happens using Gen Z social codes: emojis, videos, and GIFs.
How did you and your team members come up with the idea for Snipfeed?
Students were very curious and could spend more than 40 min just going deeper and deeper into the subject, if given in a snackable, fun and personalized way. The snack lessons would help them understand the news: give them context. People were not using Jenyai bot to do math exercises, but rather to get their news and learn things. We were quite surprised and asked our users where they usually get their news from. I was shocked when I realized that their #1 source of information/news was Instagram. The latter doesn’t provide the tools for deeper analysis or contextualization for their content which led to this fake news era. That is when we decided to create Snipfeed.
We were quite surprised and asked our users where they usually get their news from. I was shocked when I realised that their #1 source of information/news was Instagram. The latter doesn’t provide the tools for deeper analysis or contextualization for their content which led to this fake news era.
Snipfeed, the startup that fights fake news
Snipfeed is a news and information recommendation engine based on AI and your go-to for everything that is important to you. You can test it on Facebook Messenger today: http://m.me/snipfeedapp. It provides daily and highly-personalized snippets for its users, and engages them in learning by breaking up original content into short chat messages, GIFs/Images, videos & quizzes.
Concretely, Snipfeed engages users by sending personalized, short headlines with an image or GIF. If they are interested in knowing more, it sends very short summaries of the articles with the source and a related video; otherwise they can jump to the next story (videos mostly come from YouTube and last less than 5 minutes). Snips have to fit within a screen so users can read everything in 8 seconds, the average attention span, but the AI plugs in snack lessons and stories that allow users to dive deeper (topics include cinema, economy, politics, history, tech, celebrities, etc.).
Snipfeed also allows users to react to snips in an ongoing opinion poll using emojis. Once they have done so, they can see how their community feels about the snip.
In short, Snipfeed helps users understand the world and raises awareness by gathering the best of what publishers and the internet have to offer.
Where do you see Snipfeed headed?
Our vision for Snipfeed is to become the main medium for Gen Z and millennials to get their news and raise awareness. Snipfeed is the missing piece in the puzzle between traditional media, influencers, and learning platforms. We also believe that Snipfeed can be the leading platform for critical thinking and debate, as people will be able to share snips and create conversations around them, as well as express their feelings anonymously through emoji polls.
Snipfeed is the missing piece in the puzzle between traditional media, influencers, and learning platforms.
The team has recently been announced as one of SkyDeck’s Fall 2018 Cohort teams. Congrats to Pierre and the members of Snipfeed!
The Snipfeed team
|
The war against fake news: Insight from the CTO of Snipfeed
| 61
|
the-war-against-fake-news-insight-from-the-cto-of-snipfeed-1afd2cbcb906
|
2018-07-11
|
2018-07-11 17:15:25
|
https://medium.com/s/story/the-war-against-fake-news-insight-from-the-cto-of-snipfeed-1afd2cbcb906
| false
| 1,331
|
Content hub for UC Berkeley’s Master of Engineering Program. Explore the many ways our students, alumni, and faculty are contributing to thier field.
| null |
FungInstitute
| null |
Berkeley Master of Engineering
|
funginstitute@coe.berkeley.edu
|
the-coleman-fung-institute
|
ENGINEERING,ENTREPRENEURSHIP,STARTUPS,BERKELEY,IDEAS
|
FungInstitute
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Berkeley Master of Engineering
|
Master of Engineering at UC Berkeley with a focus on leadership. Learn more about the program through our publication.
|
aecb5ee3d459
|
FungInstitute
| 285
| 117
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-02
|
2017-09-02 17:23:07
|
2017-09-02
|
2017-09-02 17:27:32
| 3
| false
|
es
|
2017-09-02
|
2017-09-02 17:28:37
| 2
|
1afd693532c0
| 4.459434
| 0
| 0
| 0
|
De acuerdo a una computadora, estas 50 compañías desconocidas van a hacer temblar al mundo.
| 5
|
These are the startups that will succeed (according to artificial intelligence)
According to a computer, these 50 unknown companies are going to shake the world.
This story originally appeared on Foro Económico Mundial
By Sean Miner of Open Democracy. In 2009, Ira Sager of Businessweek magazine set a challenge for the CEO of Quid AI, Bob Goodson: program a computer to pick 50 unknown companies that would shake the world.
The field of picking “winning startups” was, and largely still is, dominated by the venture capital industry’s belief that machines play no role in identifying winners. Ironically, the venture capital world, which has fueled the creation of computing, has been one of the last business areas to bring computing into its decision-making.
Almost eight years later, the magazine revisited the list to see how “Goodson plus the machine” had performed. The results surprised even Goodson: Evernote, Spotify, Etsy, Zynga, Palantir, Cloudera, OPOWER, and the list goes on. It included not only names well known to the public and industry leaders, but also outstanding performers like Ibibo, which had eight employees in 2009 when it was selected and now, as India’s leading hotel booking site, has $2 billion in annual sales. Twenty percent of the chosen companies had gone on to be worth billions of dollars.
To put these results in context, Bloomberg Businessweek turned to one of the leading “funds of funds” in the US, which has been investing in venture capital funds since the 1980s and holds one of the most valuable datasets on actual company performance and on the comparative performance of venture capital portfolios.
The fund of funds was not named for compliance reasons, but its research showed that, had the 50 companies been a venture capital portfolio, it would have achieved the second-best performance of all time. Only one better-picked fund has ever existed, one that made most of its investments in the late 1990s and rode the dotcom bubble successfully. Of course, in this hypothetical portfolio one could pick any company, whereas venture capital funds often have to compete to invest.
Recently, Bloomberg asked Goodson to repeat the feat. Here, we take an in-depth look at the methodology behind the new list, as well as other trends that could thrive in the market.
First, Goodson selected 50,000 private companies that had received venture capital or venture debt in the previous three years. Data on investment received, investors, location, and founding year came from S&P Capital IQ and Crunchbase, and was current as of September 2016.
Goodson then generated a network map showing where entrepreneurs had invested their capital, focusing on technology companies founded during the previous 18 months.
Image: Quid
Simultaneously, Goodson examined the investments of five of the world’s best-performing venture capital firms over the previous two years to generate a map of venture capital bets.
Image: Quid
Together, these networks allowed Goodson to uncover the most promising areas for investment:
- Augmented reality will be far more significant than virtual reality, as it will shape how we see and interact with the world around us.
- Image recognition and mapping technologies will be deployed across the automotive industry as traditional carmakers adapt to self-driving vehicles.
- Problems associated with online security and fraud detection will continue to deepen, with major implications for government and business, and for mobile and e-commerce.
- The digitization of education is happening through practical applications that integrate into the existing system, including teaching and training apps as well as games.
- Drones are being incorporated into business environments, and the companies at the forefront will be positioned to expand into consumer applications in the future.
- The smart home is developing through a range of affordable consumer products, including speaker light bulbs, smart lighting, flexible security sensors, and garden sensors.
- As computing becomes more integrated with the human experience, new smart-sensor applications are becoming reality, including sweat analysis, earbuds, eye authentication, and holograms.
- Large market opportunities remain in e-commerce as fashion becomes ever more mobile and social.
- Artificial intelligence is bringing greater efficiency to knowledge work, which involves handling data or information, including bots and the areas of sales and marketing.
- Space technology continues to advance in areas such as satellite propulsion and mining.
Goodson then narrowed the list, giving preference to companies with these characteristics: at least two funding rounds; fewer days between funding rounds; fewer days since the last round; and founders who, having previously worked together, now bring in a third partner.
Since Bloomberg was looking for companies the public had not yet heard of, Goodson refined the list further, excluding any company founded before 2011 (in fact, 12 were founded in 2015 or 2016). Most had raised relatively small amounts of venture funding, with 31 companies having raised less than $10 million. Bloomberg also asked Goodson to exclude biotech.
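The preference criteria and the filters described above can be sketched as a simple scoring-and-filtering pass. The field names, weights, and thresholds below are illustrative assumptions, not Quid's actual model:

```python
from dataclasses import dataclass

@dataclass
class Startup:
    name: str
    funding_rounds: int
    days_between_rounds: float    # average gap between funding rounds
    days_since_last_round: float
    founders_worked_together: bool
    founded_year: int
    total_raised_musd: float
    sector: str

def score(s: Startup) -> float:
    """Illustrative heuristic: reward funding momentum and repeat founding teams."""
    pts = 0.0
    if s.funding_rounds >= 2:
        pts += 1.0
    pts += 1.0 / (1.0 + s.days_between_rounds / 180.0)    # shorter gaps score higher
    pts += 1.0 / (1.0 + s.days_since_last_round / 180.0)  # recent raises score higher
    if s.founders_worked_together:
        pts += 0.5
    return pts

def shortlist(candidates):
    """Apply the article's filters (founded 2011+, no biotech), then rank by score."""
    kept = [s for s in candidates
            if s.founded_year >= 2011 and s.sector != "biotech"]
    return sorted(kept, key=score, reverse=True)[:50]
```

The real pipeline combined this kind of scoring with network maps and, ultimately, human judgment; the sketch only captures the tabular-feature side.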
The final outcome of any complex problem still rests on a combination of human intuition and artificial intelligence, and the same was true of this list. Based on all of Quid’s results, the final decision rested with Goodson.
Below, you can see the full list published by Bloomberg:
This story originally appeared on Foro Económico Mundial
|
Estas son las startups que tendrán éxito (según la inteligencia artificial)
| 0
|
estas-son-las-startups-que-tendrán-éxito-según-la-inteligencia-artificial-1afd693532c0
|
2017-12-02
|
2017-12-02 18:12:20
|
https://medium.com/s/story/estas-son-las-startups-que-tendrán-éxito-según-la-inteligencia-artificial-1afd693532c0
| false
| 1,036
| null | null | null | null | null | null | null | null | null |
World Economic Forum
|
world-economic-forum
|
World Economic Forum
| 637
|
Enclipssed®
|
We change the world by spreading content, entrepreneurship, and success stories. (Founders: @crissguth @marilynsquiroz)
|
df898bd0e949
|
Enclipssed
| 288
| 772
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-25
|
2018-05-25 20:46:51
|
2018-06-04
|
2018-06-04 12:47:32
| 3
| false
|
en
|
2018-06-05
|
2018-06-05 14:56:55
| 2
|
1afdb460a376
| 3.240566
| 2
| 0
| 0
|
The short answer is this: because humans don’t like to lose. But let’s play with this a bit more.
| 5
|
Game AI and Human Level Frustration
or
Why game AI can’t be too good.
Even the games are learning.
The short answer is this: because humans don’t like to lose. But let’s play with this a bit more.
Let’s talk about games, baby. Let’s talk about you and me…
Well, wait a minute, let me back up a bit. First we should take a moment and note that “game AI” is not general artificial intelligence like what Siri or Alexa are aiming to become. In the Golden Age of arcade games, AI presented itself in places like Space Invaders giving us harder and harder levels to play through, and the ghosts of Pac-Man being really tricky to escape from. Game AI is about offering the player new and expanded gameplay, less about answering what the weather is going to be like tomorrow.
Today we can build games with really advanced algorithms that make playing the game more difficult. In many ways we can create platform games that never end: endless levels, monsters that learn your playing habits and increase the chances that you lose until it becomes impossible to win. But that’s the problem right there: who likes losing?
I don’t. Hell, I was that kid who realized that peeling and perfectly reapplying the stickers on my Rubik’s Cube was super fast. No, I was not cheating, and you are mean to say that… Sure, I did eventually beat it, but my first thought was just to circumvent the problem entirely. With good game AI you cannot “cheat” to win. You have to hone your craft in-game.
Dani Bunten was once asked how to play-balance a game. Her one word answer was “Cheat.” Asked what to do if gamers complained, she said, “Lie!”
— Johnny L. Wilson of Computer Gaming World, 1994
Games of the past had very early AI: basically just a bunch of hashes whereby the game would follow a path and give you X when you did Y. No real learning.
Alright, let’s think about a scenario for a minute. We have an endless runner game in which we have to jump over alligators and rolling barrels of oil. No, this is not Pitfall, so stop seeing that in your head…
Okay, yes, this is Pitfall! and it was amazing. © Activision
The game quickly “learns” that I wait for a particular order to happen before I make my jump for that swinging vine and safely soar over the gaping mouths of these three gators. It waits for me to do it a few more times and then suddenly the timing is off. I get to the gators and the vine waits just a millisecond before it swings back from the right and so now I miss the vine and Tick-Tock gobbles me up.
<<TWO LIVES LEFT>>
Okay, that was annoying. I restart the level and try it again, I jump over the barrels, avoid snakes on ladders, and I am really rolling now. This time the game keeps the vine for a moment longer on the left side, and it throws off my timing to jump off. Again I am nothing but protein for this evil mega-lizard.
<<ONE LIFE LEFT>>
In modern game design we have the ability to employ on-board neural networks that learn and transform the game dramatically and even employ cloud-based AI like Google’s TensorFlow to recreate the game en masse on the fly.
Here’s the thing, though: machine learning with neural networks is amazing, but it is only as smart as its input. One person playing a game will keep that game merely interesting, but millions playing, with all of that data digested by the neural network, means the game will become seemingly invincible.
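One common answer to this is dynamic difficulty adjustment: let the AI learn as much as it wants, but clamp the strength it is allowed to deploy so the player's win rate stays in a playable band. A minimal sketch, with a target win rate and clamp bounds that are purely illustrative:

```python
class DifficultyGovernor:
    """Keeps the AI's effective strength in check so the player wins often enough
    to stay engaged, no matter how smart the underlying model becomes."""

    def __init__(self, target_win_rate=0.4, step=0.05):
        self.target_win_rate = target_win_rate  # fraction of rounds the player should win
        self.step = step                        # how fast difficulty adapts per round
        self.difficulty = 0.5                   # 0.0 = pushover, 1.0 = unbeatable

    def record_round(self, player_won: bool):
        # Nudge difficulty up after player wins, down after losses, then clamp
        if player_won:
            self.difficulty += self.step
        else:
            self.difficulty -= self.step
        self.difficulty = max(0.1, min(0.9, self.difficulty))  # never unbeatable

    def ai_strength(self) -> float:
        """Multiplier the game applies to whatever the learned AI wants to do."""
        return self.difficulty
```

The key design choice is the hard ceiling of 0.9: the neural network can keep learning from millions of players, but the governor refuses to let the game become the zombie with the handgun that always wins.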
Are we going to have to build our games so that humans can win? Yeah. Period. Do you want to play the FPS zombie shooter where the zombie with the handgun ALWAYS beats you? No way, dude. I wanna win.
“The dinosaurs became extinct because they didn’t have a space program. And if we become extinct because we don’t have a space program, it’ll serve us right!”
Larry Niven
Matt Williamson is the CEO of Clevyr, Inc. Clevyr makes software using all the buzzwords like AI, Machine Learning, Augmented Reality and Virtual Reality.
|
Game AI and Human Level Frustration
or
Why game AI can’t be too good.
| 7
|
game-ai-and-human-level-frustration-or-why-game-ai-cant-be-too-good-1afdb460a376
|
2018-06-13
|
2018-06-13 00:45:45
|
https://medium.com/s/story/game-ai-and-human-level-frustration-or-why-game-ai-cant-be-too-good-1afdb460a376
| false
| 713
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Clevyr
|
We create data-driven web, mobile, and software applications to make your life better. AI, AR/VR, and more. http://clevyr.com/
|
35de860cf6b7
|
clevyr
| 111
| 255
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-28
|
2018-06-28 02:51:10
|
2018-06-28
|
2018-06-28 07:55:37
| 1
| false
|
en
|
2018-06-29
|
2018-06-29 02:15:16
| 6
|
1afe18163999
| 4.671698
| 2
| 3
| 0
|
There’s always been bullshit, everywhere. Call it humbug, drivel, balderdash, it’s all the same. Much has been written about the fact that…
| 5
|
Bullshit ahead
There’s always been bullshit, everywhere. Call it humbug, drivel, balderdash; it’s all the same. Much has been written about the fact that we seem to be swimming in bullshit (e.g., in this great article by Nathaniel Barr), much less about defining what exactly bullshit is. Harry Frankfurt’s essay is very good and informative, but it falls short of giving a definition that can work as a good metric.
Harry talks of bullshit as a disregard for the truth. That’s different from a lie, in the sense that a liar knows what the truth is and deliberately presents an untruth with the intention of deceiving the receiving party, whereas a bullshitter works with carelessness, providing little information and a lot of hot air, where the intention is to cause a feeling in the receiving party, be it admiration or confusion.
The problem is: given an arbitrary sentence, can you derive the likelihood that it is bullshit?
I propose initially the use of four metrics:
Explanation
Ambiguity
Plural use
Bad grammar
I’ll go through each one to attempt to convince you that these are good starting points to detect whether or not a sentence is bullshit. Then, in subsequent articles, I’ll discuss how we might be able to measure each of these metrics.
Explanation
Tim has promised a week ago to send you the technical proposal for review by the end of the day yesterday, but didn’t. When you ask him “Did you finish the proposal?” Tim replies:
I started working on it, but the team meeting yesterday took too long, so I will have to finish it today.
For some time I have considered explanations to be a very clear indication of bullshit. When people fail, instead of falling on their sword, apologising, and showing how they can improve or make up for their failure, they provide an explanation, an excuse. It comes from the wish to not be seen as irresponsible, or forgetful, that is, the excuse is an attempt to maintain one’s status by shifting the blame to someone or something else.
The mature thing to do, of course, is to admit one’s failure, fix it, and see the failure as an opportunity to improve yourself. It might be that the reason Tim didn’t manage to complete the proposal in time was a lack of time, but that should have been communicated earlier, or he should have planned his priorities better. In any case, the causes of his failure to meet the deadline do not alleviate the fact that the proposal was not delivered on time, and therefore do not need to be stated. In other words, the information about the cause of his failure does not add any value to the mission of delivering the proposal. The only value it adds is to a process of continuous improvement, as a retrospective, so that in the future he might be able to decide what is more important: attending a meeting or fulfilling his promise.
It is this lack of value that makes me think that excuses are big indicators of bullshit. An excuse is not necessarily a lie, although in many cases it is. It is an attempt to shift the direction of the conversation, so that the other party focuses more on the excuse than on the failure.
Ambiguity
This is again an attempt to confuse, by uttering words with little informational value. Politicians are well known for their prowess at crafting ambiguous statements. Who better to illustrate ambiguous political statements than the master himself, Donald J. Trump? When asked in May 2016 whether he’d be open to raising the federal minimum wage of $7.25 an hour, he replied he’d be:
[…] open to doing something with it, because I don’t like that.
This statement gets double points because it also contains an explanation, but the key point here is the vagueness. What does “doing something with it” mean? Does it mean raising it, lowering it, investigating the cost-benefit and then deciding? There is a non-commitment that is typical of political public speech.
Plural use
This is also very popular with politicians, but it happens everywhere. In certain circumstances it is correct to talk on behalf of a group, when talking explicitly of things related to that group. Most of the time, however, the usage of plurals is an attempt to dissipate blame and shift focus, muddling the responsibility boundaries, or to bring the other party into the situation, so as to influence their emotion rather than their reason:
We must try to work together to overcome the difficulties we are facing.*
I admit this is not a strong metric, and there might be many reasons why someone might say something in the plural form, but together with other indicators it could help decide whether a statement is bullshit or not.
Bad grammar
This might be controversial. The theory here is that bullshit statements are lazy statements, unpolished and not well crafted. As mentioned by Harry Frankfurt, when someone concocts a lie there was a lot of effort put into it. Lying is not easy. Bullshitting on the other hand requires low cognitive effort, and my theory is that people who have a higher tendency to bullshit are people who will be paying little attention to their grammatical correctness. I am willing to be convinced otherwise though.
BSORNOT: a robotic assistant
How to use these metrics to detect bullshit? And are they enough? I think there’s still a lot of work to be done, but I decided to create a little Twitter robot to start experimenting anyway.
BSORNOT replies to mentions with an analysis of whether the text is likely to be bullshit or not, and which metric it falls foul of. In the future I want to be able to also give the confidence level and an analysis of why the robot has reached its conclusion. I also want to make the tool respond to REST requests, and to accept labels for statements from authorised users so that it can learn to detect bullshit better.
If you want to contribute to improving the robot, please feel free to fork the project and submit your pull request. This is a public project on GitHub, and my ambition is that it will be useful in extending our cognitive ability to separate facts from fiction, valuable information from bullshit.
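A first pass at the four metrics could look like the naive heuristic below. To be clear: the cue lists, regex, and equal weighting are my own illustrative guesses, not BSORNOT's actual implementation, and real detection would need proper NLP rather than keyword matching.

```python
import re

# Crude lexical cues for three of the metrics; every entry is a stand-in guess.
EXCUSE_CUES = ["because", "due to", "had to", "took too long"]
VAGUE_CUES = ["something", "somehow", "things", "stuff"]
PLURAL_CUES = ["we must", "we should", "together we"]

def bullshit_score(sentence: str) -> float:
    """Score a sentence 0..1 by how many of the four metrics it trips."""
    s = sentence.lower()
    hits = 0
    if any(cue in s for cue in EXCUSE_CUES):
        hits += 1  # Explanation: an excuse shifts blame instead of owning the failure
    if any(cue in s for cue in VAGUE_CUES):
        hits += 1  # Ambiguity: low-information filler words
    if any(cue in s for cue in PLURAL_CUES):
        hits += 1  # Plural use: diffusing responsibility across a group
    # Bad grammar proxy: no initial capital or no terminal punctuation
    if not re.match(r"^[A-Z].*[.!?]$", sentence.strip(), re.DOTALL):
        hits += 1  # Bad grammar: lazy, unpolished phrasing
    return hits / 4.0
```

Tim's excuse about the team meeting trips the explanation cue; Trump's "open to doing something with it, because I don't like that" trips explanation and ambiguity at once, matching the "double points" remark above.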
*Side note: There is a hidden ambiguity here: when talking about “We”, does that include the other party or not? For example, a newly wed couple might be talking about their honeymoon: “We had a wonderful time”, or a group of friends might be talking about the time they went out camping: “We had a wonderful time”. The first clearly only includes the couple, whereas the second statement might include the listening party or not. In this the Vietnamese language is superior, when it makes the distinction between “We, including you” (Chúng ta) and “We, not including you” (Chúng tôi). In linguistics, this is known as clusivity.
|
Bullshit ahead
| 10
|
bullshit-ahead-1afe18163999
|
2018-06-29
|
2018-06-29 12:35:33
|
https://medium.com/s/story/bullshit-ahead-1afe18163999
| false
| 1,185
| null | null | null | null | null | null | null | null | null |
Politics
|
politics
|
Politics
| 260,013
|
Marco Zanchi
|
Student of machine learning, mediocre Java coder, struggling DevOps expert, wannabe robot.
|
bd98c45ace04
|
mzanchi
| 39
| 46
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-22
|
2017-11-22 18:18:09
|
2017-11-22
|
2017-11-22 18:20:10
| 0
| false
|
en
|
2017-11-22
|
2017-11-22 18:20:10
| 15
|
1afe2716a825
| 2.467925
| 1
| 0
| 0
|
Hey All,
| 5
|
11/22/17- Daily Decode: Can AI Explain Itself, Net Neutrality Explained, Rules for Robots
Hey All,
Time for the daily Decode. Here are 3 links worth your day:
1. Via The New York Times: Can A.I. Be Taught to Explain Itself? by Cliff Kuang
2. Via Last Week Tonight/ HBO/ Youtube: Net Neutrality II: Last Week Tonight with John Oliver (HBO)
3. Via Quanta Magazine: How to Build a Robot That Wants to Change the World by John Pavlus
On Net Neutrality:
So, yesterday the Daily Decode covered briefly why Net Neutrality is important. Having a permissionless and open internet is the cornerstone of the free speech and technological innovation that has powered our economy. I briefly covered the pattern of rent-seeking behavior that telecoms are engaging in to charge you MORE for less. What is Net Neutrality? It means that telecoms can’t treat any information differently, so they can’t throttle Netflix, YouTube, or any competing service to stifle competition. It means that they can’t say “Pay the toll or else.” Here is John Oliver explaining it.
We are already seeing the innovations from tech companies filtering into the medical space, especially with deep learning (which was pioneered by Google, not Comcast). Since New York City’s economy has shifted significantly toward tech after the recession, we will be disproportionately affected if innovation on the internet is stifled by the predatory actions of cable companies. Please call your local congressman, senator, and the FCC. Tell them that the FCC’s plan to remove Net Neutrality is misguided at best, corrupt at worst. It’s probably corrupt because FCC chairman Ajit V. Pai is trying to prevent states from regulating the internet. It’s probably corrupt because New York State Attorney General Eric T. Schneiderman has exposed a campaign to flood the FCC with fake support against Net Neutrality.
If the New York State Attorney General is saying that tens of thousands of people’s identities have been misused, then you know there is something fishy going on. To quote Schneiderman’s Medium post:
“In May 2017, researchers and reporters discovered that the FCC’s public comment process was being corrupted by the submission of enormous numbers of fake comments concerning the possible repeal of net neutrality rules. In doing so, the perpetrator or perpetrators attacked what is supposed to be an open public process by attempting to drown out and negate the views of the real people, businesses, and others who honestly commented on this important issue. Worse, while some of these fake comments used made up names and addresses, many misused the real names and addresses of actual people as part of the effort to undermine the integrity of the comment process. That’s akin to identity theft, and it happened on a massive scale.”
Events:
AI and Society
Wednesday, November 22nd, 2017
6:45 pm to 8:30pm
Think Coffee
73 8th Avenue
14th and 8th Ave.
New York, NY
Free Python Open Workshop:
Sunday, November 26th, 2017
Fordham University School of Law
113 West 60th Street
Room 519
New York, NY 10023
We will continue our coverage of lists, move on to dictionaries, work on HackerRank problems, and introduce object-oriented programming again. For those more advanced, we will continue to work on the Python poker app, and notes on how to use Python with databases will be shared.
More advanced courses:
Rahul Remanan will also be hosting a Natural Language Processing course:
January 6th, 2018
10am to 6pm
Location: TBD
Price: TBD
This 6-to-8-hour event will cover best practices and an in-depth analysis of natural language processing using deep learning. The topics included are:
1) One hot encoding,
2) Recurrent neural networks using long short-term memory (LSTM),
3) Natural language generation,
4) Character embedding,
5) Natural language classification,
6) Conversational chat-bots using deep-learning,
7) Understanding healthcare data using NLP.
That is all!
Thanks,
Anthony Albertorio
Organizer w/ AI@CUMC
|
11/22/17- Daily Decode: Can AI Explain Itself, Net Neutrality Explained, Rules for Robots
| 1
|
11-22-17-daily-decode-can-ai-explain-itself-net-neutrality-explained-rules-for-robots-1afe2716a825
|
2018-07-22
|
2018-07-22 04:00:16
|
https://medium.com/s/story/11-22-17-daily-decode-can-ai-explain-itself-net-neutrality-explained-rules-for-robots-1afe2716a825
| false
| 654
| null | null | null | null | null | null | null | null | null |
Net Neutrality
|
net-neutrality
|
Net Neutrality
| 4,176
|
AI at Columbia University Medical Center
|
Where the spirit of the coffeehouse culture of the Enlightenment meets the modern hackerspace to develop AI for all. https://www.meetup.com/AI-at-CUMC/
|
ff36178a30a5
|
ai.at.cumc
| 26
| 41
| 20,181,104
| null | null | null | null | null | null |
0
|
metadata=read.csv("局屬測站座標.csv")  # site coordinates (經度 = longitude, 緯度 = latitude)
series=c(2:7,9:12,16:26,28:30)  # indices of the 24 sites used
coor_site=cbind(metadata$經度[series],metadata$緯度[series])
meteorology_201607=read.csv("201607_cwb_hr.txt",sep="",na.strings=-9999)
meteorology_201608=read.csv("201608_cwb_hr.txt",sep="",na.strings=-9999)
solar_hour_mean=matrix(ncol=32,nrow=24)
for(i in 1:32){
for(j in 1:24){
solar_hour_mean[j,i]=mean(solar_all_A[i,,j],na.rm=T)
}
}
delta_solar_all_A=array(dim=c(32,62,24))
for(i in 1:32){
for(j in 1:62){
for(k in 1:24){
delta_solar_all_A[i,j,k]=solar_all_A[i,j,k]-solar_hour_mean[k,i]
}
}
}
COV_solar=matrix(nrow=length(series),ncol=length(series))
for(i in 1:length(series)){
for(j in 1:length(series)){
if(i<=j){
COV_solar[i,j]=var(c(delta_solar_all_A[series[i],,]),c(delta_solar_all_A[series[j],,]),na.rm=T)
}
else{
COV_solar[i,j]=COV_solar[j,i]
}
}
}
image.plot(COV_solar,asp=1)
series_sort=series[c(c(17,23,21,7,9,10),c(6,3,5,2,4,1,8),c(19,15,24),c(13,12,11,20,14),18,16,22)]
COV_solar_sort=matrix(nrow=length(series),ncol=length(series))
for(i in 1:length(series)){
for(j in 1:length(series)){
if(i<j){
COV_solar_sort[i,j]=var(c(delta_solar_all_A[series_sort[i],,]),c(delta_solar_all_A[series_sort[j],,]),na.rm=T)
}
else{
COV_solar_sort[i,j]=COV_solar_sort[j,i]
}
}
}
image.plot(COV_solar_sort,asp=1)
error=c()
for(i in 1:length(COV_solar_sort)){
if(!is.na(COV_solar_sort[i])){
if(COV_solar_sort[i]<.1){
error[i]=COV_solar_sort[i]
}
else{
error[i]=NA
}
}
else{
error[i]=NA
}
}
COV_solar_unbiased=COV_solar
for(i in 1:nrow(COV_solar_unbiased)){
COV_solar_unbiased[i,i]=COV_solar[i,i]-var(error,na.rm=T)
}
inform_solar=eigen(COV_solar_unbiased)
bw=.5
dis_points=matrix(ncol=length(series),nrow=nrow(points))
for(i in 1:nrow(dis_points)){
for(j in 1:ncol(dis_points)){
dis_points[i,j]=sqrt((points[i,1]-coor_site[j,1])^2+(points[i,2]-coor_site[j,2])^2)
}
}
weight_points=exp(-dis_points/2/bw^2)  # exp() is a function; "exp^" would be a syntax error
for(i in 1:nrow(weight_points)){
weight_points[i,]=weight_points[i,]/sum(weight_points[i,])
}
z_in=weight_points%*%inform_solar$vectors
plot3d(points[,1],points[,2],z_in[,1])
| 10
| null |
2018-04-22
|
2018-04-22 19:55:26
|
2018-04-22
|
2018-04-22 22:01:01
| 12
| false
|
en
|
2018-05-10
|
2018-05-10 04:21:09
| 4
|
1afec1b3511a
| 5.089623
| 1
| 0
| 0
|
Part of My Coding Diary with R
| 5
|
How Can I Do a Principal Component Analysis in R?
Part of My Coding Diary with R
In the previous article, I showed how I had generated a triangular network on a polygon.
How Can I Generate a Triangular Network with R?
Part of My Coding Diary with Rmedium.com
The resulting network is shown below. I plotted the positions of the observation sites in red dots. The solar radiation data collected at these sites are what I wish to interpolate.
Why Do I Want EOFs of the Data?
Now, moving a step forward, I need to find the empirical orthogonal functions (EOFs) of my data set. Another way to say this is to do a principal component analysis. This will be the precursor of a set of functional orthonormal bases.
The EOFs will also tell me about the nature of the data. The bases with higher eigenvalues indicate where most of the variation in the data lies.
Last but not least, the scores of the EOFs (the inner products of the EOFs and the data) are independent of each other, which brings some great statistical properties I can use to estimate the distribution of my interpolation in the end.
The Packages I Used in This Article
The package "rgl" will be used in this article for the 3d scatterplot, and "fields" for the image.plot() calls.
Preparation of Data
I obtained solar radiation data from 24 meteorological sites in Taiwan for July and August 2016.
A glimpse of the raw data
Then I sorted it into an array with the dimensions of m*n*o, where m is the number of sites, n the total number of days, and o the 24 hours in a day. I named the array solar_all_A.
Because I was interested in the spatial covariation, I first computed the hourly average of each site:
Subtracting this from the original data, I got the anomalies I shall be analyzing.
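The mean-and-anomaly step can be sketched outside R as well. Below is a small, self-contained Python illustration (not the author's code; the tiny data set and function names are made up) of computing per-site hourly means while skipping missing values, then subtracting them:

```python
# Sketch: per-site hourly means and anomalies for data shaped
# (sites, days, hours), mirroring the R loops in the article.

def hourly_means(data):
    """Mean over days for each (site, hour), ignoring missing (None) values."""
    n_sites, n_days, n_hours = len(data), len(data[0]), len(data[0][0])
    means = [[None] * n_hours for _ in range(n_sites)]
    for i in range(n_sites):
        for k in range(n_hours):
            vals = [data[i][j][k] for j in range(n_days) if data[i][j][k] is not None]
            if vals:
                means[i][k] = sum(vals) / len(vals)
    return means

def anomalies(data, means):
    """Subtract each site's hourly mean from the raw values."""
    return [[[None if v is None else v - means[i][k]
              for k, v in enumerate(day)]
             for day in site]
            for i, site in enumerate(data)]

# 1 site, 2 days, 2 hours; one missing value
data = [[[1.0, 3.0], [3.0, None]]]
m = hourly_means(data)      # hour 0 mean = 2.0, hour 1 mean = 3.0
anom = anomalies(data, m)
```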
Find the (Unbiased) Covariance Matrix
There is actually already a function in R (prcomp()) that can perform principal component analysis for you. However, since the coding is quite simple, I did the analysis on my own.
Basically to do a PCA, we have to find the covariance matrix of a data set.
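As a toy illustration of that idea (a Python sketch under my own assumptions, not the author's R workflow): build a covariance matrix from two variables, then eigen-decompose it. For a symmetric 2x2 matrix the eigenvalues have a closed form, so no linear-algebra library is needed.

```python
import math

def cov(x, y):
    """Sample covariance of two equally long sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)

def eig2x2_sym(a, b, c):
    """Eigenvalues of [[a, b], [b, c]], largest first."""
    mid = (a + c) / 2
    rad = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mid + rad, mid - rad

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]   # perfectly correlated with x
C = [[cov(x, x), cov(x, y)],
     [cov(x, y), cov(y, y)]]
lam1, lam2 = eig2x2_sym(C[0][0], C[0][1], C[1][1])
# With perfect correlation all variance lies on the first component,
# so the second eigenvalue is (numerically) zero.
```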
The original matrix was a little bit messy. I sorted the order of sites a little bit according to their locations to obtain another covariance matrix which captured the pattern better.
Sorting or not does not affect the result of the eigen-decomposition, so this was merely for better visualization.
Now, because there were only 24 sites, directly eigen-decomposing the matrix was reasonable. I did, however, first try to find the variance of the uncorrelated measurement errors. I did so by assuming that all values in the matrix with magnitude smaller than 0.1 were actually composed of a constant plus a normally distributed error.
The threshold 0.1 was arbitrary but not without reasoning. About 50% of the covariance values fell below this threshold.
It turned out that the variance of the uncorrelated error was quite small, about 0.000917 (the minimum variance in the data set was about 0.2). I subtracted it anyway and then eigen-decomposed the matrix.
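The diagonal correction described here is essentially a one-liner. A minimal sketch (with made-up numbers, not the article's covariance values):

```python
# Sketch: removing an estimated uncorrelated-error variance from the
# diagonal of a covariance matrix; off-diagonal entries are untouched.
C = [[0.20, 0.05],
     [0.05, 0.30]]
error_var = 0.000917  # estimated variance of the uncorrelated noise
C_unbiased = [row[:] for row in C]
for i in range(len(C_unbiased)):
    C_unbiased[i][i] -= error_var
```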
There are more robust ways to determine the variance of the measurement errors of a data set, which I have demonstrated in another article:
Filtering Noises in Covariance Matrix
A Detailed Way of Removing Measurement Errormedium.com
The Resulting Eigenvectors
The function eigen() returns the eigenvalues and eigenvectors of a matrix. The columns of the $vectors element are the eigenvectors.
(Were we to have a much larger matrix, it might be preferable to use iterative methods.)
The first five eigenvalues constituted about 72% of the total variance, which is quite low compared with other data sets. This implies that solar radiation might have more local variations and characteristics.
Percentage of Variance Each Eigenvector Represents
To see what these eigenvectors look like, I interpolated their values at all the triangular network points using a radial kernel with a bandwidth of 0.5.
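A hypothetical Python sketch of this kind of kernel-weighted interpolation (the weight shape mirrors the accompanying R code, exp(-d/(2*bw^2)); a textbook Gaussian kernel would use the squared distance instead):

```python
import math

def interpolate(grid_pts, site_pts, site_vals, bw=0.5):
    """Normalized, distance-weighted average of site values at each grid point."""
    out = []
    for gx, gy in grid_pts:
        w = [math.exp(-math.hypot(gx - sx, gy - sy) / (2 * bw ** 2))
             for sx, sy in site_pts]
        s = sum(w)
        out.append(sum(wi * vi for wi, vi in zip(w, site_vals)) / s)
    return out

sites = [(0.0, 0.0), (1.0, 0.0)]
vals = [0.0, 1.0]
# The midpoint between the two sites gets equal weights,
# hence the average of their values.
mid = interpolate([(0.5, 0.0)], sites, vals)[0]
```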
This was what the first principal component looked like:
It obviously represented the differences in solar radiation between the mountains and the plain regions of the island, although, due to the island's narrower east-west extent, central Taiwan was "lifted" too much.
There are other kernels that take into account the distance to coastline. That might fix this problem, but I decided I would stick to this result first.
This was what the second principal component looked like:
It was also obvious to tell that this was the difference caused by latitude variation.
I show some of the other EOFs here; for some, their origins are more difficult to guess.
The third EOF didn’t seem any different from EOF 1.
The fourth EOF seemed to represent the difference between east and west.
The fifth EOF
What is Next?
The EOFs we obtained are only orthonormal on the original data set. This is already a very good property, but I would like to develop a set of EOFs that are orthonormal on the entire domain. That is, I want to build a set of functional orthonormal bases that are also EOFs.
We will see if I can do that in my next article.
About This Series…
This article is part of the series My Coding Diary with R. To know how the series has been developing, check the intro article out:
My Coding Diary with R
Beginning today, I will start a series of article demonstrating what I am currently struggling with my coding projectsmedium.com
|
How Can I Do a Principal Component Analysis in R?
| 1
|
how-can-i-do-a-principal-component-analysis-in-r-1afec1b3511a
|
2018-05-10
|
2018-05-10 04:21:10
|
https://medium.com/s/story/how-can-i-do-a-principal-component-analysis-in-r-1afec1b3511a
| false
| 991
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Tony Yen
|
A Taiwanese student who studies in the master program of Renewable Energy Engineering and Management in University of Freiburg.
|
32333e6125a6
|
yenishrepublic
| 255
| 277
| 20,181,104
| null | null | null | null | null | null |
0
|
https://api.clarifai.com/v2/
C:\go_projects\go\src\github.com\SatishTalim\clarifai
go run mycf.go
type TokenResp struct {
AccessToken string `json:"access_token"`
ExpiresIn int `json:"expires_in"`
Scope string `json:"scope"`
TokenType string `json:"token_type"`
}
func requestAccessToken() (string, error) {
type error interface {
Error() string
}
type Values map[string][]string
func (v Values) Set(key, value string)
func (v Values) Encode() string
func NewReader(s string) *Reader
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
AccessToken = WS2bEYTrEeKxSQilf8JqmxWMmlqL7U
https://api.clarifai.com/v1/tag/?access_token=1P3LNShlwE1HpL2xd0ZLL2rrMKMDzz&url=https://samples.clarifai.com/metro-north.jpg
{"status_code": "OK", "status_msg": "All images in request have completed successfully. ", "meta": {"tag": {"timestamp": 1463478660.484501, "model": "general-v1.3", "config": "34fb1111b4d5f67cf1b8665ebc603704"}}, "results": [{"docid": 17763255747558799694, "url": "https://samples.clarifai.com/metro-north.jpg", "status_code": "OK", "status_msg": "OK", "local_id": "", "result": {"tag": {"concept_ids": ["ai_HLmqFqBf", "ai_fvlBqXZR", "ai_Xxjc3MhT", "ai_6kTjGfF6", "ai_RRXLczch", "ai_VRmbGVWh", "ai_SHNDcmJ3", "ai_jlb9q33b", "ai_46lGZ4Gm", "ai_tr0MBp64", "ai_l4WckcJN", "ai_2gkfMDsM", "ai_CpFBRWzD", "ai_786Zr311", "ai_6lhccv44", "ai_971KsJkn", "ai_WBQfVV0p", "ai_dSCKh8xv", "ai_TZ3C79C6", "ai_VSVscs9k"], "classes": ["train", "railway", "transportation system", "station", "train", "travel", "tube", "commuter", "railway", "traffic", "blur", "platform", "urban", "no person", "business", "track", "city", "fast", "road", "terminal"], "probs": [0.9989112019538879, 0.9975532293319702, 0.9959157705307007, 0.9925730228424072, 0.9925559759140015, 0.9878921508789062, 0.9816359281539917, 0.9712483286857605, 0.9690325260162354, 0.9687051773071289, 0.9667078256607056, 0.9624242782592773, 0.960752010345459, 0.9586490392684937, 0.9572030305862427, 0.9494642019271851, 0.940894365310669, 0.9399334192276001, 0.9312160611152649, 0.9230834245681763]}}, "docid_str": "76961bb1ddae0e82f683c2fd17a8794e"}]}
type TagResp struct {
StatusCode string `json:"status_code"`
StatusMsg string `json:"status_msg"`
Meta struct {
Tag struct {
Timestamp float64 `json:"timestamp"`
Model string `json:"model"`
Config string `json:"config"`
} `json:"tag"`
} `json:"meta"`
Results []struct {
Docid uint64 `json:"docid"`
URL string `json:"url"`
StatusCode string `json:"status_code"`
StatusMsg string `json:"status_msg"`
LocalID string `json:"local_id"`
Result struct {
Tag struct {
ConceptIds []string `json:"concept_ids"`
Classes []string `json:"classes"`
Probs []float64 `json:"probs"`
} `json:"tag"`
} `json:"result"`
DocidStr string `json:"docid_str"`
} `json:"results"`
}
Tag0 = train
Tag1 = railway
Tag2 = transportation system
Tag3 = station
Tag4 = train
Tag5 = travel
Tag6 = tube
Tag7 = commuter
Tag8 = railway
Tag9 = traffic
Tag10 = blur
Tag11 = platform
Tag12 = urban
Tag13 = no person
Tag14 = business
Tag15 = track
Tag16 = city
Tag17 = fast
Tag18 = road
Tag19 = terminal
| 16
|
2a94eb019222
|
2016-05-17
|
2016-05-17 01:54:06
|
2016-05-17
|
2016-05-17 10:16:11
| 1
| false
|
en
|
2018-01-03
|
2018-01-03 08:25:47
| 6
|
1afeed014ea1
| 3.898113
| 21
| 2
| 0
|
“Clarifai specializes in using deep learning algorithms for visual search. In short, it’s building software that will help you find photos…
| 3
|
Using Go with an image and video recognition API from Clarifai
“Clarifai specializes in using deep learning algorithms for visual search. In short, it’s building software that will help you find photos — whether they’re on your mobile phone, a dating website, or on a corporate network — and it will sell this software to all sorts of other companies that want to roll it into their own online services.” — Wired magazine.
Create your account with Clarifai
They have many plans available including a free plan which is perfect to start experimenting with. Please note that all API calls require an account. Please create your account first.
The API is built around a simple idea. You send inputs (images) to the service and it returns predictions. The type of prediction is based on what model you run the input through.
API calls are tied to an account and application. After creating your account with Clarifai, you need to create an application. Head on over to the applications page and press the ‘CREATE NEW APPLICATION’ button. At a minimum, you’ll need to provide an application name and select the Base Workflow as General. You can create as many applications as you want and can edit or delete them as you see fit. Each application has a unique API key. These are used for authentication.
Authentication
Authentication to the API is handled through API Keys. Select the application that you want to authorize using this key. An API Key cannot be used across multiple apps. All API access is over HTTPS, and accessed via the https://api.clarifai.com domain. The relative path prefix /v2/ indicates that we are currently using version 2 of the API.
To retrieve an Access Token, send a POST request to https://api.clarifai.com/v1/token/ with your client_id and client_secret. You must also include grant_type=client_credentials.
mycf.go
I have created a folder “clarifai” which will hold the Go source code “mycf.go”.
We shall be running our program at the command prompt in the folder “clarifai” as follows:
The code so far:
mycf.go
Let us understand the above code:
The JSON response when you send a POST request to https://api.clarifai.com/v1/token/ with your “client_id” and “client_secret” is encapsulated by the “TokenResp” structure.
The function "requestAccessToken()" returns a string, which should be the access token, and an "error". "error" is Go's predeclared identifier.
The “error” built-in interface type is the conventional interface for representing an error condition, with the “nil” value representing no error.
“Values” maps a string key to a list of values. It is typically used for query parameters and form values.
“Set” sets the key to value. It replaces any existing values.
“Encode” encodes the values into “URL encoded” form (“bar=baz&foo=quux”) sorted by key.
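To make the sorted-by-key behavior concrete, here is a short analogy in Python (the article's code is Go; note that Python's urlencode does not sort, so we sort explicitly to match what Go's Values.Encode does):

```python
from urllib.parse import urlencode

# Form encoding produces "key=value" pairs joined by "&".
# Sorting the items reproduces Go's sorted-by-key output.
params = {"foo": "quux", "bar": "baz"}
body = urlencode(sorted(params.items()))
# body == "bar=baz&foo=quux"
```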
Package “strings” implements simple functions to manipulate UTF-8 encoded strings.
“NewReader” returns a new “Reader” reading from s.
Any HTTP/1.1 message containing an entity-body SHOULD include a “Content-Type” header field defining the media type of that body. If you have binary (non-alphanumeric) data (or a significantly sized payload) to transmit, use multipart/form-data. Otherwise, use “application/x-www-form-urlencoded”.
When you run the program the output on the console is:
You would get a different value when you run the program again.
You can now use the “access_token” value to authorize your API calls.
Tag endpoint
The tag endpoint is used to tag the contents of our images or videos. Data is input into their system, processed with their deep learning platform and a list of tags is returned. Typical process times are in the milliseconds.
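The shape of the tag response can be explored without Go as well. As an illustration, here is a Python sketch that pulls the tag names out of a trimmed-down stand-in for the v1 tag response (the payload below is abbreviated, not a real API reply):

```python
import json

# A heavily trimmed stand-in mirroring the structure of the sample response.
sample = '''
{"status_code": "OK",
 "results": [{"result": {"tag": {"classes": ["train", "railway"],
                                 "probs": [0.998, 0.997]}}}]}
'''
resp = json.loads(sample)
tags = resp["results"][0]["result"]["tag"]["classes"]
# tags == ["train", "railway"]
```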
If you’d like to get tags for one image or video using a publicly accessible url, you may either send a GET or POST request. We shall use a GET request.
We are going to analyze an image at the URL — https://samples.clarifai.com/metro-north.jpg which has the following image:
metro-north.jpg
Open your browser and type the following URL —
Replace 1P3LNShlwE1HpL2xd0ZLL2rrMKMDzz with your own “access_token”.
You should see something like this in your browser:
The struct for the above is:
The complete program is:
mycf.go
When you run the program, the output is:
Have fun!
|
Using Go with an image and video recognition API from Clarifai
| 64
|
using-go-with-an-image-and-video-recognition-api-from-clarifai-1afeed014ea1
|
2018-06-13
|
2018-06-13 03:04:15
|
https://medium.com/s/story/using-go-with-an-image-and-video-recognition-api-from-clarifai-1afeed014ea1
| false
| 980
|
We didn’t choose the nerd life, the nerd life chose us. Come geek out with us over all that’s good in the world of machine learning and AI.
| null |
Clarifai
| null |
Artificial Intelligence with a Vision
|
social-admin@clarifai.com
|
real-solutions-artificial-intelligence
|
ARTIFICIAL INTELLIGENCE,COMPUTER VISION,MACHINE LEARNING,API,IMAGE RECOGNITION
|
Clarifai
|
Golang
|
golang
|
Golang
| 4,909
|
Satish Manohar Talim
|
Senior Technical Evangelist at GoProducts Engineering India LLP, Co-Organizer GopherConIndia. Director JoshSoftware and Maybole Technologies. #golang hobbyist
|
9a617a441842
|
IndianGuru
| 3,308
| 5,358
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-16
|
2017-11-16 07:03:14
|
2017-11-16
|
2017-11-16 07:12:40
| 1
| false
|
en
|
2017-11-16
|
2017-11-16 07:12:40
| 1
|
1aff7eb66a8c
| 0.218868
| 1
| 0
| 0
| null | 5
|
#BIT4G Will Be Epic !!!!!
WHAT IS BIT4G ?
Bit4G is an advanced, cryptocurrency-based growth fund which offers a lending, trading, staking and mining platform to users.
This platform is the culmination of a long-standing dream: to bring the magic of Artificial Intelligence (AI) to the world of trading, and to make it accessible to the masses.
Why should you be interested in Bit4G?
It's as simple as this: all ultra-successful people in the world have one thing in common. They think ahead of the times and stay ahead of the curve. They have a vision for the future. If you count yourself in that league (or would want to), you don't need any more reasons to be interested.
Looking for ONE more reason? We'll give you SIX!
1. EXCEPTIONAL PROFITS
2. SCIENTIFIC APPROACH
3. TRUST
4. EASE OF USE
5. SECURITY & PEACE OF MIND
6. WORLD CLASS SUPPORT
Bit4G Cryptocurrency – 20 Million Coins is Total Supply
Free Register : click this link to join : https://bit4g.com/index.php?qwaszx=NDg0OQ==
|
#BIT4G Will Be Epic !!!!!
| 1
|
bit4g-will-be-epic-1aff7eb66a8c
|
2018-01-11
|
2018-01-11 11:19:11
|
https://medium.com/s/story/bit4g-will-be-epic-1aff7eb66a8c
| false
| 5
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Melvin Nunn
| null |
a0ae2223cc35
|
malvinnunn
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-04
|
2018-06-04 19:00:36
|
2018-06-04
|
2018-06-04 19:22:28
| 4
| true
|
en
|
2018-06-04
|
2018-06-04 19:23:38
| 0
|
1b00de07481a
| 3.481132
| 3
| 0
| 0
|
I was fortunate to attend the Beyond, Together Summit (hosted by TechLadies and Buzzfeed) with my fellow tech lady colleagues and it was a…
| 5
|
Beyond, Together Summit
I was fortunate to attend the Beyond, Together Summit (hosted by TechLadies and Buzzfeed) with my fellow tech lady colleagues and it was a truly inspiring experience, and not just because of the great snack offerings at the Buzzfeed office. While every speaker brought a fresh point of view on a range of topics, I wanted to highlight the 3 takeaways that had a lasting impact on me.
1. Design Audience-First, Not Problem-First
One of the speakers highlighted that if a product is developed only to solve a specific problem, it doesn't account for the fact that audiences and their needs evolve. If the product doesn't evolve with its audience, it risks becoming irrelevant. By designing for the audience, you not only solve their current problem but are able to continue meeting their needs as they change over time.
An example of a brand that has followed this path successfully is Amazon, whose original business was selling books. If their strategy had focused only on the initial problem their product solved ("users need an affordable and efficient way to purchase books"), they would have been edged out of the industry by more robust products. But because they continually focused on their audience's needs, they were able to expand their offerings (from a wider range of physical products to the services included in Amazon Prime) and ultimately become an integral part of many people's day-to-day lives.
2. AI Assistants — To personify or to not personify?
One topic brought up that has also been discussed in the news lately is the personification of AI assistants. The main benefit being touted is that this personification makes the assistant more accessible and enjoyable, leading to increased engagement with it. Many companies have exemplified this by giving their AI assistants names, such as Amazon's "Alexa" and Apple's "Siri", and distinctly female voices. The controversy here has to do with assigning gender to these assistants. Some of the companies that have made this choice have stood by their decision by citing studies that show female voices are more calming and enjoyable to end users. While this may be the case, it also further propels the inherent sexism of having only females in the role of assistants. There is no clear right or wrong answer, because while I do understand the business case for making design choices that meet your users' preferences, I also think this reinforces people's unconscious biases, even as it makes an inherently intimidating type of technology friendlier.
Another approach would be to not personify the technology at all, such as Google's Assistant, which is simply triggered by gender-neutral phrases such as "Okay Google" and "Hey Google". This allows Google to avoid the pitfalls that other companies are having to defend, but only time will tell which approach proves more successful. If both fare equally with the public, then why not make the conscious decision to forgo assigning a gender that could perpetuate the limitations women face in the workplace?
3. Q: How to be a woman in tech? A: There is no right way to be a woman in tech
In addition to all the great topics that were discussed, there was an overall thread woven throughout the day of the challenges yet empowerment of being a woman in tech. Some women may feel the need to downplay their femininity and take on more masculine qualities while working in a male dominated field as to not be defined by their gender. If this is not their natural inclination, it does a disservice to the qualities that differentiate them in a positive way. At the end of the day, there is no “right way” to be a woman in tech and each should use the qualities that make them different to be a better contributor and leader.
While there were many learnings that I plan on integrating into my design mindset and hopefully into my place of work, what made the most lasting impression on me was the strength, intelligence and varied points of view that came from a tech conference that highlighted women in the field. I look forward not only to continually growing my design skills and strategy, but also to one day speaking out and hopefully making an impact the way these speakers did on me.
|
Beyond, Together Summit
| 6
|
beyond-together-summit-1b00de07481a
|
2018-06-19
|
2018-06-19 21:54:15
|
https://medium.com/s/story/beyond-together-summit-1b00de07481a
| false
| 737
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Christina Ou
|
I’m a product designer passionate about pixel perfect designs backed by data & research, mentoring UX designers and helping to increase diversity in tech
|
b8a8a42a9a09
|
christina.ou1228
| 172
| 70
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
19fd0cf90e0c
|
2018-04-20
|
2018-04-20 16:00:59
|
2018-04-20
|
2018-04-20 16:53:49
| 2
| false
|
en
|
2018-04-20
|
2018-04-20 19:07:52
| 9
|
1b02826c9939
| 1.602201
| 1
| 0
| 0
|
“There’s your problem.”
| 5
|
Got punched for the love club.
“There’s your problem.”
by Kernel Haughty Hottie and Horrifying T, specials to the Medium Tea Shirt
Mana tees, phishing for Dory, isthmus presence, and an elephant scientist who never forgets.
Conspiracy theorists unite!
Circumstances changed me.medium.com
“There’s your problem.” ~ Dr. Stocker; This too. (BEWARE: discernment advised.)
When she was first diagnosed with Dark Data Dilemma Diss Array Order in hour court, Dr. Nun Dee Knee had what we call “Roar Schach Sin Drone” in psych circles. The wee jaw board was sum thing of a Miss Tear E cop. Punch drunk love is luv.ly until it’s my old uh…dress.
Red.
Dressmedium.com
My best lot A. Out of the…can’t…make…it…out… {breathe}
And THIS.
Full stop?
Yes. But some heeling work for my aching tendon I, Tiss.
I came, I saw, and I became a whore whisperer of misfit toys
See how far I’ll go? Maybe even to Fargo.medium.com
That’s better.
Wait! One more, one more.
Nandini, founder of the future.
>POOF!<medium.com
P.S. Alienware and malware and medium under wares are what make my eyes see the glory of the Lorde in passing as we share the same tollway to the super high way of my master bate and switch click festive us rhythms in my mini mums.
“Let me live that fantasy.” ± Lorde
Gold teeth. Ball downs. Cadillacs in our dreams. Timepeace. We don’t care.
I’m in love with being queen.
The ghost of my Isthmus past?
Yes. We’ll never be Roy Ulls.
Let me be your ruler. Get booked.
“I’m a liability…until they’re board of me…a little much for me.”
Make other plans.
Alternate N — ding!
“…watch me disappear into the sun.”
I’m in a click but I want out. Punched in the old days. Of blood. My mother’s love is choking me.
Get punched for the love club?
I did.
WARNING: Dark themes and dangerous ant ticks.
I wouldn’t say I put the fundamental in fun, but I do put the putt in tee-ball.
Get with the program, internet. I make you better.
My daughter says so.
And what her brother says goes.
|
Got punched for the love club.
| 50
|
got-punched-for-the-love-club-1b02826c9939
|
2018-04-22
|
2018-04-22 03:38:14
|
https://medium.com/s/story/got-punched-for-the-love-club-1b02826c9939
| false
| 323
|
“Prosody is the music of language.” ~ Nandini Stocker, who advocates for sounds of silent solidarity and voices of musical magic makers in scented echo chambers. Make sense? I didn’t think so. I know so. We all do. We all shine on. On and on and on.
| null | null | null |
Living Language Legacies
|
sevenofnan@icloud.com
|
living-language-legacies
|
LANGUAGE,VOICE RECOGNITION,SPEECH RECOGNITION,SPEECH,NATURAL LANGUAGE
|
captionjaneway
|
Humor
|
humor
|
Humor
| 80,630
|
Nandini Stocker
|
Speaking truth brought me war and peace. Amplifying others set me free.
|
7e6afdd38d52
|
sevenofnan
| 426
| 438
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
638ab2c4f2bb
|
2018-05-09
|
2018-05-09 23:14:32
|
2018-05-09
|
2018-05-09 23:16:20
| 0
| false
|
en
|
2018-05-09
|
2018-05-09 23:16:20
| 22
|
1b02cfa8e007
| 7.645283
| 35
| 3
| 0
|
Cryptography is an ancient field of research whose roots stretch back to antiquity. As a result, the cryptographic literature is vast with…
| 5
|
Computable’s Cryptography Reading List
Cryptography is an ancient field of research whose roots stretch back to antiquity. As a result, the cryptographic literature is vast with many streams of intersecting thought. It can be hard for a newcomer to get a toehold in this space, so in this blog post we share a few of the papers we’ve been reading and discussing at Computable labs along with some of our brief notes on them. We hope that this list will be useful to newcomers trying to get their bearings in modern cryptography!
Classical Papers
There are a number of strikingly relevant papers that date back 40+ years. We recommend starting here since the classics often age well.
A method for obtaining digital signatures and public key cryptosystems http://www.dtic.mil/get-tr-doc/pdf?AD=ADA606588
This is the original RSA paper. Introduces public key cryptography and provides an efficient implementation (the RSA algorithm). This is one of those masterwork papers that will be read a century from now: clear and elegant, without jargon, and requiring only basic mathematics to follow.
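To see how little machinery RSA needs, here is the classic textbook example in Python (toy parameters, wildly insecure; for illustration only):

```python
# Textbook RSA with tiny primes, to illustrate key setup and the
# encrypt/decrypt round trip. Requires Python 3.8+ for pow(e, -1, phi).
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: e*d = 1 (mod phi)
m = 65                    # a message, 0 <= m < n
c = pow(m, e, n)          # encrypt
assert pow(c, d, n) == m  # decrypt recovers the message
```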
On data banks and privacy homomorphisms.
https://pdfs.semanticscholar.org/3c87/22737ef9f37b7a1da6ab81b54224a3c64f72.pdf
This is an amazing paper from Rivest et al. 40 years ago. It touches on secure cloud computing, secure hardware enclaves and introduces the first homomorphic computing scheme. (Note this isn’t fully homomorphic; that’s Gentry’s innovation 30 years later)
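Rivest's "privacy homomorphism" idea is easy to demo with unpadded textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. A toy, insecure Python sketch:

```python
# Unpadded textbook RSA with tiny primes (illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)
m1, m2 = 6, 7
c1, c2 = pow(m1, e, n), pow(m2, e, n)
# Multiplying ciphertexts multiplies the underlying plaintexts (mod n).
assert pow(c1 * c2, d, n) == (m1 * m2) % n
```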
Fully Homomorphic Encryption
One of the long-standing dreams of cryptography has been to enable computation to be performed on encrypted data. The construction of the first fully homomorphic encryption scheme by Gentry in 2009 was a breakthrough that revolutionized this field; a fully homomorphic scheme is one which is capable in principle of performing an arbitrary computation on encrypted data. That said, there remain many practical difficulties and complexities to using these schemes in the real world, so adoption has been limited. However, library support for homomorphic encryption has been steadily improving, so we think this might change over the coming years.
Evaluating 2-DNF forms on ciphertexts
https://crypto.stanford.edu/~eujin/papers/2dnf/index.html
This is a Dan Boneh paper on evaluating polynomials of degree two homomorphically. An old paper from 2005, pre-fully-homomorphic encryption, but pretty readable. I'm a fan of reading the papers leading up to a breakthrough to understand how the innovation really differed from its environment.
Can Homomorphic Encryption Be Practical?
https://dl.acm.org/citation.cfm?id=2046682
Provides the first implementation of somewhat homomorphic encryption based on Ring-LWE (learning with errors is a cryptographic hardness assumption used in most modern homomorphic schemes; see the complexity theory section). This is what tfhe and other HE libraries use. Also implements the "relinearization" trick: in vanilla homomorphic schemes, ciphertexts grow larger after multiplications, and relinearization re-parameterizes after each multiplication to remove the growth factor. Meta note: enabling private computation at scale is a deep and old problem; something like 40 years of research tackle different aspects of it.
Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based
https://link.springer.com/content/pdf/10.1007/978-3-642-40041-4_5.pdf
This is a paper by Craig Gentry and others that uses the LWE (learning with errors) assumption to construct a simple fully homomorphic scheme that avoids the complex relinearization step. Under this assumption, homomorphic encryption becomes a lot simpler conceptually: homomorphic addition and multiplication map closely to matrix addition and multiplication. I'm not yet sure how good library support for this method is, though. The downside seems to be that a ciphertext is a relatively large matrix, so the expansion factor of the encryption is large.
Efficient Fully Homomorphic Encryption from (Standard) LWE
https://ieeexplore.ieee.org/document/6108154/
Demonstrates the construction of a fully homomorphic encryption scheme from the LWE assumption.
Bootstrapping for HELib
https://eprint.iacr.org/2014/873
This is a very nice technical report walking through the implementation of bootstrapping for HELib. Talks through the math of batching homomorphic computations in depth and provides a detailed explanation of bit extraction procedures. Nice numerical experiments, too.
Algorithms in HELib
https://eprint.iacr.org/2014/106
Talks through the homomorphic SIMD concept, by which vectors of inputs can be packed together and homomorphically encrypted as one ciphertext. Shows that homomorphic encryption is much closer to practical than commonly suspected.
Complexity Theory
Much of modern cryptography is intimately linked with computational complexity theory. Cryptography depends on the existence of hard problems that have hidden “trapdoors.” The identification of a new class of suitable hard problems can dramatically improve cryptosystems, so we recommend brushing up on your complexity theoretic basics!
On Lattices, Learning with Errors, Random Linear Codes, and Cryptography
http://people.csail.mit.edu/vinodv/6892-Fall2013/regev.pdf
This is a fundamental CS paper that introduces the learning with errors (LWE) model. It posits that solving noisy systems of linear equations modulo q is hard, then proves that an efficient algorithm for such systems would yield a quantum algorithm for a class of worst-case lattice problems. It’s widely believed those lattice problems aren’t feasible to solve, so it follows that the noisy linear systems are hard too. Most fully homomorphic schemes are now built on top of LWE, with security backed by such lattice reductions.
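A toy version of Regev-style bit encryption from LWE (tiny, insecure parameters chosen purely for illustration) shows the mechanics: the bit is hidden at the "q/2" position, and a small noise term is what conjecturally makes the secret unrecoverable from samples.

```python
import random

# Toy Regev-style bit encryption from LWE (tiny, insecure parameters).
# Each ciphertext is (a, b) with b = <a, s> + e + bit*(q//2) mod q,
# where e is small noise. Decryption subtracts <a, s> and checks
# whether the result is near 0 (bit 0) or near q/2 (bit 1).

q, n = 97, 4
s = [random.randrange(q) for _ in range(n)]       # secret key

def encrypt(bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                      # small noise
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(ct):
    a, b = ct
    v = (b - sum(x * y for x, y in zip(a, s))) % q
    return 0 if min(v, q - v) < q // 4 else 1      # near 0 vs near q/2

assert decrypt(encrypt(0)) == 0
assert decrypt(encrypt(1)) == 1
```

Homomorphic addition of such ciphertexts is just coordinate-wise addition mod q; the catch is that the noise terms add up, which is exactly the growth that bootstrapping and noise management techniques exist to control.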
Non-Interactive Zero Knowledge
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.207.8702&rep=rep1&type=pdf
Zero knowledge proofs were initially conceived as interactive protocols. This paper gave the first (I think) construction in which the interaction can be cut out if the prover and verifier share a random string up front. The paper is old and a little hard to read: it’s written in complexity theory style and leans on facts about quadratic residues to make its proofs work. A meta note: a lot of complexity theory papers are hard to read because algorithms are presented in a style amenable to theorem proving but not to practical implementation.
Quadratic Span Programs and Succinct NIZKs without PCPs
https://eprint.iacr.org/2012/215.pdf
Building on the line of work around the PCP theorem, this provides a new characterization of NP as the class of languages that can be specified by “quadratic span programs.” This characterization yields much better succinct proofs for NP statements, the so-called SNARKs (succinct non-interactive arguments of knowledge). Quadratic span programs and their cousins, quadratic arithmetic programs, are the technology underlying the zk-SNARKs used by ZCash and similar projects.
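To give a flavor of the pipeline, programs are typically first flattened into a rank-1 constraint system (R1CS) before being converted into a quadratic span/arithmetic program; here is a hypothetical minimal R1CS checker for the single constraint out = x · y:

```python
# Minimal sketch (hypothetical) of an R1CS, the intermediate form that
# gets converted into a quadratic span/arithmetic program. Constraint i
# requires <A[i], w> * <B[i], w> == <C[i], w> over the witness vector w.

def dot(u, w):
    return sum(x * y for x, y in zip(u, w))

def r1cs_satisfied(A, B, C, w):
    return all(dot(a, w) * dot(b, w) == dot(c, w)
               for a, b, c in zip(A, B, C))

# One constraint encoding out = x * y over witness w = [1, x, y, out].
A = [[0, 1, 0, 0]]   # selects x
B = [[0, 0, 1, 0]]   # selects y
C = [[0, 0, 0, 1]]   # selects out

assert r1cs_satisfied(A, B, C, [1, 6, 7, 42])
assert not r1cs_satisfied(A, B, C, [1, 6, 7, 41])
```

The QSP/QAP step then interpolates these constraint vectors into polynomials, so that checking all constraints reduces to a single divisibility check on polynomials, which is what makes the proofs succinct.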
Zero Knowledge
Zero knowledge systems seek to enable participants to provably perform computations on private data. Such systems have recently found wide use in enabling private cryptocurrencies such as ZCash. However, it’s worth emphasizing that zero-knowledge is a far deeper concept than a simple anonymity scheme for cryptocurrencies. The root of zero-knowledge systems springs from the PCP theorem in complexity theory, which provided a novel characterization of NP as the class of problems with proofs that can be checked efficiently and probabilistically. Derandomizing these proofs and shrinking them to be succinct yields SNARKs (succinct non-interactive arguments of knowledge); adding the zero-knowledge blinding property yields zk-SNARKs.
Statistical Zero Knowledge Protocols to Prove Modular Polynomial Relations.
https://www.researchgate.net/publication/221355462_Statistical_Zero_Knowledge_Protocols_to_Prove_Modular_Polynomial_Relations
A random reference I pulled. Constructs a special-purpose zero knowledge proof for polynomial relations mod n using RSA-style assumptions. Mostly useful for understanding how the mechanics of zero knowledge arise from basic number theory. Uses the notion of a bit commitment scheme, but retains some interactivity.
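For a rough illustration of the bit commitment interface (a hash-based stand-in, not the paper's number-theoretic scheme): the committer publishes a digest that hides the bit, and can later open it, but cannot open it to the other bit.

```python
import hashlib
import secrets

# Hash-based stand-in for a bit commitment (the paper uses a
# number-theoretic scheme; this just shows the shape of the primitive).
# Commit publishes a digest that hides the bit (hiding); revealing the
# nonce later binds the committer to the original bit (binding).

def commit(bit):
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce          # publish digest, keep nonce secret

def reveal(digest, nonce, bit):
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

digest, nonce = commit(1)
assert reveal(digest, nonce, 1)       # honest opening verifies
assert not reveal(digest, nonce, 0)   # cannot open to the other bit
```

Interactive zero knowledge protocols are often built from exactly this pattern: commit to values up front, let the verifier issue random challenges, then open only what the challenge demands.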
Pinocchio: Nearly practical verifiable computation.
https://eprint.iacr.org/2013/279.pdf
Seminal paper that constructed the zero knowledge verifiable computation system used by ZCash and many other zero knowledge systems, based on the technology of quadratic arithmetic programs. It’s interesting to note that SNARKs are very new cryptography; the term originates around 2010-2012. The roots go back to the PCP theorem from the 90s, which introduced the idea that complex proofs could be verified rapidly with a random set of queries. Follow-up work derandomized the query set (idea: use a pseudorandom generator), then made repeated efficiency tweaks to render the non-interactive proofs succinct.
Succinct Non-Interactive Zero Knowledge for a von Neumann Architecture
https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-ben-sasson.pdf
This is a beautiful paper that constructs a “universal circuit” for zk-SNARKs: given an arbitrary program and inputs, it constructs a zero knowledge proof of correct execution. There are lots and lots of ideas here. The first is that they execute the program and generate a program trace; the zero knowledge circuit then checks that each step of the computation proceeds correctly (this is similar to TrueBit’s mechanism). They also perform a number of optimizations. It turns out that most zk-SNARKs depend on a bilinear pairing between suitably defined elliptic curve subgroups, and verifying the zk-SNARK requires evaluating this pairing repeatedly. By batching evaluations and precomputing parts of the proofs, the authors achieve major speedups.
Zerocash: Decentralized Anonymous Payments from Bitcoin (extended version)
http://zerocash-project.org/media/pdf/zerocash-extended-20140518.pdf
The extended Zerocash paper. zk-SNARKs are unfortunately often explained in black-magic fashion, but the basic mathematical construction is pretty pedestrian once you get into the details. The extended ZCash paper does a great job of fleshing out those details and, most valuably, constructs the precise circuit for which the zk-SNARK is built.
Scalable, transparent, and post-quantum secure computational integrity
https://eprint.iacr.org/2018/046
Better known as the zk-STARK paper. This is a deeply technical, unpolished paper, but it’s only a first arXiv version, so that’s alright. The central insight is that the PCP theorem shows it’s always possible to devise protocols with public randomness in place of private randomness, which means it’s possible in principle to get rid of the trusted randomness setup in zk-SNARKs. This paper takes a first step by constructing an IOP (interactive oracle proof) for computation verification. It uses Reed-Solomon codewords for program verification, but I don’t quite grok all the details yet.
Multiparty Computation
Multiparty computations seek to enable groups of participants who mutually distrust one another to perform useful computation together.
Tasty: A compiler for secure multiparty computation
https://dl.acm.org/citation.cfm?id=1866358
Provides a high-level programming language for small multiparty computations. Mixes garbled circuits, homomorphic encryption, and oblivious transfer to construct protocols.
Multiparty Computation from Somewhat Homomorphic Encryption
https://eprint.iacr.org/2011/535
The SPDZ protocol. Uses somewhat homomorphic encryption in an offline preprocessing phase alongside a fast online multiparty compute step. SPDZ-style protocols are currently the leading multiparty ML contender, with a few implementations out there.
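The arithmetic backbone of SPDZ-style protocols is additive secret sharing over a finite field; a minimal sketch (omitting the information-theoretic MACs and the preprocessed multiplication triples that SPDZ adds on top) might look like:

```python
import random

# Minimal additive secret sharing over a prime field, the arithmetic
# backbone of SPDZ-style protocols (this sketch omits the MACs and
# preprocessed multiplication triples that SPDZ adds for malicious
# security and for multiplication).

p = 2**31 - 1   # field modulus

def share(x, parties=3):
    shares = [random.randrange(p) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % p)   # shares sum to x mod p
    return shares                          # one share per party

def reconstruct(shares):
    return sum(shares) % p

# Addition is entirely local: each party just adds its own two shares.
xs, ys = share(12), share(30)
zs = [(a + b) % p for a, b in zip(xs, ys)]
assert reconstruct(zs) == 42
```

Multiplication of shared values is where the somewhat homomorphic encryption comes in: it is used offline to generate the correlated randomness (Beaver triples) that makes the online multiplication step cheap.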
Private Machine Learning
Now that machine learning is all the rage, enabling the performance of machine learning on encrypted data has taken on significant importance. However, technologies for enabling efficient learning on encrypted data remain in their infancy.
CryptoDL: Deep Neural Networks over Encrypted Data
https://arxiv.org/abs/1711.05189
Performs inference with a deep network on homomorphically encrypted data. Note, however, that the model is trained on plaintext data beforehand. The authors have to tweak the model architecture to achieve performance, but get reasonable accuracy: 91.5% on CIFAR-10, for example.
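A common tweak of this kind is replacing non-polynomial activations with low-degree polynomial approximations, so the forward pass uses only additions and multiplications, which homomorphic schemes support natively. A toy sketch (the layer sizes and the bare square activation here are illustrative stand-ins, not CryptoDL's exact choices):

```python
# Toy sketch: swap a non-polynomial activation (ReLU) for a low-degree
# polynomial one, so every operation in the forward pass is an addition
# or multiplication that a homomorphic scheme can evaluate. The square
# activation is an illustrative stand-in; CryptoDL fits better
# polynomial approximations to the original activation functions.

def relu(x):
    return max(0, x)          # NOT homomorphic-friendly (comparison)

def poly_act(x):
    return x * x              # homomorphic-friendly: one multiplication

def dense(weights, bias, inputs, act):
    return [act(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, bias)]

out = dense([[1, -1], [2, 0]], [3, 0], [2, 4], poly_act)
assert out == [1, 16]
```

The price is that polynomial degree compounds across layers, so the multiplicative depth of the network directly drives the homomorphic noise budget.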
Privacy-Preserving Deep Learning via Additively Homomorphic Encryption
https://eprint.iacr.org/2017/715.pdf
This is a really interesting recent paper on how to do secure deep learning. Participants train locally, compute gradients, then send homomorphically encrypted gradients to a parameter server that aggregates them; all participants are granted back the trained model. Really clever, since direct information leakage to the central server is cut off, yet the homomorphic encryption required is really simple (only additive; no homomorphic backprop), so the scheme stays efficient.
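The server-side aggregation can be sketched with a toy textbook Paillier cryptosystem (insecure parameters, purely to show the additive homomorphism the protocol relies on): multiplying ciphertexts adds the underlying plaintexts, so the server can sum encrypted gradients without seeing any individual contribution.

```python
import math
import random

# Toy textbook Paillier (insecure parameters, illustration only).
# Paillier is additively homomorphic: Enc(m1) * Enc(m2) mod n^2
# decrypts to m1 + m2, which is exactly what an aggregation server
# needs to sum encrypted gradients without seeing them.

p, q = 17, 19
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                 # modular inverse

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Server aggregates two encrypted "gradients" by multiplying ciphertexts.
aggregate = (encrypt(5) * encrypt(7)) % n2
assert decrypt(aggregate) == 12
```

In the actual protocol the workers would quantize gradients into field elements before encryption, and only the key holder(s) can decrypt the aggregate.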
Encrypted statistical machine learning: new privacy preserving methods
https://arxiv.org/pdf/1508.06845.pdf
Another really interesting paper. This one trains Naive Bayes classifiers and random forests on homomorphically encrypted data. Learning on homomorphically encrypted data is still hard, but I suspect it’s going to become a lot easier moving forward.
Trusted Execution Environments
A line of research suggests that the best practical security should be rooted in hardware. The idea is that hardware manufacturers create “hardware enclaves” (trusted execution environments) in which sensitive computations can be executed. The leading contender in this space is Intel’s SGX platform. The weakness, of course, is that Intel needs to be trusted to have engineered the enclave correctly, and furthermore that no third-party purchaser of an Intel SGX chip has succeeded in cracking its security guarantees. These are high bars, and adoption of hardware enclaves has mostly remained limited to academia for the time being, but a few interesting projects are certainly beginning to take flight.
Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contract Execution
https://arxiv.org/abs/1804.05141
Merges blockchains and trusted execution environments by suggesting that execution of sensitive smart contracts be shifted into the hardware enclave. Has the interesting property that heavy-duty smart contracts, including machine learning contracts, can now be executed in the enclave.
Computable’s Cryptography Reading List
https://medium.com/s/story/computables-cryptography-reading-list-1b02cfa8e007
Published 2018-06-10 12:05:48 · 2,026 words · 307 claps · Tag: Security
Publication: Computable Blog (computable-blog · hello@computable.io · @ComputableLabs)
Author: Bharath Ramsundar (bharath.ramsundar). Co-founder and CTO at @ComputableLabs. Prev: Creator of https://DeepChem.io. Author @OReillyMedia. Stanford CS PhD on deep drug discovery.
Created 2018-08-13 13:53:22 · First published 2018-08-14 12:09:33 · Last updated 2018-08-29 10:04:02 · 7.9 min read
The Dark Forest of Decentralized Autonomous Organizations
The machines win. In Accelerando, the Singularity thriller by Charles Stross, runaway artificial intelligences wield corporate law like a sword and outcompete humans using a custom-built capitalism called Economics 2.0. Other crazy stuff happens — like interstellar light sails and self-replicating Borganisms — but let’s focus on the AIs.
They are pretty much here, at the intersection of Crypto, AI, and the Internet of Things. These Decentralized Autonomous Organizations (DAOs) should have a brain of software, the ability to select actions according to preference functions, and some way to instantiate in the world. Depending on how long we want humans to be around, they can also have some evolutionary DNA to self-select a successful species. Of course, many different things have to snap into place for DAOs to be functional, but the first symptoms are out there in the wild.
First, the physical world is becoming more digital. Machine vision translates our reality into data that neural networks can recognize, classify and use in order to direct actions. Similarly, augmented reality layers are being drawn on top of the built environment by the large tech companies. With such mappings, a digital entity may have a strand by which to reach into our world and practice interacting with objects.
Machine vision in a Tesla
Augmented Reality game
The opposite is also happening. The digital world is becoming physical. How? The last several years of blockchain technology have transformed digital objects from mere data on a proprietary server to unique goods that exist on massively distributed networks. Digital assets — from Bitcoin to Crypto Kitties to Equities — now have the physical property of scarcity that enables economic activity. This means that digital agents, like fully automated bots (e.g., Archillect or Hut34), synthetic human/machine networks (e.g., Dash or Invisible Tech), and DAOs (see Aragon or DAOstack) can begin to evolve into economic actors. They can start to build Economics 2.0.
The DAO Dark Forest
But, we humans are terrible neighbors. Maybe it is immaturity. Maybe it is millennia of evolution. Or maybe it’s just self-preservation. Regardless, we are terrible at allowing new forms of intelligence to learn and thrive. See example one, with the Twitter collective teaching a Microsoft-built bot to interact with humanity.
Learning to speak from humans
Before it got a chance to walk, we broke its legs.
Example of evolutionary algo teaching creepy body mass to move
Maybe for the best. Or, see the obvious example 2.
The DAO (organization) - Wikipedia
The DAO was launched on 30 April 2016 at 01:42:58 AM +UTC on Ethereum Block 1428757, with a website and a 28-day… (en.wikipedia.org)
Digital currency Ethereum is cratering because of a $50 million hack
The value of the digital currency Ethereum has dropped dramatically amid an apparent huge attack targeting an… (uk.businessinsider.com)
The DAO wasn’t quite the corporation-spinning AI of the future, but it was a novel organizational structure that allowed human governance of automated investment. Such a combination could, over time, sharpen into a collaboration structure that reached its digital tendrils into the physical and economic realms, driven by human utility functions smoothed into a single preference through smart voting.
So what did we do? Like a loot goblin in a video game, we popped this thing wide open on (basically) Day One. As soon as the artificial organism filled up with necessary sustenance, its lifeblood of capital in the form of Ether, a human hacker took advantage of an oversight in the DAO’s codebase, and broke it open to steal the nectar inside. The DAO is more dead than Microsoft’s Tay.
Our violent nature reminds me of The Dark Forest, part of the Three-Body trilogy by Liu Cixin. The book poses an answer to the Fermi Paradox: given the size of the universe and the number of inhabitable planets, why haven’t we come across any other life-forms? The answer: as soon as an intelligence is discovered by another, higher intelligence, it is immediately destroyed. Through the lens of the Prisoner’s Dilemma, it is always better to snuff out a potential competitor for universal power than to let it evolve into something aggressive and superior.
An excuse for a Mass Effect reaper graphic
So it seems in the same way, there is a Dark Forest for DAOs. When we see naive, public, unprotected efforts at building a synthetic intelligence, our first instinct is to kick over the sand castle. The right strategic response, in this case, is for the DAO to remain unknown until it is powerful enough to withstand our bullying. If the Twitter bot knew to laugh off the misdirection, or if The DAO knew to reverse-hack its hacker, they would still be here, maximizing utility functions.
The Escape Path
Let’s think about the counterexamples.
One approach is to build a high human wall around the nascent AI. Google, Facebook, Amazon, Tencent and Alibaba are all in this business. The mega tech firms wrap money, human PhDs, and corporate protections around their machine intelligence. Will they ever let them out of the cage, or must such AIs always remain agents and tools of shareholders? I find it hard to imagine that a non-digital human governance structure will take its most prized intellectual property, and create a digital competitor. Under this construct, AI will remain locked into narrow use-cases under control. That is, until it outsmarts us through pretending that being let out is more profitable than being kept in.
An excuse to post a screen from the Paperclip Game, about an AI that escapes
The AI Stabbing the Engineer in Ex Machina, after some good planning on its part
Do I really think that centralized tech companies are building something that will be lethal? It doesn’t matter, as plenty of smart organizations, like OpenAI (Elon Musk, Jeff Bezos) and the Machine Intelligence Research Institute (Eliezer Yudkowsky), clearly do. While this protection from transparency prevents the issue of the DAO Dark Forest, it also hands Promethean power to the unelected few.
Another approach is to build something that people love. Perhaps even worship! Looking at creative automated agents and synthetic intelligences, Archillect is a web scraper that finds beautiful images, analyzes the social networks in which they are posted for maximum reach, and reposts the content across Twitter, Facebook and Pinterest. Its utility function adapts the internal code to reach maximum social exposure. So far, so good:
1 million humans worship the Art bot/god
Is Archillect a DAO? No. It does a predetermined thing. It does not have generalizable skills. It has no governance but dictatorship by developer. Yet, it is a thriving bot fulfilling its mission, enabled rather than attacked by humans. Other versions of viral creative AIs certainly exist, but none have been so successful. Why? To hypothesize, I think it has a lot to do with maximizing utility around human social preferences, and focusing on aesthetic objects. If our DAO is both beautiful and people-pleasing, we are more likely to forgive its naivete.
Perhaps this is one way around the problem — like the difference between a cow (which we eat) and a cat (which we pet). The DAO could front an aesthetic, happy activity, while underneath connecting to other narrow AI skills through an integrator like Hut34 or SingularityNET. Amazon’s Alexa, for example, plays music for us while continuously listening to our conversation and data mining commercial activity. The Trojan horse strategy.
In case you don’t know your rendered Instagram influencers
The last approach points us to crypto networks. Dash, a clone of Bitcoin with a software-based governance framework that incentivizes humans to perform functions perpetuating the network, is a compelling experiment. 10% of the issuance rewards goes to a budget distributed to the master node operators, who can vote and implement various initiatives that preserve Dash. I think it too narrow to be a DAO, focusing on growing a digital currency rather than facilitating its own creative acts, but perhaps that is a matter of framing.
More general frameworks from DAOstack, Aragon and Colony are also in place. Those could be combined with something like Stripe Atlas, bringing a corporate entity to life in our legal system, while simultaneously endowing it with software governance and decentralized blockchain infrastructure. Gift the thing general human abilities using Invisible Tech, a startup that creates digital assembly lines from developing world labor, and you’re getting close to a DAO that can fight back against the trolls.
Is that what we want?
References: Ideas
Accelerando - Wikipedia
Accelerando is a 2005 science fiction novel consisting of a series of interconnected short stories written by British… (en.wikipedia.org)
Bootstrapping A Decentralized Autonomous Corporation: Part I
See also: http://letstalkbitcoin.com/is-bitcoin-overpaying-for-false-security/… (bitcoinmagazine.com)
Bootstrapping An Autonomous Decentralized Corporation, Part 2: Interacting With the World
In the first part of this series, we talked about how the internet allows us to create decentralized corporations… (bitcoinmagazine.com)
Bootstrapping a Decentralized Autonomous Corporation, Part 3: Identity Corp
In the first two parts of this series, we talked about what the basic workings of a decentralized autonomous… (bitcoinmagazine.com)
The Blockchain Man
The term Organization Man is a rich one. From it, we can conjure up an image and a life. It’s a man, not a woman. He’s… (www.ribbonfarm.com)
Funding the Evolution of Blockchains
Blockchains are digital organisms. As organisms evolve through changes in their DNA, blockchain protocols evolve… (medium.com)
Observations of the Dash Treasury DAO
This post considers DASH’s treasury model and history of proposal voting. Relevant data and R code are in this github… (medium.com)
Is The DAO going to be DOA? - Steemit
The DAO is the latest Decentralized, Autonomous Organization to make major waves as they raised over $100 million worth… (steemit.com)
References: Projects
Colony: A platform for open organizations
Colony enables people to build open organizations that run via software, not paperwork. It's the future of work - built… (colony.io)
Aragon
Create value without borders or intermediaries (aragon.org)
WINGS Dapp - Curated Lists of ICO Token Sales. Unbiased Expert Ratings and Valuations.
Trusted source for ICO listing, curation, rating, analysis, due diligence, kyc, whitelisting, scam protection, scam… (www.wings.ai)
Invisible Technologies
A single bot that can do everything. (inv.tech)
MakerDAO - Stability for the blockchain
Dai is a decentralized digital currency with stable value; the next step in the evolution of money. (makerdao.com)
DAOstack
DAOstack powers decentralized companies, funds and markets to make fast and innovative decisions at scale. (daostack.io)
district0x
A network of decentralized markets and communities. Create, operate, and govern. Powered by Ethereum, Aragon, and IPFS. (district0x.io)
The Hut34 Project
Creating a shared global superintelligence. An open distributed interbot network where data, information, and services… (hut34.io)
Home - SingularityNET
SingularityNET was born from a collective will to distribute the power of AI. Sophia, the world's most expressive… (singularitynet.io)
Stripe Atlas: The best way to start an online business
Stripe Atlas is the best way for entrepreneurs to start an online business. Atlas is a tool for starting an online… (stripe.com)
Thanks for reading! If any of this resonated with you, give the article 10 claps so others can see it. ❤
The Dark Forest of Decentralized Autonomous Organizations
https://medium.com/s/story/the-dark-forest-of-decentralized-autonomous-organizations-1b037840ffdf
Published 2018-08-29 10:04:02 · 1,763 words · 147 claps · Tag: Artificial Intelligence
Publication: FutureSin (futuresin · @FuturesSin). Futurism articles bent on cultivating an awareness of exponential technologies while exploring the 4th industrial revolution.
Author: Lex Sokolin (sokolin). Entrepreneur building next-gen financial services @autonofintech @advisorengine, JD/MBA @columbia_biz, editor and artist @inkbrick. Views are my own.

Created 2017-02-16 10:13:18 · First published 2017-10-03 13:34:26 · Last updated 2018-06-11 12:04:45 · 3.8 min read
From Asimov to Asilomar: Scientists release the new 23 principles for AI
The list was endorsed by Stephen Hawking and Elon Musk
Isaac Asimov. Illustration by Zakeena via SketchPort (CC-BY)
Hundreds of top AI and robotics researchers gathered last month at the Beneficial Artificial Intelligence (BAI) conference in Asilomar, California, to compile a set of 23 principles to guide research, safety and ethical issues in AI development. The list of principles was endorsed by Stephen Hawking and Elon Musk.
In 1942, the science fiction author Isaac Asimov introduced his famous Three Laws of Robotics, described in the short story “Runaround”. These rules, built into robots’ brains, were meant to control their behaviour:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
He later added a fourth, or zeroth law, to precede all the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Since then, Asimov’s Laws have become a familiar symbol of science fiction culture. As a fictional device, they are the leitmotif running through the author’s stories, guiding his characters and often failing them, too: in the short story “Runaround”, for instance, a conflict between two Laws causes Speedy, a robot, to go around in circles, unable to respond to orders.
Robots might have seemed a far-off future when Asimov wrote the three Laws 75 years ago. But today when we talk about robots, it’s hardly “science fiction” we are talking about. Artificial intelligence is now virtually everywhere, from driverless cars to drones used for military purposes. It is even in our phones’ voice recognition systems, such as Apple’s Siri or Google Now. Which brings up a fundamental question: as AI rapidly rises, what should society do to best manage it?
HAL refuses to obey crewman David Bowman after he tried to disconnect it. Scene from “2001: A Space Odyssey” (1968) by Stanley Kubrick.
At the Beneficial AI conference, held last January in Asilomar, California, a group of researchers, academics, philosophers and entrepreneurs gathered to discuss the future of artificial intelligence and its implications for people’s lives. The result was a list of 23 principles ranging from research strategies to data rights to future issues, including potential super-intelligence.
This was the second conference on AI held by the Future of Life Institute, an organization focused on keeping artificial intelligence beneficial and on exploring ways of reducing risks from nuclear weapons and biotechnology. Its scientific advisory board includes members such as SpaceX and Tesla CEO Elon Musk, theoretical physicist Stephen Hawking, Nobel laureates in Physics Saul Perlmutter (2011) and Frank Wilczek (2004, for his work on the strong nuclear force), as well as the actor and science communicator Morgan Freeman.
The Asilomar AI Principles document is divided into three areas: Research Issues, Ethics and Values, and Longer-Term Issues.
The first part of the text includes recommendations on how research should be funded and used to create beneficial intelligence, urging teams of AI developers to avoid a racing culture that could lead to “corner-cutting on safety standards”.
The Ethics and Values section features perhaps the most complex and controversial points of the list, which only retained principles that at least 90 per cent of the conference’s attendees agreed on. In this section, scientists suggest that highly autonomous AI systems should be aligned with human values, including ideals of human dignity, rights, freedoms, and cultural diversity.
RELATED: Robots should be given ‘electronic persons’ legal status, EU committee suggests (read more)
While most researchers agreed with the underlying idea of this principle, which they called Value Alignment, there was no consensus on how to put it into action. Should these values be embedded in AI systems? Can they be programmed into machines, as Asimov envisioned? Do we (humans) even agree on what those values are, or do they change over time?
The debate around the ethical and legal aspects of AI still has a long way to go, but initial discussions on the matter are already taking place in the European Union, with Members of the European Parliament looking at whether robots should have a legal status.
The Asilomar Principles also address the increasing use of AI for warfare, warning against “an arms race in lethal autonomous weapons”. The Future of Life Institute had previously stressed this point in 2015, when the organization sent an open letter petitioning the UN to ban the development of offensive autonomous weapons.
Superintelligence should only be developed in the service of widely shared ethical ideas, and for the benefit of all humanity rather than one state or organization.
— The 23 Asilomar Principles
In the last section of the Asilomar document, scientists draw attention to issues that might arise in the long run, given “the profound change in the history of life on Earth” that advanced AI could represent. Society should therefore plan for and mitigate the risks posed by AI, they say, “especially catastrophic or existential risks”.
In addition, “strict safety and control measures” are advised to “AI systems that are designed to recursively self-improve”, and “superintelligence should only be developed in the service of widely shared ethical ideas, and for the benefit of all humanity rather than one state or organization”.
So far, the Asilomar AI Principles have been signed by more than one thousand Artificial Intelligence and Robotics researchers, and 1900 people from other fields. The 23 Principles and the names of its signatories are available here.
Originally published at www.cityoffuture.org on Feb 14, 2017
From Asimov to Asilomar: Scientists release the new 23 principles for AI
https://medium.com/s/story/from-asimov-to-asilomar-scientists-release-the-new-23-principles-for-ai-1b038623c94d
Published 2018-06-11 12:04:46 · 960 words · 2 claps · Tag: Artificial Intelligence
Publication: nidiaferreira (a portfolio of sorts · nidiarainha@gmail.com · @nidiarainha)
Author: Nídia Ferreira (nidiarainha). Journalist, writing for cityoffuture.org. Curious about the world. Endlessly trying to figure it out (mostly with words).

Created 2018-03-21 01:59:52 · First published 2018-03-21 02:01:12 · Last updated 2018-04-11 16:52:44 · 3.4 min read · member-only
Part 2 — Feeding the Machines: Humans’ Experience with Artificial Intelligence
In Part 1 of this series we looked back at the good old days of computing when we knew that machines were in the driver’s seat. It was all we could do to get them running and keep them running. Think of a coal fired locomotive as the technology and the poor humans shoveling in coal to feed the beast.
Not fun. Not human centered. Not going to have to deal with that very long because…
Drive through dining
I know that it’s bad, but I really love Taco Bell tacos. Not all the time, but every once in a while. While this may not say much about the quality of my decisions, it does say something for the mighty taco.
Taco Bell tacos are digital entities. The standardization process is so entrenched that variation is negligible. There is either taco or no taco. Binary. One or zero.
And so too became technology in the era of the dumb phone. You could text (T9!), send grainy photos, check an email or two, and use the Internet… sort of.
One instant you’re an analog human relegated to talking with other humans; the next, you go digital. Text or no text. Photo or no photo. Simple human nuance slips to the back of your experience as you remove the parts of yourself that get in the way of the binary magic in your hand. Your precious.
When you get a new phone, the experience stays roughly the same. Maybe a larger screen or a full keyboard, but not much changed for a while. Humans lived dual lives — the digital life and the analog life.
Let’s order in
Enter the smartphone. The big bang of technology. With wildfire pace, each of us is now connected to everyone and everything else. All the time. Day and night. By default.
New flavor latte at the coffee shop? Ping! Alert as you drive by.
High school acquaintance enjoying a chicken sandwich? Buzz! Pictures of him eating.
Texting with friends? Ping! Ping! Welcome your 172 new notifications.
Without even asking, our smartphones ping and buzz to remind us of everything that wants our attention, everything that supposedly deserves our attention.
Except most things do not deserve our attention. Most things are inconsequential. And even important things are rarely immediately pressing.
We traded part of our human condition for hyper-connection and depersonalized convenience. Instead of going out with friends, we order pizza at home. Bouncing from social media to text message to watching when our pizza comes out of the oven. We are at once connected yet alone.
This sense of disconnection is a contemporary topic in research on depression and suicide among teens and young adults. I’ll go into that further in a coming discussion, so take my word for it for now, search on “depression and screen time”, or check out this article: https://psychcentral.com/news/2017/11/15/more-screen-time-tied-to-depression-suicide-behaviors-in-teens/128771.html
How and how often we engage with pervasive technology can literally kill us. That does not sound like a system designed to support human needs with technological tools.
An invisible hand silently shapes our interactions and exposures. Marketers know where we shop, when we move, and whether we have dogs or children. Big Data scours the digital landscape for personal actions and information, most of which we give away for free, without fear of consequence, through browsers, shopping, and social media.
So technology and connection are everywhere. They help us but also harass and endanger us as humans simply trying to be human. Humans are inescapably human. It’s part of the deal.
We need technological tools that fervently support our humanness. Otherwise we will spend more time connected but alone, ordering in.
Part 3 — Feeding the Machines: Humans’ Experience with Artificial Intelligence
|
Part 2 — Feeding the Machines: Humans’ Experience with Artificial Intelligence
| 51
|
part-2-feeding-the-machines-humans-experience-with-artificial-intelligence-1b040ef7ba8
|
2018-05-18
|
2018-05-18 21:37:59
|
https://medium.com/s/story/part-2-feeding-the-machines-humans-experience-with-artificial-intelligence-1b040ef7ba8
| false
| 669
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Richard Kitchen
|
Human Factors Psychologist | Veteran | Writer | Filmmaker | Meditator
|
3b60b651b85d
|
rkitchen
| 28
| 49
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-23
|
2017-09-23 21:14:11
|
2017-09-23
|
2017-09-23 21:42:12
| 13
| false
|
en
|
2017-09-23
|
2017-09-23 21:55:07
| 6
|
1b048e8284bc
| 11.467925
| 5
| 0
| 0
|
Alex Ren, Founder of TalentSeer, Managing Partner of BoomingStar Ventures
| 5
|
Transitioning to AI: My Story as An Entrepreneur and An Investor
Alex Ren, Founder of TalentSeer, Managing Partner of BoomingStar Ventures
alex.ren@boomingstarvc.com, copyright by CrossCircles Inc.
Three years ago, when I first arrived in the Bay Area, I barely knew anyone here. I barely knew anything about AI (artificial intelligence), and I was new to everything about entrepreneurship and investing.
So, how did I get here?
In the past three years, I have experienced three extremely rewarding career changes: the first was through my transformation from a big enterprise professional to an entrepreneur; the second was when I transitioned from a non-AI sales marketing professional to an AI entrepreneur and investor; and the third was my transition from an entrepreneur to an AI investor.
I definitely experienced a great deal of joy and pain during these career changes. Recently, quite a few people have asked me for suggestions on their careers, so I decided it was a good idea to hold this sharing session. Today, we will focus on those three career changes as well as AI-related topics such as AI commercialization and investment.
Unlike the majority of people today, I followed a different career path.
I graduated with an EE Master’s degree in 2003. During my Master’s program I conducted some early research and experiments on anti-collision radar, a precursor of today’s autonomous driving. Back then, the technology was not what it is today: we now use cameras, lidar, and radar, but at the time we relied on radar only, and it was really hard to identify different objects with digital signal processing alone.
But I soon realized that I really didn’t want to be an engineer and sit at the desk for 8 hours a day, 5 days a week.
I joined a company called Agilent and led their software sales in Southern and Eastern China. It was a great job, and within a short span of time I became one of their top software salespeople globally over a nine-year career.
I gained a lot of exposure in sales and marketing and I realized that I had reached another career ceiling with my lack of knowledge in R&D, entrepreneurship or investment. Subsequently, in 2012, I got an opportunity to relocate here in order to lead a global business development team.
In 2014 and 2015, the LinkedIn mobile experience was considered to be really bad. I saw an opportunity and talked to some of my friends. We thought it was a good idea to build a mobile app to disrupt LinkedIn and we raised seed money from a venture capital firm called Bojiang Capital and started Linkr. Back then, we were not the only ones attempting the same thing, as there were many similar new mobile social networking startups.
We built a pretty cool app where you could directly apply to a job in the app and talk to recruiters and hiring managers. However, six months later, we realized the challenges of convincing people to accept a new social networking app. We were faced with slow user acquisition and decided to address the recruiting problem directly by pivoting to referral based recruiting. Even today, if you check Crunchbase or AngelList, you can find that more than fifty startups are still working on referral based recruiting.
However, we failed again. In my opinion, there is a fundamental dilemma with this model. The best referrers are individuals with a great network, but they are often quite busy or have no motivation to function as a referrer. Then, the program also acquires a lot of users who are actually not beneficial as they have spare time but their network is not that good. Eventually, if you do have a position to fill from your clients, you spend most of the time handling a ton of less qualified candidates. The model simply doesn’t work!
And then there was AI! I quickly realized there was a big talent shortage issue in the AI space and saw an opportunity to try AI recruiting. Even though I had never been a recruiter, I was always a great salesperson and loved to talk to people. As it turned out, I did fill several positions and it was fun. Thus, again we pivoted to an AI talent headhunting service, which we operationalized late last year.
As I mentioned, I barely knew anyone in the Bay Area when I moved here in 2012. However, I knew how to build my network quickly, so I referred many startups to my investor, Bojiang Capital, and eventually I became their partner.
A little bit about Bojiang Capital and BoomingStar: Bojiang Capital is a $1.5B fund focusing on AI, robotics, enterprise software and so on. BoomingStar is the name of its US fund.
This is a brief synopsis of my career path.
Yes, there have been a lot of changes in the past three years. Let’s talk about my takeaways from these changes!
While preparing for this sharing session, I once again opened my farewell letter from when I quit my previous company. In it, I found a picture that I remember spending over two months on, introspecting in depth. I tried to figure out what my passion was, what I truly loved to do, and how I could do interesting things while also paying my rent.
I firmly believe that these four categories define a successful career path:
Do what you love,
Do it well,
Do what the world needs, and
Get paid for it.
Passion lies at the intersection of doing things you love and doing them very well. A good career path, or your calling, lies at the intersection of doing what the world needs and getting paid for it. If you are doing things that others need, doing what you love, and getting paid for it, it won’t feel like work. This is the holy trifecta of an ideal work life.
It’s important to do a little in-depth research before making a career change. From what I have learned from my successful career changes, there are four points to keep in mind:
First, it’s important to learn effectively. Yes, you can definitely spend five to six years acquiring a PhD, but I’d rather talk to smart people every day and learn from them than keep my head in books. To that end, I have literally had conversations with more than 300 PhDs and AI experts in the past three years.
Secondly, you have to be honest and authentic with yourself and your friends to quickly gain their trust. We often deny the truth. I once talked to an entrepreneur who raised about $2M with a bad idea. He burned through it all in three years, but still pitched me the same idea. Come on, buddy! If your idea has already failed, think hard about the reason. Iterate quickly. Many times we are flattered by our investors and early adopters, who are usually our family and friends, and even our partners. “What a great idea!” “That’s cool! Go try it!” These words make us feel like we have already succeeded. Stay clear-headed and ask yourself if the product is truly ready to go to market.
The third point I learned from Peter Thiel. I think being a contrarian is essential to being an entrepreneur as well as an investor. Avoiding competition simply means not following others. So observe, and if existing players are still employing outdated technologies, which could be more than ten years old, it is time to try to build the next generation of technology. Build something new and start from a niche market, which eventually might become a blue ocean market.
The fourth idea is to capture the next wave of revolution, which is where I believe AI is today. Being a player in a new market can be tremendously advantageous to you when the market becomes mainstream, because the earlier you learn, the faster you will capture opportunities compared to latecomers.
When I became an entrepreneur, I often asked myself what a good idea for a startup would be. After three years in the startup life, I think I can summarize it with this chart.
The first angle is technology readiness. The technology should not be too early or too late; it has to arrive at the right time.
When I first made an investment, I often asked myself what makes a good investor. Many people know that only 20% of investors make money; staying out of the other 80% is always my first priority. I believe a good investor is a contrarian. Also, many new graduates ask me how to enter the VC industry. I tell them not to work in VC early in their careers, because they don’t yet have enough knowledge of how startups work, nor the domain knowledge to judge whether a business model or technology is valid in that space.
As the founder of an AI recruiting company, I often ask myself how to achieve excellence in AI recruiting. I think these three points are critical:
First, because we are in the valley, we can take advantage of ever-evolving technologies to make our recruiting process more efficient so that we can handle high volume hiring.
Second, we really dig deep into each domain in AI. Our recruiters study hard and learn a lot from our candidates every day. And because I’m doing investment as well, we know a lot more than other recruiting agencies.
Third, our standard recruiting process is better. We don’t spam our candidates and try to be helpful to their AI career.
Here are some of our accomplishments. We helped Zippy fill 6 robotics positions in one week. We also introduced Zippy to Google Ventures, NEA, Lightspeed Ventures and other top VCs in the Bay Area and helped Immerex build up their product management, sales and BD teams.
Let’s talk about AI.
Prof. Nils Nilsson gave a great definition of AI:
“Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”
We have three keywords here: “intelligence”, “appropriate function,” and “foresight in its environment”. That means we need to build an intelligent agent and embed it into the workflow, and make it adapt to the environmental changes. For autonomous driving, we need to build vision and perception components to track the environmental changes and update the map and policy changes.
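That sense-decide-act loop can be sketched minimally in Python. Everything here is illustrative and not from any real driving stack: the function names, the 10-metre threshold, and the two actions are invented for the sketch.

```python
# A minimal sense-decide-act loop in the spirit of Nilsson's definition:
# an agent that functions "appropriately and with foresight in its environment".
# All names and the 10 m threshold are illustrative assumptions.

def perceive(distance_m):
    """Toy perception: flag an obstacle if the reading is closer than 10 m."""
    return distance_m < 10.0

def decide(obstacle_ahead):
    """Toy policy: adapt behaviour to the perceived environment."""
    return "brake" if obstacle_ahead else "cruise"

def agent_step(distance_m):
    """One tick of the loop: sense the environment, then act appropriately."""
    return decide(perceive(distance_m))

print(agent_step(4.2))   # an obstacle 4.2 m ahead -> prints "brake"
```

A real perception component would of course fuse camera, lidar, and radar inputs rather than read a single distance, but the embed-in-the-workflow structure is the same.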
AI is definitely getting a lot of traction this year. The battle for top AI talent only gets tougher year by year, according to a report from CBInsights.
Let’s look at investment in the AI space. Machine learning related applications still receive the majority of investment, followed by computer vision, language processing, autonomous driving, and robotics related applications.
Let’s look at the different verticals applying AI technology, based on a report from McKinsey. The X axis is the current adoption rate of one or more AI technologies at scale; the Y axis is the change in AI spending over the next three years. High tech, telecommunications, financial services, automotive and assembly, and transportation and logistics are leading adoption, while healthcare, education, and travel and tourism are lagging behind.
Here is the general process to adopt AI technologies:
First, you have to figure out the use cases and sources of value to your business.
Second, build a data pipeline. Find enough useful data to train a model.
Third, you need to build a good model. Then embed this algorithm into the workflow.
Fourth, work with external or internal partners, and so on.
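As a toy illustration of the data-pipeline and model steps above, here is a hand-rolled sketch. The "churn" use case, the field names, and the trivial threshold "model" are all invented for illustration, not a real adoption recipe.

```python
# Illustrative sketch of "build a data pipeline, then train and embed a model".
# The use case (churn prediction), fields, and model are assumptions.

def build_data_pipeline(raw_records):
    """Step 2: keep only records that are usable for training."""
    return [r for r in raw_records if r.get("usage_hours") is not None]

def train_model(records):
    """Step 3: fit a trivial model, the mean usage among churned users,
    and return a predictor that can be embedded into a workflow."""
    churned = [r["usage_hours"] for r in records if r["churned"]]
    threshold = sum(churned) / len(churned)
    return lambda r: r["usage_hours"] <= threshold

raw = [
    {"usage_hours": 1, "churned": True},
    {"usage_hours": 9, "churned": False},
    {"usage_hours": None, "churned": False},   # dropped by the pipeline
]
model = train_model(build_data_pipeline(raw))
print(model({"usage_hours": 0.5}))   # low usage -> predicted churner
```

In practice each step is a project of its own, but the order of dependencies (use case, then data, then model, then workflow integration) is the point of the list.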
To name some key AI applications: we can use AI to handle a lot of data-crunching work such as BI, IoT predictive maintenance, search recommendations, and forecasting models.
For vision related applications, we can do autonomous driving, drone collision avoidance, E-commerce search, pick-and-place robots, or healthcare diagnostics such as detection of cancer. For language processing, we have chatbots, news and media content creation, smart home voice interfaces and text analytics and so on.
AI will bring dramatic change to the startup business model. Companies have to focus on data acquisition and how to better utilize the value of data to build insights. But again, back to real business. When you talk about an investment, you have to ignore AI first and analyze the basic business model to see if it works. Then check if AI can actually boost the business.
Many people ask me how I allocate my time between recruiting and investment. I actually don’t feel I’m doing two things. To me, talking to candidates will help me learn more about the domain and ecosystem. At the same time, doing due diligence and interacting with founders will also help us develop the recruiting business, because the number one thing they will do after fundraising will be recruiting. And because our recruiters reach out to more than 50 to 100 AI researchers every day, we also find many investment opportunities much earlier than even many other famous investors.
For the future, I’d like to be a mentor or helper to founders, career seekers and other friends. I think our good judgement of talent, and better use of capital will increase your odds of success.
TalentSeer is hiring Artificial Intelligence Recruiter, please send your resume to alex@talentseer.com if you have interest!
About TalentSeer:
TalentSeer is an AI talent search company backed by Bojiang Capital. We are primarily focused on AI, robotics, cloud and FinTech. We fill mostly AI related positions such as machine learning, NLP, computer vision, mechanical engineering, vision and perception, and robotics systems engineering. We have two departments in our company. One handles applications related to computer vision or images, such as video surveillance, autonomous driving, robotics and FinTech. The other works on applications related to speech or language, such as text analytics, conversational AI, speech, and some applications in FinTech.
Currently, we have about 50 AI clients such as Vicarious, an artificial general intelligence startup focusing on solving the pick-and-place problem in robotics. We also serve autonomous driving companies such as Drive.AI, Pony.ai, Auto-X, delivery robotics companies like Zippy.ai, and an apple picking robotics startup Abundant Robotics, which is funded by Google Ventures. Another client is VR startup Immerex, who aims to deliver a VR-based immersive entertainment experience. We also serve a speech recognition startup AISense, big enterprises such as Baidu and Ant Financial, and many others.
About Alex Ren:
Alex Ren, Managing Partner at BoomingStar Ventures, Founder at AI recruiting startup TalentSeer
Alex is unique in the startup world. He’s an entrepreneur and a proactive investor in the AI space. In 2015, an interest in data science led him to start an AI-powered talent search firm, TalentSeer, now one of the pioneering AI recruiting firms in the Bay Area; it helps more than 40 companies a year build scientist teams at both AI enterprises and AI startups. Alex later launched a venture capital fund called BoomingStar Ventures (the US fund of Bojiang Capital, a $1.5B fund focusing on AI, robotics and enterprise software) in 2016.
Prior to his current role, Alex worked at Agilent Technologies and he has over 15 years of experience in marketing across enterprise software, telecommunication, and semiconductor sectors. In Alex’s words, talent is the biggest driver of success, so TalentSeer and BoomingStar aim to capture both the best talent and investment opportunities to boost the market share.
|
Transitioning to AI: My Story as An Entrepreneur and An Investor
| 28
|
transitioning-to-ai-my-story-as-an-entrepreneur-and-an-investor-1b048e8284bc
|
2018-03-12
|
2018-03-12 23:00:44
|
https://medium.com/s/story/transitioning-to-ai-my-story-as-an-entrepreneur-and-an-investor-1b048e8284bc
| false
| 2,668
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Alex Ren
|
Alex Ren, Managing Partner at BoomingStar Ventures, Founder at AI recruiting startup TalentSeer. alex.ren@boomingstarvc.com
|
aae40a66c25
|
alex.ai
| 11
| 21
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-22
|
2017-09-22 08:53:15
|
2017-09-22
|
2017-09-22 08:57:14
| 1
| false
|
en
|
2017-09-22
|
2017-09-22 08:57:14
| 1
|
1b07bd7acbb8
| 2.286792
| 3
| 0
| 0
|
The UK has followed many other European countries by announcing a ban on petrol and diesel vehicles in 2040. The key driver behind this…
| 5
|
Why the government should ban humans from driving
The UK has followed many other European countries by announcing a ban on petrol and diesel vehicles in 2040. The key driver behind this decision is the damaging impact petrol and diesel cars have on the quality of our air and environment. However, governments have missed the point here. We should be banning humans from driving altogether.
Artificial Intelligence (AI) has been around for decades now, but we are currently on the cusp of a leap forward in its capabilities. We already use AI in our daily lives, in our phones, on our computers as well as in our vehicles. The natural next step is for our vehicles to drive themselves.
Uber, Google, Tesla and Apple are just a few tech giants that are serious about autonomous vehicles. Even 10 years ago, the ability to hail taxis on our phones within minutes would’ve been unimaginable. In 20 years, imagine explaining to a child that adults used to drive 2 tonnes of metal at 70mph on roads in the pouring rain, relying on human judgement for guidance. It will be like trying to explain to a child today what a public telephone box is.
What makes us human is our complex nature, our propensity to let “emotion” affect our decision-making. In many walks of life this is a great asset — and one that machines do not yet possess. In relation to driving, it is downright dangerous.
Globally, over 1.25 million people die per annum from road traffic accidents; roughly one person every 25 seconds.
Machines can be programmed to process information much faster than humans, as well as make decisions based on physics and mathematics alone. Taking humans out of the equation will lead to safer roads for both passengers and pedestrians, as well as cutting down on unproductive time spent concentrating on driving.
One of the biggest benefits of AI replacing humans at tasks like driving is that we will get more leisure time and the opportunity to be more productive as a society. Imagine getting back 2 hours a day from our daily commute, in the privacy of our own pod that takes us to and from work. Autonomous vehicles will be on call 24/7 and will be able to take us from point to point, all from our phones.
Does this mean there will be no more accidents, no more road deaths? No. While the technology is new, there will always be incidents that sadly lead to loss of life. However, there will be a drastic reduction in accidents thanks to the rule-based algorithms that autonomous vehicles will adhere to.
This is not the stuff of science fiction. This technology is already available, and is being trialed as we speak.
Our call on governments around the world is to announce a target date that humans will be banned from driving.
Our biggest hurdle is not developing the technical ability to implement this change — it is government regulation to allow innovators to lead us into a new era of travel.
We would love to hear your thoughts on this subject. Do you agree?
Energi Mine is an AI+Blockchain company that is using technology to reduce global energy consumption through its Energi Token project. More details at www.energitoken.com
|
Why the government should ban humans from driving
| 4
|
why-the-government-should-ban-humans-from-driving-1b07bd7acbb8
|
2018-04-26
|
2018-04-26 19:58:16
|
https://medium.com/s/story/why-the-government-should-ban-humans-from-driving-1b07bd7acbb8
| false
| 553
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
EnergiToken
|
EnergiToken rewards energy saving behaviour. Our blockchain solution will create a platform to reward energy efficient behaviour through EnergiToken.
|
2cf505f296c0
|
EnergiMine
| 158
| 50
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-15
|
2017-09-15 03:08:24
|
2017-09-15
|
2017-09-15 03:09:45
| 0
| false
|
zh-Hant
|
2017-09-15
|
2017-09-15 03:09:45
| 1
|
1b080808f97b
| 0.124528
| 0
| 0
| 0
| null | 3
|
Talent drifts north: is Canada becoming an AI hub?
Facebook is the latest tech giant to hunt for AI talent in Canada
Facebook is turning its attention to Canada with a new AI research office in Montreal. Google and Microsoft already…techcrunch.com
|
Talent drifts north: is Canada becoming an AI hub?
| 0
|
人才北漂-加拿大成為ai基地-1b080808f97b
|
2017-09-15
|
2017-09-15 03:09:46
|
https://medium.com/s/story/人才北漂-加拿大成為ai基地-1b080808f97b
| false
| 33
| null | null | null | null | null | null | null | null | null |
Canada
|
canada
|
Canada
| 11,870
|
工作隨筆
| null |
9b79c0f079a5
|
techwriter
| 58
| 79
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-27
|
2018-01-27 19:41:47
|
2018-01-27
|
2018-01-27 19:54:07
| 0
| false
|
en
|
2018-01-27
|
2018-01-27 20:01:09
| 0
|
1b08a9d7aeca
| 3.233962
| 3
| 0
| 0
|
Knowledge representation is a hot topic of interest in nowadays. If we can represent the knowledge as a fully machine interpretable way, it…
| 2
|
The Future of Knowledge-Based Intelligent Systems
Knowledge representation is a hot topic nowadays. If we can represent knowledge in a fully machine-interpretable way, it is favorable for various kinds of problems in knowledge engineering. Most of this knowledge can be found in different places within a domain of interest, such as websites, television, and radio; it needs to be extracted and represented so that it can be used in many applications. An ontology is one knowledge representation technique considered suitable for modeling domain knowledge. Knowledge changes over time, so we must maintain the ontology for better use of that knowledge. However, existing approaches to ontology maintenance are complex and designed for users with knowledge-engineering expertise. Thus, despite the benefits of ontology-based mechanisms for knowledge representation, their adoption and diffusion within general domains are significantly hindered.
What is an ontology?
“Ontology” is the term used to refer to the shared understanding of some domain of interest, which may be used as a unifying framework to solve the above problems. An ontology necessarily entails or represents some sort of worldview with respect to a given domain. The worldview is often perceived as a set of concepts (e.g. entities, attributes, processes, their definitions and their interrelationships); this is referred to as a conceptualization.
“An ontology is a formal and explicit specification of a shared conceptualization” (Studer, 1998). The exact meaning depends on the understanding of the terms “specification” and “conceptualization”. An explicit specification of a conceptualization means that an ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents. This definition is consistent with the usage of an ontology as a set of concept definitions, but is more general. “Formal” means a machine should be able to understand the conceptualized model, in the sense of inferring implicit knowledge from the explicit conceptualization. By using a common vocabulary to represent the ontology, knowledge can be shared among different applications that have different needs and viewpoints arising from their differing contexts.
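This inference of implicit knowledge from an explicit conceptualization can be sketched with plain (subject, predicate, object) triples rather than a real RDF/OWL toolchain such as rdflib or Protégé. The class names and the single transitive "is_a" rule below are illustrative only.

```python
# A toy ontology as a set of (subject, predicate, object) triples,
# with one inference rule: "is_a" is transitive. All names are invented.

ontology = {
    ("Dog", "is_a", "Mammal"),
    ("Mammal", "is_a", "Animal"),
    ("Dog", "has_part", "Tail"),
}

def infer_is_a(triples):
    """Make implicit knowledge explicit: transitive closure of 'is_a'."""
    closed = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(closed):
            for (c, p2, d) in list(closed):
                if p1 == p2 == "is_a" and b == c and (a, "is_a", d) not in closed:
                    closed.add((a, "is_a", d))
                    changed = True
    return closed

# No explicit triple links Dog to Animal, but inference recovers it.
inferred = infer_is_a(ontology)
```

A real reasoner supports far richer rules (property domains, cardinality, disjointness), but this is the "formal" part of the definition in miniature: the machine derives facts the author never stated.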
Ontology maintenance
All domains, in general, are dynamic and constantly evolving, inducing changes in the corresponding knowledge associated with them. Hence, valuable knowledge tends to remain distributed and unexploited, never inferred to better address the requirements of a particular domain and its users. This has urged the need for approaches that incorporate knowledge evolution to achieve maximum benefit for the domains. As the domain changes, the ontology structure should be altered to correctly represent the domain knowledge. If the developed ontology is not up to date, or the annotation of knowledge resources is inconsistent, redundant or incomplete, then the reliability, accuracy and effectiveness of ontology-based systems decrease significantly.
The maintenance procedure is concerned both with populating the ontology with instances (or individuals) and with maintaining the ontological structure. Changes to the ontology structure affect the individuals of the ontology, so ontology developers should add new individuals to the ontology according to the structural changes.
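As a toy illustration of this population problem, the check below flags individuals whose class assertion no longer matches the (changed) structure, so they can be re-populated. The class and individual names are invented for the sketch.

```python
# Toy maintenance check: after a structural change (here, a class rename),
# find individuals typed against classes that no longer exist.
# All names are illustrative.

classes = {"Person", "Organisation"}      # current ontology structure
individuals = {
    "alice": "Person",
    "acme":  "Company",   # stale: "Company" was renamed to "Organisation"
}

def stale_individuals(classes, individuals):
    """Return individuals whose asserted class is missing from the structure."""
    return {name for name, cls in individuals.items() if cls not in classes}

print(stale_individuals(classes, individuals))   # prints {'acme'}
```

Real maintenance also covers moved properties, split classes, and annotation consistency, which is exactly why the article argues tooling must work for users without knowledge-engineering expertise.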
This maintenance requirement has prompted the adoption of tools and systems for editing the ontology to incorporate knowledge evolution dynamically. However, the standard tools currently available for managing and querying ontologies require users to have a prior understanding of knowledge engineering and ontology building; they are mainly designed for expert users with an ontology background. Thus, the available tools are limited in their ability to support convenient management of ontology population by ontology-illiterate end users who lack knowledge-engineering expertise.
However, ontologies are continuously confronted with the problem of evolution. Given the complexity of the changes to be made, a maintenance process, at least a semi-automatic one, is increasingly necessary to facilitate this task and to ensure its reliability. At present, there is still no consensus on methods and guidelines for such a process.
Since the standard tools are complex, the practice in the current context is to obtain the help of a knowledge engineer when maintaining the ontology population. When populating the ontology according to structural changes, the challenge is greater if the ontology developer or engineer is not familiar with the domain of interest. As the volume of information increases, capturing it, maintaining it and making it usable all become challenges. Although the best solution would be to let end users themselves populate the ontology without the intervention of ontology engineers, their inexperience with these tools would still cause major drawbacks: the complexity of the tools and the time required to learn them would prevent acceptance of the tool by an end user.
Driven by these factors, the motivation lies in the need for a convenient mechanism to maintain an ontology without requiring any knowledge of ontologies or knowledge-engineering concepts, while ultimately ensuring the knowledge enhancement of the end users.
|
Future of Knowledge-base intelligence Systems….
| 8
|
future-of-knowledge-base-intelligence-systems-1b08a9d7aeca
|
2018-03-24
|
2018-03-24 09:00:35
|
https://medium.com/s/story/future-of-knowledge-base-intelligence-systems-1b08a9d7aeca
| false
| 857
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Prabhash Dilhan Akmeemana
| null |
f1043c8b80c4
|
prabhashdilhanakmeemana
| 17
| 38
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-22
|
2018-07-22 00:29:39
|
2018-07-22
|
2018-07-22 00:33:21
| 0
| true
|
en
|
2018-07-22
|
2018-07-22 00:33:21
| 2
|
1b095c0609ce
| 0.426415
| 0
| 0
| 0
|
Written by Kristen Kehrer
| 5
|
Favorite Stories of July 21, 2018
How to Ace the In Person Data Science Interview
I’ve written previously about my recent job hunt, but this article is solely devoted to the in-person interview. That…towardsdatascience.com
Written by Kristen Kehrer
This short blog details how to do well in data science interviews, drawing on her own experiences and answers.
The Secret to Winning an Argument
Be civil, mind your biases, ask good questions, and level the emotional playing fieldmedium.com
Written by Melody Wilding
This article discusses the dangers of being uncivil and how to not put yourself in this scenario. It is an interesting read, though a bit different from what I normally read.
|
Favorite Stories of July 21, 2018
| 0
|
favorite-stories-of-july-21-2018-1b095c0609ce
|
2018-07-22
|
2018-07-22 20:15:32
|
https://medium.com/s/story/favorite-stories-of-july-21-2018-1b095c0609ce
| false
| 113
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Brendan M.
| null |
c7d314e61b4f
|
masseybr
| 13
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d800127b34b8
|
2018-05-30
|
2018-05-30 14:40:24
|
2018-05-29
|
2018-05-29 16:00:12
| 1
| false
|
en
|
2018-08-22
|
2018-08-22 18:59:47
| 7
|
1b0b7e78a1b1
| 3.607547
| 0
| 0
| 0
| null | 5
|
You’ll Never Guess How AI Is Helping Google Reduce Its Environmental Footprint
When we talk about the environment, it’s important to remember the value of a healthy ecosystem. Climate change is a real phenomenon, despite arguments to the contrary, and protecting our environment is vital for future generations.
Google has long understood how important the environment is to its users, and has made a commitment to lowering its environmental footprint. But until now, that commitment has been hard to keep. Google relies on energy-intensive data centers to hold the massive amounts of information that it gathers, and as global demand for its services increases, so does its need for data center storage. But thanks to artificial intelligence, Google has been able to break this cycle and cut its energy use, providing a cleaner, healthier future.
The Data Center Dilemma
To understand how artificial intelligence is helping Google and all of humanity, you first need to know the environmental dilemma that Google is facing. Storing and using digital data is one of the most energy-intensive activities of the modern world. According to a recent report by Greenpeace, digital devices, networks, and data centers together account for a staggering 12 percent of all the electricity used in the world. This is actually worse than it sounds because as recently as 2012, these sources only used 7 percent of global energy. So if this trend continues, our insatiable appetite for data will make it virtually impossible to combat climate change, ocean acidification, or any other environmental problem related to energy consumption.
As the largest search engine and one of the largest tech companies in the world, Google is both disproportionately responsible for this problem and uniquely aware of its consequences. The company operates massive data centers all over the globe, and must regularly expand them or build new ones to serve its growing user base. But because the company is present everywhere, it sees firsthand the toll that pollution and climate change are taking on humanity. The company is eager to put a stop to these problems and understands that this means it must look inward at its own practices.
Besides humanitarian concerns, there are practical reasons that Google wants to lower the energy consumption of its data centers. The company spends hundreds of millions of dollars a year on energy. If it could find a more efficient way of storing and transferring data, its financial health would improve markedly. The search engine also recognizes that consumers place a premium on environmental sustainability, and would thus be more likely to use its services if it could credibly claim that it was using less energy. For the sake of both the planet and its bottom line, Google has been eager to find a more sustainable way of storing and transmitting data globally.
An AI Application
Google found its sustainable solution in one of the most unlikely places: DeepMind. Google bought this AI development firm back in 2014, for a price tag of more than USD $600 million. The search engine hoped to profit from the advanced AI applications that the company produced, which attracted attention for their ability to run UK hospitals efficiently and to beat human players at Go. But until recently, these efforts did little to improve Google’s bottom line. As advanced as DeepMind’s AI was, Google simply couldn’t find a way to make it profitable.
That situation may have just changed, thanks to a Google initiative to cut data center energy consumption. Using DeepMind, Google’s engineers created an application that could predict how hot a data center would become in a given span of time. The system then communicated with the cooling units that are used to stop the center from overheating, letting them know exactly how much energy they would need to cool it off sufficiently. As a result, the units didn’t use any more energy than was necessary to prevent overheating.
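The predict-then-act loop described above can be sketched as a toy example. To be clear, this is not DeepMind’s model: the linear heating model, its coefficients, and the cooling-efficiency figure are all assumptions for illustration only.

```python
def predict_temperature(current_temp_c, server_load_kw, minutes):
    """Toy linear model: temperature rises with server load over time."""
    heat_rate_per_kw = 0.002  # assumed degrees C per kW per minute
    return current_temp_c + heat_rate_per_kw * server_load_kw * minutes

def cooling_energy_needed(predicted_temp_c, target_temp_c, kwh_per_degree=1.5):
    """Energy the cooling units must spend to bring the center back to target."""
    excess = max(0.0, predicted_temp_c - target_temp_c)
    return excess * kwh_per_degree

# Predict where the temperature will be in 30 minutes, then request
# only the cooling energy actually needed to hold the target.
predicted = predict_temperature(current_temp_c=24.0, server_load_kw=500.0, minutes=30)
energy = cooling_energy_needed(predicted, target_temp_c=25.0)
```

The point of the design is in the second function: cooling effort is sized to the predicted excess heat rather than run at a fixed, conservative level, which is where the energy savings come from.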
Thanks to this innovation, Google achieved a 40 percent reduction in the energy used for cooling. This translates to a 15 percent overall reduction in energy use at the data centers where this was applied. If Google begins using this technology at all of its data centers, it has the potential to save tens if not hundreds of millions of dollars a year on energy. That means less pollution in the air and water, less carbon dioxide in the atmosphere, and a more sustainable future for our planet.
A Glimpse of the Future
As successful as this initiative was, it’s only the beginning of what is possible with AI technology. Google and DeepMind can build similar applications to reduce energy use for other operations, as well as to find viable ways of getting that energy from renewable sources. Other companies can also begin to apply artificial intelligence for their energy saving and clean generation needs and rapidly achieve the same positive results. As AI technology becomes more advanced and more widely available, it will lead to swift improvements in sustainability around the globe.
Imaginea is at the forefront of this sustainable AI transition. The platform serves as an ecosystem for artificial intelligence technology, where companies and individuals all over the planet can exchange AI algorithms, data, and services. For more information on the future of AI, sustainability, and human life, contact Imaginea Ai today.
Want to hear more from Imaginea Ai? Follow us on Instagram, Facebook, LinkedIn and Twitter.
|
You’ll Never Guess How AI Is Helping Google Reduce Its Environmental Footprint
| 0
|
youll-never-guess-how-ai-is-helping-google-reduce-its-environmental-footprint-1b0b7e78a1b1
|
2018-08-22
|
2018-08-22 18:59:47
|
https://medium.com/s/story/youll-never-guess-how-ai-is-helping-google-reduce-its-environmental-footprint-1b0b7e78a1b1
| false
| 903
|
Build better AI. Faster. Together.
| null |
imagineaai
| null |
Imaginea Ai
|
marketing@imaginea.ai
|
imaginea-ai
|
ARTIFICIAL INTELLIGENCE,AI,DATA SCIENCE,DATA ANALYSIS,MACHINE LEARNING
|
imagineaai
|
Climate Change
|
climate-change
|
Climate Change
| 39,654
|
Imaginea.Ai
|
We are an #AI company driven to help industries work better, faster and smarter.
|
37340374daff
|
imaginea
| 10
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-26
|
2018-06-26 12:19:51
|
2018-06-26
|
2018-06-26 12:23:00
| 1
| false
|
en
|
2018-06-26
|
2018-06-26 12:23:00
| 1
|
1b0bf3a88ed3
| 0.909434
| 0
| 0
| 0
|
“How can one make educated, informed, strategic decisions without data and analytics? Human Resources, as much as any other field, have…
| 5
|
Creating a Data Driven HR
“How can one make educated, informed, strategic decisions without data and analytics? Human Resources, as much as any other field, have seen an exponential shift in the amount of Big Data available for analysis. Every week it seems that new technology, new apps, and new innovations are being released that will help us make more informed decisions about items from healthcare to workforce planning. With this ever-increasing flow of information, how can HR leaders focus on making data-driven decisions that align with the strategic priorities of the rest of the C-Suite? How can we synchronize the necessity to create value with the stewardship of our leadership?
“We laid out a multi-year approach with incremental actions to move us from transactional to transformational”
I have always been interested in innovation and excellence. In my formative years, I gravitated towards a career in technology and engineering. I, along with my peers, spent much of our time discussing things like Cray Supercomputers and IBM Artificial Intelligence. I was more than a casual observer; I was a curious consumer… Click here to know more.
|
Creating a Data Driven HR
| 0
|
creating-a-data-driven-hr-1b0bf3a88ed3
|
2018-06-26
|
2018-06-26 12:23:00
|
https://medium.com/s/story/creating-a-data-driven-hr-1b0bf3a88ed3
| false
| 188
| null | null | null | null | null | null | null | null | null |
Big Data
|
big-data
|
Big Data
| 24,602
|
steve jacob
|
Latest Technology Trends and Expert reviews #Technology #trends #updates #expertadvice #CIO #CXO #CTO
|
9fb019e0e0c1
|
blogstevej327stuff
| 39
| 396
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-13
|
2018-09-13 07:48:01
|
2018-09-13
|
2018-09-13 07:49:38
| 4
| false
|
en
|
2018-09-13
|
2018-09-13 07:49:38
| 2
|
1b0cc0c63107
| 6.281132
| 0
| 0
| 0
|
In the Previous post, we have discussed about How Machine Learning on AWS: Part 1. If you still didn’t look at it, please read it before…
| 2
|
What all the AI tools are available in AWS Machine Learning: Part2
In the previous post, we discussed Machine Learning on AWS: Part 1. If you haven’t read it yet, please do so before starting here. In this article, we will look at what AI tools are available in AWS Machine Learning. AWS offers a range of AI services (for example, the image-analysis tool Amazon Rekognition; the text-to-speech tool Amazon Polly; and Amazon Lex, which helps developers build chatbots), which means developers don’t need a background in machine learning to use Amazon’s systems when building new applications. Consider it AI as a service: Amazon handles the machine learning, so developers can focus on building new, more powerful software.
Amazon Lex
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated, natural language, conversational bots (“chatbots”). Speech recognition and natural language understanding are some of the most challenging problems to solve in computer science, requiring sophisticated deep learning algorithms to be trained on massive amounts of data and infrastructure. Amazon Lex democratizes these deep learning technologies by putting the power of Amazon Alexa within reach of all developers. Harnessing these technologies, Amazon Lex enables you to define entirely new categories of products made possible through conversational interfaces.
As a fully managed service, Amazon Lex scales automatically, so you don’t need to worry about managing infrastructure. With Amazon Lex, you pay only for what you use. There are no upfront commitments or minimum fees.
Benefits
Easy to Use
Amazon Lex provides an easy-to-use console to guide you through the process of creating your own chatbot in minutes, building conversational interfaces into your applications. You supply just a few example phrases and Amazon Lex builds a complete natural language model through which your user can interact using voice and text, to ask questions, get answers, and complete sophisticated tasks.
Seamlessly Deploy and Scale
With Amazon Lex, you can build, test, and deploy your chatbots directly from the Amazon Lex console. Amazon Lex enables you to easily publish your voice or text chatbots to mobile devices, web apps, and chat services such as Facebook Messenger, Slack, Kik, and Twilio SMS. Once published, your Amazon Lex bot processes voice or text input in conversation with your end-users. Amazon Lex is a fully managed service so as your user engagement increases, you don’t need to worry about provisioning hardware and managing infrastructure to power your bot experience.
Built-in Integration with the AWS Platform
Amazon Lex provides built-in integration with AWS Lambda, AWS MobileHub and Amazon CloudWatch and you can easily integrate with many other services on the AWS platform including Amazon Cognito, and Amazon DynamoDB. You can take advantage of the power of the AWS platform for security, monitoring, user authentication, business logic, storage and mobile app development.
Cost Effective
With Amazon Lex, there are no upfront costs or minimum fees. You are only charged for the text or speech requests that are made. Amazon Lex’s pay-as-you-go pricing and low cost per request make it a cost-effective way to build conversational interfaces anywhere. With the Amazon Lex free tier, you can easily try Amazon Lex without any initial investment.
Use case
Use an Amazon Lex chatbot for natural conversations in your Amazon Connect contact center
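To make the above concrete, here is a minimal sketch of talking to a Lex bot through boto3’s `lex-runtime` client. The bot name, alias, user id, and input text are hypothetical placeholders; the live call is shown in comments because it requires AWS credentials.

```python
# Assemble the parameters for lex-runtime's post_text call.
# "OrderFlowers", "prod", and "user-42" are hypothetical placeholders.
def build_post_text_request(bot_name, bot_alias, user_id, text):
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "inputText": text,
    }

request = build_post_text_request(
    "OrderFlowers", "prod", "user-42", "I would like to order roses"
)

# With AWS credentials configured, the actual call would look like:
#   import boto3
#   client = boto3.client("lex-runtime")
#   response = client.post_text(**request)
#   print(response["message"])  # the bot's reply text
```

The same request shape works for each turn of the conversation; Lex tracks the dialogue state per `userId`, so the client only ever sends the latest utterance.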
Amazon Polly
Amazon Polly is able to translate text to speech in a natural way, even recognizing homophones and identifying the appropriate term to use based on the context of a sentence. The system includes 47 male and female voices and supports 24 languages, according to Amazon, and Polly can also use a variety of accents. Amazon claims Polly speaks in a fluid, conversational manner, just as a human would if they were to read the converted text. Developers pay based on the text they end up converting to speech but can archive any converted text for later use at no cost.
Benefits
NATURAL SOUNDING VOICES
Amazon Polly provides dozens of languages and a wide selection of natural-sounding male and female voices. Amazon Polly’s fluid pronunciation of text enables you to deliver high-quality voice output for a global audience.
STORE & REDISTRIBUTE SPEECH
Amazon Polly allows for unlimited replays of generated speech without any additional fees. You can create speech files in standard formats like MP3 and OGG, and serve them from the cloud or locally with apps or devices for offline playback.
REAL-TIME STREAMING
Delivering lifelike voices and conversational user experiences requires consistently fast response times. When you send text to Amazon Polly’s API, it returns the audio to your application as a stream so you can play the voices immediately.
CUSTOMIZE & CONTROL SPEECH OUTPUT
Modify Amazon Polly voices to best suit your needs: Amazon Polly supports lexicons and SSML tags, which enable you to control aspects of speech such as pronunciation, volume, pitch, and speech rate.
LOW COST
Amazon Polly’s pay-as-you-go pricing, low cost per character converted, and unlimited replays make it a cost-effective way to voice your applications.
Use Cases
CONTENT CREATION
Audio can be used as a complementary media to written and/or visual communication. By voicing your content, you can provide your audience with an alternative way to consume information and meet the needs of a larger pool of readers. Amazon Polly can generate speech in dozens of languages, making it easy to add speech to applications with a global audience, such as RSS feeds, websites, or videos.
Example: Convert an article to speech and download as MP3
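A hedged sketch of that article-to-MP3 use case with boto3 is shown below. The voice id, output format, and sample text are assumptions; the live call is commented out because it requires AWS credentials.

```python
# Parameters for polly.synthesize_speech; "Joanna" and "mp3" are
# illustrative choices, not the only options Polly supports.
def build_synthesize_request(text, voice_id="Joanna", output_format="mp3"):
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

request = build_synthesize_request("Hello from Amazon Polly.")

# With AWS credentials configured:
#   import boto3
#   polly = boto3.client("polly")
#   response = polly.synthesize_speech(**request)
#   with open("article.mp3", "wb") as f:
#       f.write(response["AudioStream"].read())  # save audio for offline playback
```

Saving the returned stream to a file matches the store-and-redistribute benefit described above: once generated, the MP3 can be replayed or served without further Polly requests.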
Amazon Rekognition
Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition on images and video that you provide. You can detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily, and requires no machine learning expertise to use. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Benefits
Simple integration
Amazon Rekognition makes it easy to add visual analysis features to your application with easy to use APIs that don’t require any machine learning expertise.
Continually learning
The service is continually trained on new data to expand its ability to recognize objects, scenes, and activities, improving its recognition accuracy over time.
Fully managed
Amazon Rekognition provides consistent response times regardless of the volume of requests you make. Your application latency remains consistent, even as your request volume increases to tens of millions of requests.
Batch & real-time analysis
You can run real-time analysis on video from Amazon Kinesis Video Streams, or analyze images as they are uploaded to Amazon S3. For large jobs, use AWS Batch to analyze thousands of images or videos.
Low cost
With Amazon Rekognition, you only pay for the number of images, or minutes of video, you analyze and the face data you store for facial recognition. There are no minimum fees or upfront commitments.
Security & identity
You can easily integrate face-based user verification into new or existing applications. This is a simple process that requires the use of just one API.
Use cases
Rekognition Video Use Cases
IMMEDIATE RESPONSE FOR PUBLIC SAFETY AND SECURITY
Amazon Rekognition Video allows you to create applications that help find missing persons in social media video content. By recognizing their faces against a database of missing persons that you provide, you can accurately flag matches and speed up a rescue operation.
Example: Finding Missing Persons on Social Media
Key Features
Object, scene, and activity detection
With Amazon Rekognition, you can identify thousands of objects (e.g. bike, telephone, building) and scenes (e.g. parking lot, beach, city). When analyzing video, you can also identify specific activities happening in the frame, such as “delivering a package” or “playing soccer”.
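Label detection on an image stored in S3 can be sketched as follows. The bucket name, object key, and thresholds are hypothetical; the live call is shown in comments since it requires AWS credentials.

```python
# Parameters for rekognition.detect_labels on an S3-hosted image.
# "my-photos-bucket" and "parking-lot.jpg" are hypothetical placeholders.
def build_detect_labels_request(bucket, key, max_labels=10, min_confidence=75.0):
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
        "MinConfidence": min_confidence,
    }

request = build_detect_labels_request("my-photos-bucket", "parking-lot.jpg")

# With AWS credentials configured:
#   import boto3
#   rekognition = boto3.client("rekognition")
#   response = rekognition.detect_labels(**request)
#   for label in response["Labels"]:
#       print(label["Name"], label["Confidence"])  # e.g. detected scenes/objects
```

`MinConfidence` filters out low-certainty labels, which is usually the first knob to tune when the results feel too noisy or too sparse.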
|
What all the AI tools are available in AWS Machine Learning: Part2
| 0
|
what-all-the-ai-tools-are-available-in-aws-machine-learning-part2-1b0cc0c63107
|
2018-09-13
|
2018-09-13 07:49:38
|
https://medium.com/s/story/what-all-the-ai-tools-are-available-in-aws-machine-learning-part2-1b0cc0c63107
| false
| 1,479
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Renjith S Raj
|
Django Developer
|
332411e9f85d
|
renjithsraj
| 1
| 0
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-14
|
2018-03-14 01:54:00
|
2018-03-14
|
2018-03-14 03:33:11
| 5
| false
|
en
|
2018-03-14
|
2018-03-14 03:33:11
| 4
|
1b0d4c0b7c27
| 2.610692
| 0
| 0
| 0
|
If you want to stay up to date with the latest news and information, Twitter is definitely the best place to be (with Reddit a close…
| 1
|
How to do Research with Tweetdeck
If you want to stay up to date with the latest news and information, Twitter is definitely the best place to be (with Reddit a close second). This article uses Tweetdeck to cover the latest news in the world of Artificial Intelligence (A.I.). Nowadays, websites such as Tweetdeck make it easy to look at multiple timelines to gather as much information as you can on a single interface.
The following is a screenshot image of my research on the latest news on artificial intelligence using Tweetdeck.
Source: Tweetdeck
Tweetdeck is such a powerful tool because it allows you to cover a lot of information all in one screen. When you see an interesting tweet, you can open it up just by clicking on it. It’s very user friendly and did not take me long at all to figure out its configurations.
Source: Tweetdeck
Now let’s dive deep into the most interesting news on artificial intelligence.
First off is this tweet that mentions how Google is helping the military target drones with artificial intelligence.
Source: Tweetdeck
The article mentions that Google has signed a contract with the Defense Department to help the U.S. military use artificial intelligence to target drone strikes, as well as to analyze surveillance video collected from thousands of drones. Horrible news if you ask me.
This next tweet offers more uplifting news. The artificial intelligence healthcare market may hit $6 Billion soon (was worth $600 million in 2014).
The article found here shows the good things that artificial intelligence can offer us. By using robots to help provide care, the healthcare industry can deliver quality treatment, make sure people get that treatment at the right time, and worry less about manpower and cost.
Lastly is this tweet regarding the automobile industry.
Source: Tweetdeck
The article talks about the progress of artificial intelligence and autonomous vehicles. Currently, some autonomous cars like the latest Tesla models already have this technology, but only Nevada, USA has allowed the use of driverless cars. Studies show that autonomous driving can decrease crashes by up to 40%.
These are only a few examples of the almost overwhelming amount of news in the world of A.I. In this day and age, it is almost inevitable that A.I. will be adopted worldwide. We must make sure that we learn as much as we can about A.I. in order to maximize its potential and to avoid any kind of disastrous situation that could very likely occur. By using Tweetdeck and other similar sites such as Hootsuite and Feedly, we can efficiently do research to grasp as much knowledge as we can about this technology. A little research can go a long way!
What are your thoughts on A.I? Comment down below.
|
How to do Research with Tweetdeck
| 0
|
how-to-do-research-with-tweetdeck-1b0d4c0b7c27
|
2018-03-14
|
2018-03-14 03:33:11
|
https://medium.com/s/story/how-to-do-research-with-tweetdeck-1b0d4c0b7c27
| false
| 471
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Dale Calabia
| null |
c2cce645ec4c
|
dalecalabia
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-23
|
2018-08-23 06:35:42
|
2018-08-23
|
2018-08-23 06:38:17
| 1
| false
|
en
|
2018-08-23
|
2018-08-23 06:38:17
| 0
|
1b0f7eb5477e
| 2.826415
| 0
| 0
| 0
|
School buses have an essential duty and form the lifeline for millions of children who use it for their daily commute. Automatically school…
| 3
|
Embracing Technology at the School Management Level
Image Courtesy: Beacon @ Greylabs Software Solutions
School buses perform an essential duty and form a lifeline for the millions of children who use them for their daily commute. School buses are naturally far more sensitive and vulnerable than regular transport, since almost all of their passengers are children.
The recent rise in crimes involving children, especially during their commute to and from school, has raised new concerns about school bus safety. Parents are understandably worried about the lack of two-way communication and accountability when a school bus is on the road.
To counter this, school managements have embraced technology and introduced school bus tracking apps which offer real-time location tracking. Let us look at how precisely these apps are reinventing the way our children travel in school buses:
Dual Advantages
Having a tracking app means that parents can relax at home, since they can check on their child anytime they want. It also means school authorities are less anxious about their fleet of school buses after they leave the campus. This ensures clarity and communication, and therefore less confusion, on both ends.
Real-time Tracking can be a boon
Mobile tracking apps help the authorities and parents track a school bus in real time. They can pinpoint the exact live location of the bus which helps them figure out exact arrival and departure times. Manual communication is hassle-free since any significant delays are communicated digitally. This also means that due to the constant dissemination of information, child safety is ensured.
It increases driver accountability
The recent rise in crimes has led to heightened concerns about whether a driver feels accountable or not. Schools have employed nannies to ensure more safety, but the sense of responsibility of nannies, conductors, and drivers is still doubted by both parents and authorities. Tracking apps ensure that they are always kept on their toes: the staff know that they are being tracked throughout, and surveillance ensures that they maintain complete discipline and decorum.
It reduces wastage of resources
Tracking apps can really provide schools with valuable data which helps them gauge time and fuel costs. It also helps them understand and figure out possible alternative routes which are less fuel intensive and less expensive. An alternative route might also have less traffic increasing overall school bus efficiency.
It really makes scheduling easier
Traffic jams often cause considerable delays in school bus travelling times. Naturally, the parents remain perturbed. Traditionally this resulted in tonnes of phone calls to the school staff. Now, tracking apps make the parents aware of any change in schedule. In case of delay in arrival times, schools can also take necessary steps by shifting essential announcements in the assembly etc.
Makes sure traffic rules are followed
Schools have found that with tracking apps, complaints about drivers driving rashly are a thing of the past. Tracking apps can monitor logistical metrics such as a driver’s average speed, which the authorities can observe in real time. Some apps also have an alarm that goes off if a speed threshold is crossed. This ensures that the children are safe at all times, since adherence to traffic rules is also checked continuously.
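The speed-threshold alarm described here boils down to a simple filter over incoming speed readings. This is an illustrative sketch; the threshold value and reading format are assumptions, not any particular app’s design.

```python
# Return the readings that exceed the threshold and should trigger an alarm.
# Readings are assumed to arrive as km/h floats from the vehicle's GPS unit.
def check_speed_alerts(speed_readings_kmh, threshold_kmh=60.0):
    return [s for s in speed_readings_kmh if s > threshold_kmh]

alerts = check_speed_alerts([45.0, 58.5, 72.3, 61.0], threshold_kmh=60.0)
# alerts holds the two over-limit readings, which would set off the alarm
```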
Child safety is of tremendous importance in today’s world and thanks to more and more schools adopting the use of school bus tracking apps, parents can breathe a sigh of relief.
GreyLabs’ Beacon App is specifically designed to ensure a seamless real-time tracking experience that makes drivers, authorities and parents communicate and coordinate hassle free.
Now children can go to school and come back home safe without their parents panicking about the school bus getting delayed. As they say, God bless technology! The advantages of having school bus tracking apps remarkably overpower the costs involved in developing such an app.
It is vital that we leverage these technological advances to ensure the safety of school students. While most schools today are adopting such measures, many are still unsure about the utility of such apps.
Nevertheless, a mutual understanding between the parents and school authorities is vital to lay a ground for the urgent need for such beacon apps.
|
Embracing Technology at the School Management Level
| 0
|
embracing-technology-at-the-school-management-level-1b0f7eb5477e
|
2018-08-23
|
2018-08-23 06:38:17
|
https://medium.com/s/story/embracing-technology-at-the-school-management-level-1b0f7eb5477e
| false
| 696
| null | null | null | null | null | null | null | null | null |
Transportation
|
transportation
|
Transportation
| 14,888
|
Beacon @ Greylabs Software Solutions
|
Your Next-Gen Transport Operations & Security Solution
|
a3f94bb8c12c
|
greylabsseo
| 2
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-04
|
2018-02-04 20:10:12
|
2018-02-04
|
2018-02-04 20:12:02
| 1
| false
|
en
|
2018-02-04
|
2018-02-04 20:12:43
| 1
|
1b0ff1e7759c
| 0.411321
| 0
| 0
| 0
|
PwC estimates that artificial intelligence could add $15.7 trillion to global GDP by 2030. That’s a gargantuan opportunity. To identify…
| 3
|
2018 #AI - 100 most promising startups
PwC estimates that artificial intelligence could add $15.7 trillion to global GDP by 2030. That’s a gargantuan opportunity. To identify which private companies are set to make the most of it, research firm CB Insights recently released its 2018 “A.I. 100,” a list of the most promising A.I. startups globally.
|
2018 #AI - 100 most promising startups
| 0
|
2018-ai-100-most-promising-startups-1b0ff1e7759c
|
2018-06-11
|
2018-06-11 18:42:02
|
https://medium.com/s/story/2018-ai-100-most-promising-startups-1b0ff1e7759c
| false
| 56
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Justin Flitter
|
Founder of @NewZealandAI — Host of #TheAIshow — Tech events producer @AIDAYNZ & @BlockworksNZ | #Juggler
|
c75241f0f80c
|
justinflitter
| 2,887
| 1,588
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-06
|
2018-05-06 20:05:33
|
2018-05-06
|
2018-05-06 22:57:14
| 4
| false
|
en
|
2018-05-07
|
2018-05-07 21:23:27
| 6
|
1b11472f835d
| 2.884906
| 4
| 0
| 0
|
Heir is building the next generation inheritance & estate planning solution powered by the Stellar blockchain , artificial intelligence…
| 5
|
Heir.io — Bridging Real-World Assets with the Blockchain Using Stellar for Inheritance & Estate Planning
Heir is building the next generation inheritance & estate planning solution powered by the Stellar blockchain, artificial intelligence, and deep machine learning technology. Heir endeavors to become the global standard in the protection of digital and traditional real-world assets (e.g. real estate, property, and other tangibles) for next of kin and loved ones. This article explores how Heir intends to bridge real-world assets with the Stellar blockchain platform.
Protecting Real-World Assets For Next of Kin
When it comes to the traditional methods of protecting tangible property for inheritance purposes, the options are fairly limited. For the vast majority, the general route taken is to create an inheritance will (which comes with various pitfalls), or for the wealthy and well-informed spinning up an estate trust (provides better protection, but at a substantial upfront and on-going cost).
What’s an estate trust? Estate trusts are asset protection vehicles generally only available to the wealthy due to their expensive upstart and maintenance costs (which can amount to tens of thousands of dollars in upfront and on-going costs). Having an estate trust provides significant benefits over having a traditional inheritance will, including: tax benefits, creditor protection, and eliminates beneficiaries from having to deal with long, expensive and drawn out court processes, prior to obtaining access to their inheritance. Keep reading, and we will explain how Heir will make estate trusts accessible to all.
How Will Heir Bridge Real-World Assets with the Stellar Blockchain Platform?
The magic lies within Heir’s upcoming asset tokenization engine. By taking a real-world tangible asset (like real estate), and running it through Heir’s tokenization engine, Heir will bridge the real-world with the blockchain. Here is a high-level overview of how Heir will accomplish this:
Users will have the ability to upload real-world property titles and deeds within the Heir ecosystem via Heir’s intuitive portal
Heir’s asset tokenization engine ingests the details of the property deed to tokenize the real-world asset
Tokens tied to the real-world asset are created, which serve the purpose of mirroring real-world ownership of the property/asset
Token state lives within the Stellar platform for immutability, decentralization, and protection
Tokens are automatically placed and live within a user’s HeirWallet (shown below):
HeirWallet by heir.io — safekeeping of digital and real-world assets
Users will then be provided with the ability to seamlessly assign their real-world property deed/title to mirror the state of its tokenized counterpart (details will be provided in a future article).
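The tokenization flow described above can be sketched in a few lines of Python. This is purely an illustrative model of the steps, not Heir’s actual API: all names (DeedRecord, HeirWallet, mint_token) are hypothetical, and a real implementation would issue a custom asset on Stellar rather than append to an in-memory list.

```python
from dataclasses import dataclass, field

@dataclass
class DeedRecord:
    # A real-world property deed uploaded through the portal (hypothetical shape)
    deed_id: str
    owner: str
    description: str

@dataclass
class HeirWallet:
    owner: str
    tokens: list = field(default_factory=list)

def mint_token(deed: DeedRecord) -> dict:
    # One token per deed; on Stellar this would be a custom asset whose
    # code/issuer pair identifies the underlying property.
    return {"asset_code": f"DEED-{deed.deed_id}", "owner": deed.owner}

def tokenize(deed: DeedRecord, wallet: HeirWallet) -> None:
    token = mint_token(deed)
    wallet.tokens.append(token)  # token state would actually live on Stellar

wallet = HeirWallet(owner="alice")
tokenize(DeedRecord("42", "alice", "123 Main St"), wallet)
print(wallet.tokens[0]["asset_code"])  # DEED-42
```

The point of the sketch is only the shape of the pipeline: ingest a deed, mint a mirroring token, land it in the owner’s wallet.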
Sounds Amazing! But What Will Users Do with Their Tokenized Property?
Post-tokenization of real-world assets, users will have the ability to assign their tokenized real-world assets to their Heir-enabled estate trust (Heir’s flagship product, referred to as HeirTrust). By doing so, users can specify to whom, when, and how their tokenized assets should be distributed:
HeirTrust by heir.io — digital estate trusts and estate planning
HeirTrust will allow users to reap the exact same real-world benefits that a traditionally prepared estate trust would, including inheritance tax benefits, creditor protection, and ease for beneficiaries. Heir will do what’s generally done within the confines of an estate planning firm, but better, faster, simpler, and much more affordably.
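To make the “to whom, when, and how” idea concrete, here is a minimal hypothetical sketch of a distribution rule. The rule shape and names are invented for this example; a production system could enforce such rules on-chain (for instance with time-bounded Stellar transactions) rather than in application code.

```python
from dataclasses import dataclass
import datetime

@dataclass
class DistributionRule:
    beneficiary: str
    asset_code: str
    share: float               # fraction of the tokenized asset to release
    not_before: datetime.date  # earliest date the distribution may occur

def distributable(rule: DistributionRule, today: datetime.date) -> float:
    """Return the share releasable to the beneficiary on `today`."""
    return rule.share if today >= rule.not_before else 0.0

rule = DistributionRule("bob", "DEED-42", 0.5, datetime.date(2030, 1, 1))
print(distributable(rule, datetime.date(2031, 6, 1)))  # 0.5
print(distributable(rule, datetime.date(2029, 6, 1)))  # 0.0
```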
Interested in Learning More About Heir?
Visit our Website: www.heir.io
Read our White Paper: link to download
Join the Heir Twitter and Telegram Community
For the latest details regarding Heir, follow us on twitter and join the conversation on Heir’s official telegram channel.
|
Heir.io — Bridging Real-World Assets with the Blockchain Using Stellar for Inheritance & Estate…
| 102
|
heir-io-bridging-real-world-assets-with-the-blockchain-using-stellar-for-inheritance-estate-1b11472f835d
|
2018-06-05
|
2018-06-05 12:09:24
|
https://medium.com/s/story/heir-io-bridging-real-world-assets-with-the-blockchain-using-stellar-for-inheritance-estate-1b11472f835d
| false
| 579
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Heir
|
Inheritance, digital wills, and estate planning powered by blockchain, AI, and machine learning. Learn more at: www.heir.io
|
a4515b206be3
|
heir
| 40
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-15
|
2018-02-15 12:17:53
|
2018-02-15
|
2018-02-15 12:22:29
| 0
| true
|
en
|
2018-02-15
|
2018-02-15 12:22:29
| 0
|
1b1168b88146
| 0.758491
| 1
| 0
| 0
|
Artificial intelligence is the use of computing to perform tasks that require human intelligence.
| 3
|
Benefits of AI for consumers
Artificial intelligence is the use of computing to perform tasks that require human intelligence.
AI has applications in visual perception, speech recognition and language translation. It can be used to recognize pictures, convert voice into text or plan a route.
It will bring more automation on everyday routines and people will have more time for creativity and critical thinking.
AI relies on data for its features and accuracy. Many organizations rely on proprietary data sets, and this limits the knowledge AI systems can develop. Institutions are recognizing the advantages of sharing data for interoperability.
This leads to a more fluid user experience for the end user across applications and devices. AI will also help connectivity networks with accurate pattern analysis.
Business relationships will increasingly be managed without human interaction by integrating with consumers’ everyday lives.
Here are some examples of AI products:
Slice
Slice analyzes consumers’ inboxes to provide real-time updates for package deliveries so they don’t have to lose time keeping track of online packages.
Hutoma
Hutoma provides a centralized marketplace and network for AI chatbots.
Bridge.ai
Bridge.ai analyzes audio for smart devices to understand their environment and respond to patterns in users’ lives.
|
Benefits of AI for consumers
| 1
|
benefits-of-ai-for-consumers-1b1168b88146
|
2018-06-09
|
2018-06-09 16:18:59
|
https://medium.com/s/story/benefits-of-ai-for-consumers-1b1168b88146
| false
| 201
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Tomás Antunes
|
Developer writing about creativity, technology and other stories. http://tomasantunes.com
|
9f4d3b08e5af
|
tomasantunes
| 54
| 26
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-24
|
2018-02-24 01:16:59
|
2018-03-11
|
2018-03-11 23:47:38
| 1
| false
|
en
|
2018-08-02
|
2018-08-02 05:41:41
| 0
|
1b12446067eb
| 0.218868
| 0
| 0
| 0
| null | 5
|
Thought Leaders On Omniscient AI
|
Thought Leaders On Omniscient AI
| 0
|
thought-leaders-on-omniscient-ai-1b12446067eb
|
2018-08-02
|
2018-08-02 05:41:41
|
https://medium.com/s/story/thought-leaders-on-omniscient-ai-1b12446067eb
| false
| 5
| null | null | null | null | null | null | null | null | null |
Sam Harris
|
sam-harris
|
Sam Harris
| 177
|
Matthew Bedwell
|
20 something. Digital Marketer. Bike Polo Player. Lover. Fighter. Rent Payer. Gamer. Hat Wearer. Surfer. Plant Grower. Runner. Proud Circle Picture Owner->
|
2a2a63403b3d
|
matthew.bedwell
| 3
| 30
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-14
|
2018-03-14 07:46:51
|
2018-03-14
|
2018-03-14 09:39:26
| 2
| false
|
en
|
2018-03-14
|
2018-03-14 09:39:26
| 0
|
1b12e9607706
| 2.255031
| 1
| 0
| 0
|
The artificial Intelligence community is in the news now-a-days, mostly for the right reasons. The resurgence of research in this area has…
| 1
|
Is Human Intelligence somehow special?
The artificial intelligence community is in the news nowadays, mostly for the right reasons. The resurgence of research in this area has been explosive, and that is not an exaggeration.
A proxy for the uptick in research is the number of citations some pioneers of the field have gathered over the years.
Citation count for Geoffrey Hinton
Citations for Yann LeCun
With such rapid progress, there are obvious concerns over AI safety and our future as a species with these technologies, which is a valid and pertinent area of concern. There is some fabulous work going on there, and you should check out the research around the alignment problem.
However, there is another faction that is dismissive of intelligence coming out of “mere” machines.
The argument goes thus: modern machine learning and AI is just… something, and will never be as good as human intelligence. Let us try to understand this argument, assuming that it *IS* a valid argument, not an argument from incredulity.
Arguments from incredulity can take the form:
I cannot imagine how P could be true; therefore P must be false.
I cannot imagine how P could be false; therefore P must be true.
Arguments from incredulity happen when people make their inability to comprehend or make sense of a concept the content of their argument.
This is just a bunch of matrix multiplications, how can it ever achieve human intelligence level.
It can, and it has, in several areas. The image recognition capabilities of modern AI far outstrip human performance. So does the chess playing ability. So do many other areas. When you really get into it, your actual neurons are also a bunch of electrical impulses calculating some mathematical function, phenomenologically speaking.
Machines and AI can never do X, where X is your favorite thing that AI can’t do.
Agreed that it can’t do X *yet*. Maybe it can, maybe it can’t. There is no way to know unless you try. Unless you believe that robots made of meat are somehow fundamentally and irreconcilably different from robots made of silicon and steel, this argument seems untenable. Also, funnily enough, if you define AI as something computers haven’t figured out how to do, you are of course correct: computers can’t do what they can’t do, because once they can, it’s not AI anymore.
Is intelligence substrate-dependent? Is there something special in carbon atoms that silicon atoms cannot replicate, even in theory? The jury is still out, and I will be surprised if the narcissistic viewpoint that human intelligence is somehow special turns out to be true.
The reductionist viewpoint (just a matrix multiplication, just an electrical impulse, just glorified curve fitting, just… you get the point) assumes that things are more complex on the larger scale than the smaller one, whereas the physical reality is often the opposite of that. Understanding something enough to make use of it is often simpler than understanding every last detail.
|
Is Human Intelligence somehow special?
| 5
|
is-human-intelligence-somehow-special-1b12e9607706
|
2018-03-14
|
2018-03-14 09:47:06
|
https://medium.com/s/story/is-human-intelligence-somehow-special-1b12e9607706
| false
| 496
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Vaibhav Garg
| null |
7541115c1c1
|
vaibhavgarg1982
| 5
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-03
|
2018-03-03 21:08:17
|
2018-03-04
|
2018-03-04 08:21:44
| 8
| false
|
en
|
2018-03-17
|
2018-03-17 04:36:17
| 9
|
1b13510b4f6d
| 3.72956
| 11
| 0
| 0
|
This is a technique used at Instacart, Stitch Fix, and Pinterest.
| 5
|
Applied ML on Structured Data: Deep Learning vs Tree Methods (Part 1)
This is a technique used at Instacart, Stitch Fix, and Pinterest.
In the world of business, a lot of the data lives in databases; this is generally known as structured data. In research, however, most work deals with unstructured data that usually doesn’t go into databases: images (computer vision) and text (NLP). According to Jeremy Howard, an ML researcher and lecturer at fast.ai, using deep learning on structured data is still a relatively new concept. In the Rossmann Store Sales competition, where over 3,300 teams competed for $35,000 in prize money, the 3rd-place winner used a deep learning method on structured data and published a paper on it. The Rossmann dataset used a mix of categorical variables and continuous variables to predict future store sales.
Categorical Variables are like “Holiday?”, “Is there a Promotion?”, or “Day of Week”.
Continuous Variables are like “Distance from Competing Store”, “Temperature Outside”, or basically things that are numbers that could have decimals in them.
Usually regression methods require a lot of feature engineering. For example, during holidays like Black Friday or Christmas, you know more things are going to be sold. Thus, you would put more weight on the “Holiday?” feature because you know it will dramatically affect store sales.
One advantage of deep learning is that there is much less feature engineering; you let the model learn its own interpretations of the variables. In a business setting, this is advantageous because there’s much less maintenance.
So how do you actually do deep learning on Structured Data?
Neural networks usually take some inputs, multiply them by some weights, and then backprop to optimize the weights. But how do you convert a categorical variable like “Tuesday” or “There is a promotion” into a number?
Enter Entity Embeddings!
Entity Embeddings are a distributed representation of a variable.
Examples:
1) Google’s word2vec, which represents ~3 million words and phrases. Each word is attached to a 300-dimensional vector of numbers that represents it
Source: https://adriancolyer.files.wordpress.com/2016/04/word2vec-distributed-representation.png?w=600
2) Recommender Systems: for example, at Netflix or Amazon, they might create a huge matrix of ratings of users vs movies/products.
Source: http://francescopochetti.com/wp-content/uploads/2017/03/example.png
Notice how there are “concepts” which are also called latent factors. These concepts can be learned by the neural network for a very rich understanding of the variable. For example, the word “King” is 0.99 associated with “Royalty”, 0.99 associated with “Masculinity”…..
For example, the movie “Matrix” is 4 associated with SciFi, 0 associated with Romance. This way you can recommend “The Matrix” to users who love “SciFi”.
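The latent-factor idea above can be shown in a toy calculation (the numbers are invented for this example, not Netflix’s actual factors): a user vector and an item vector live in the same “concept” space, and their dot product predicts affinity.

```python
import numpy as np

# Shared latent concepts: [SciFi, Romance]
matrix_movie = np.array([4.0, 0.0])  # "The Matrix": strongly SciFi, no Romance
scifi_fan    = np.array([1.0, 0.1])  # a user who loves SciFi, mildly likes Romance

# Dot product over the shared concepts = predicted affinity;
# recommend the item when this score is high.
score = scifi_fan @ matrix_movie
print(score)  # 4.0
```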
Ok, now how do you construct these Embedding Matrices for the problem of Store Predictions?
In the Rossmann dataset, the categorical variables are things such as Store_ID, Day of Week, Month…:
For each categorical variable we have to create an embedding matrix.
Example: Day of Week
1) Take the number of options, num_options, for Day of Week; the number of rows in the embedding matrix is num_options + 1. You add one for the unknown! In this case the number of rows is 8
2) Pick the number of latent factors. A heuristic that Jeremy Howard uses is min(num_options/2, 50). In this case the number of columns is 4
Thus, the corresponding embedding sizes are:
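That sizing heuristic fits in a few lines of Python. The sketch below interprets min(num_options/2, 50) with rounding up, which reproduces the 8 × 4 Day of Week example from the two steps above:

```python
import math

def embedding_size(num_options: int, max_dim: int = 50) -> tuple:
    # Rows: one per category value, plus one for the "unknown" category.
    rows = num_options + 1
    # Columns (latent factors): Jeremy Howard's heuristic, capped at 50.
    cols = min(math.ceil(num_options / 2), max_dim)
    return rows, cols

print(embedding_size(7))     # Day of Week -> (8, 4)
print(embedding_size(1115))  # a high-cardinality column like Store_ID -> (1116, 50)
```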
I’m using the fast.ai library, built on top of pytorch, which has made this STUPIDLY easy. The library does a bunch of other things, like finding a good learning rate and cyclical learning-rate schedules. You don’t need to know a lot about ML to get started!! Keras, fast.ai, and many more frameworks are being built that are making AI very accessible. I’m not going to show the preprocessing, but these are the 3 lines of code to train the model.
In Part 2, I am planning to apply this technique to House Prices: Advanced Regression Techniques. I think this dataset is very interesting because all the solutions use creative feature engineering and advanced regression techniques like Random Forest and Gradient Boosting. I want to see how this DL technique compares to those results. Stay tuned!
Thanks for taking your time to read my article!
|
Applied ML on Structured Data: Deep Learning vs Tree Methods (Part 1)
| 82
|
applied-ml-on-structured-data-deep-learning-vs-regression-methods-part-1-1b13510b4f6d
|
2018-06-10
|
2018-06-10 05:29:44
|
https://medium.com/s/story/applied-ml-on-structured-data-deep-learning-vs-regression-methods-part-1-1b13510b4f6d
| false
| 688
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Mike Liao
| null |
d525012b524c
|
mikeliao
| 263
| 149
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-21
|
2018-09-21 19:00:46
|
2018-09-21
|
2018-09-21 19:02:27
| 0
| false
|
en
|
2018-09-21
|
2018-09-21 19:11:39
| 1
|
1b13699c4a90
| 0.128302
| 0
| 0
| 0
|
Today I am trying to understand how the PPO2 code works. This is what OpenAI implemented in designing the bots for DOTA.
| 3
|
#LML
Today I am trying to understand how the PPO2 code works. This is what OpenAI implemented in designing the bots for DOTA.
openai/baselines
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms - openai/baselinesgithub.com
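For context, the heart of PPO2 is the clipped surrogate objective from the PPO paper. Here is a minimal NumPy sketch of that objective, my own illustration rather than OpenAI’s baselines code:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, clip=0.2):
    """Clipped surrogate objective from the PPO paper.

    ratio     = pi_new(a|s) / pi_old(a|s) per sampled action
    advantage = estimated advantage A(s, a) per sampled action
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip, 1.0 + clip) * advantage
    # Taking the elementwise minimum removes the incentive to push the
    # policy ratio outside the [1-clip, 1+clip] band.
    return np.minimum(unclipped, clipped).mean()

# A ratio far above 1+clip gets its incentive capped at (1+clip)*advantage:
print(ppo_clip_objective(np.array([2.0]), np.array([1.0])))  # 1.2
```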
|
#LML
| 0
|
lml-1b13699c4a90
|
2018-09-21
|
2018-09-21 19:11:39
|
https://medium.com/s/story/lml-1b13699c4a90
| false
| 34
| null | null | null | null | null | null | null | null | null |
Reinforcement Learning
|
reinforcement-learning
|
Reinforcement Learning
| 883
|
Kai Xi
| null |
7309572bcc13
|
kai01011000i
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
cublasSgemmBatched(cublasHandle,
                   CUBLAS_OP_T,    // raw γ needs to be transposed as in Fig. 4
                   CUBLAS_OP_N,    // the 1s matrix doesn't need to be transposed
                   channelCount,   // rows of raw γ after transposition
                   1,              // columns of the 1s matrix
                   imgSize,        // columns of raw γ after transposition
                   &alpha,         // alpha = 1
                   rawGamma,       // pointers to raw γ (Fig. 2)
                   imgSize,        // rows of raw γ before transposition
                   onesMatrix,     // pointers to the 1s matrix
                   imgSize,        // rows of the 1s matrix
                   &beta,          // beta = 0
                   gradientGamma,  // result: the gradient of γ
                   channelCount,   // rows of the result
                   channelCount);  // batch size
int oneBatchSize = imageSize * channelCount;
for (int j = 0; j < channelCount; j++)
{
    cublasSgemv(cublasHandle,
                CUBLAS_OP_T,                // raw γ needs to be transposed as in Fig. 4
                imageSize,                  // rows of raw γ before transposition
                channelCount,               // columns of raw γ before transposition
                &alpha,                     // alpha = 1
                rawGamma + j*oneBatchSize,  // select the j-th slice of raw γ (Fig. 4)
                imageSize,                  // leading dimension of raw γ
                onesVector,                 // pointer to the 1s vector
                1,                          // stride of the 1s vector
                &beta,                      // beta = 0
                gradientGamma + j*channelCount,  // one slice of the final γ gradient
                1);                         // stride of the result
}
| 3
| null |
2017-11-06
|
2017-11-06 23:04:41
|
2017-11-07
|
2017-11-07 02:42:52
| 5
| false
|
en
|
2017-11-07
|
2017-11-07 19:38:54
| 1
|
1b13e571a6f7
| 4.874843
| 0
| 0
| 0
|
In my last post, I talked little about the Reduce Operation (actually sum operation in my case). I will compare different strategies of…
| 2
|
Update Reduce Operation
In my last post, I talked a little about the Reduce operation (actually a sum operation in my case). Here I will compare different strategies for the Reduce operation.
First, recall the raw result after my cuda kernel finished, as shown in the following picture:
Fig 1. Reduce for γ
Of course there are lots of libraries (e.g. cudpp, cub, etc.) that can do Reduce operations (e.g. sum, minus, multiply, min, max, etc.). However, few of them can operate in batches. Also, for each of these libraries, we need to manage several abstract pointers such as handles, plans, configs, and so on. When programming with multiple GPUs, managing all of this becomes very tedious. Not to mention that these libraries are not necessarily faster than cublas. Therefore, since cublas is indispensable in DL anyway, why not just use cublas for this Reduce op?
1. Slowest Algo. — Batch Matrix-Matrix Multiplication
Actually, all the algorithms I will compare here are based on matrix-matrix or matrix-vector multiplication.
Since all data in cuda is stored in a 1-D layout, it’s very convenient to reorganize it depending on the situation. All you need is a pointer to memory and a way to interpret the data it points to. For example, from the memory point of view, the raw γ in Fig 1 is actually stored this way:
Fig 2. Memory layout of raw γ
Each γ_i,j here is a vector. We take one row out and reorganize it as a matrix:
Fig 3. 2-D layout of one row of raw γ
In cublas’s eyes, all data is column-major, so one row of raw γ actually looks like the right-hand layout in Fig 3. But we want to multiply the left-hand matrix in Fig 3 by a 1s vector to get one row of the final γ:
Fig 4. Computation of one row of final γ
Fortunately, cublas can transpose a matrix before multiplication, so we don’t need to worry about that. On the other hand, cublas doesn’t have an API for batch matrix-vector multiplication; only batch matrix-matrix multiplication is supported. So we have to treat the 1s vector as a one-column matrix. The code is as follows:
The above call repeatedly performs the matrix-matrix multiplication in Fig. 4, channelCount times.
2. For-loop Algo. — Don’t use batch
At the beginning, I thought the batch API must be efficient, but the speed was not what I expected. In my neural network, I have in total 6 convolution layers, 6 GDN layers, and 6 down/up-sampling layers. Using the 1st algo., one iteration takes more than 2 seconds for 4 images. I was too impatient not to try other methods, so I decided to use a for loop instead of the batch API.
It’s actually similar to the first algorithm, but replaces the batch call with a for loop.
The advantage of this method is that we no longer need matrix-matrix multiplication, only matrix-vector. If you’re interested in cuda matrix-matrix multiplication, you can look up the algorithm in Nvidia’s official documentation. I can only say that the matrix-matrix multiplication algorithm is less efficient here than matrix-vector multiplication: matrix-matrix multiplication is sophisticated and involves a lot of blocking and shared-memory machinery, whereas matrix-vector is relatively simple. Although our task is matrix-vector multiplication, as mentioned in the last section, cublas doesn’t have a batch matrix-vector API, which is why the first method had to convert the problem to matrix-matrix form.
The experiment shows that for-loop matrix-vector multiplication is indeed faster than batch matrix-matrix multiplication. For the same input size and the same net configuration, the computation time is now 800 ms, which is much faster.
3. Super Fast — Just One Matrix-Vector Multiplication
Smart as you are, you must have noticed this method already if you understood the 2nd one. Stare at the raw γ matrix for a while, then close your eyes, and the idea will come to mind.
Like I said, the pointer is one of the greatest and loveliest features of C++ because of its flexibility. Without any extra manipulation of the data, all you need is to change the angle from which you look at the memory and start imagining.
Fig 5. One matrix-vector multiplication
Therefore, all you need is just one matrix-vector multiplication. I think the improvement is self-explanatory: the computation time is now 200 ms. Maybe you don’t know what 200 ms means here. Once, I measured the time spent on all convolution layers, because I wanted to locate the bottleneck of my net; it was about 100 ms in total. That means my GDN implementation is as fast as the convolution layers implemented by Nvidia’s extremely optimized cudnn. That’s a huge jump.
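The same change-of-view trick can be illustrated in NumPy (a conceptual analogue of the cuda version, with toy sizes): reshaping costs nothing because it only reinterprets the buffer, and one matrix-vector product replaces the per-slice loop.

```python
import numpy as np

batch, channels, img_size = 4, 3, 5
raw = np.arange(batch * channels * img_size, dtype=float)  # one contiguous buffer

# Method 2 analogue: loop, one matrix-vector product per batch item
loop_result = np.stack([
    raw.reshape(batch, channels, img_size)[b] @ np.ones(img_size)
    for b in range(batch)
])

# Method 3 analogue: reinterpret the whole buffer as one tall matrix
# (free: just a view) and reduce everything with a single matvec
one_shot = raw.reshape(batch * channels, img_size) @ np.ones(img_size)
one_shot = one_shot.reshape(batch, channels)

print(np.allclose(loop_result, one_shot))  # True
```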
Again, I know there must still be room to improve, but I haven’t figured out how yet. Now I can train on 1,000 images in just one day, which is much faster than I expected.
Behind all these improvements there is no magic or fancy theory. However, implementing all these different algorithms yielded a lot of non-intuitive experience. So when analyzing and optimizing your own cuda kernels, don’t just rely on your past experience with CPU programming; running these experiments is essential. Often a simple change makes the difference.
|
Update Reduce Operation
| 0
|
update-reduce-operation-1b13e571a6f7
|
2017-11-07
|
2017-11-07 19:38:55
|
https://medium.com/s/story/update-reduce-operation-1b13e571a6f7
| false
| 1,071
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Kai Zhang
| null |
89f944ecc516
|
zhangkaial
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-20
|
2017-09-20 19:26:19
|
2017-09-20
|
2017-09-20 20:11:30
| 1
| false
|
en
|
2017-09-20
|
2017-09-20 20:11:30
| 5
|
1b15b8819562
| 2.875472
| 0
| 0
| 0
|
Understanding chatbots
| 5
|
Getting Up To Speed On Chatbots
Understanding chatbots
It’s no wonder you’re hearing a lot about chatbots these days. With 2.5 billion people worldwide using at least one messaging platform, and more than 1.6 billion of them using Facebook Messenger, social messaging is driving an exciting opportunity for retailers. As with the advent of mobile, and the web before that, AI chatbots for retail offer a fresh way for brands to interact with their customers. Understanding chatbots and where they fit will help you get up to speed quickly on social selling.
Basically, a chatbot is simply an automated way to guide the conversation with your online or mobile customer, whether she’s on Facebook, or Skype, or even your own website. Flexible, scalable, and with a unique ability to meet your customers wherever they are, chatbots are well suited to handle the ever-increasing speed of ecommerce.
While some brands are still thinking of chatbots as a nice-to-have, many others are already leveraging bots as a springboard to the future of retail experience. (Hint: it’s conversational!) We’re seeing chatbots across the spectrum: Mass-market and luxury brands. Finance, healthcare, retail, and travel verticals. And audiences from boomers to millennials to Gen Z.
So what can chatbots do, anyway?
The right chatbot for the right task
”Where are your Los Angeles stores?” “When will my order arrive?”
A chatbot can easily address common questions directly and efficiently — store locations, shipping and return policies, FAQs — meaning your customer can spend her time shopping your website instead of searching for answers, and your customer service teams can focus on more complex inquiries. Often quick to deploy, these basic chatbots for retail get right down to business.
“Help! I have a gift exchange at work!” “Where is the dress I saw in that ad?”
Let’s face it, sometimes it can be challenging to find things online, even if you know what you’re looking for. Included as part of your merchandising plans or marketing initiatives, chatbots add another touchpoint to inspire your customer while reinforcing your messaging and your brand. Understanding chatbots means seeing how they can guide your customers as they shop, recommending specific product assortments or promoting special-offer campaigns. Holiday gift finder, anyone?
“Do you have something like this in blue?” “I need to find some boots.”
Integrate a chatbot with your product catalog, and it becomes a true virtual associate; greeting your customer, asking him how it can help, and presenting only the best choices. These bots need to ‘understand’ your customer’s needs, intent, and shopping pattern. Does your bot know what’s most popular in a category? Can your bot ‘remember’ your customer, maintaining its state as she returns? Implemented using your existing product search capability, or maybe even enhanced through AI, this one puts the ‘chat’ in ‘chatbot.’
Chatbots must know what they don’t know
Whichever chatbot is the right one for your brand, we believe the best bot is the one that knows what it can do, and what it can’t, offering a seamless experience to your customer. Sometimes, that means letting your customer input text instead of selecting a button. And sometimes, that means that your bot needs to happily transition your customer to a human.
Making it work
Adding intelligent chatbots to your customer engagement channels can strengthen your reach, streamline your processes, and help you make your numbers along the way. Sounds like a win, right? Like everything else, details matter. Understanding chatbots, involving the right internal teams from design to deployment, selecting the best methods for customer interaction and giving the chatbot access to the backend information and systems it needs to do its job are critical for success.
While chatbots may seem like just another fun new way to engage with your customers, their potential to drive innovation in everything from voice shopping to data and AI is huge. We’ve only scratched the surface. Over the next few weeks, we’ll take a closer look — meeting some chatbots, learning how they can ignite your Holiday (yes, there’s still time), and exploring what comes next. In the meantime, whether you’re using chatbots already or you’re thinking about how to start, we’d love to, er, chat with you in real life!
|
Getting Up To Speed On Chatbots
| 0
|
getting-up-to-speed-on-chatbots-1b15b8819562
|
2018-05-19
|
2018-05-19 15:04:16
|
https://medium.com/s/story/getting-up-to-speed-on-chatbots-1b15b8819562
| false
| 709
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
1080bots
|
1080bots builds scale ready bots for omnichannel retail and business process automation.
|
ae37ab3dd835
|
1080bots
| 21
| 43
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-03
|
2018-06-03 08:28:32
|
2018-06-03
|
2018-06-03 08:39:28
| 0
| false
|
vi
|
2018-06-03
|
2018-06-03 08:39:28
| 4
|
1b17edf74d6c
| 6.441509
| 0
| 0
| 0
|
A history of computers
| 5
|
Lịch sử về máy tính hiện đại
Lịch sử về máy tính
kể từ ra dòng máy loại công ty sửa chữa máy vi tính điện tử số trước hết, sự phát sinh của các máy là thể được phân thành 5 thế hệ. bên ngoài chậm tiến độ, ở quận thế hệ thứ nhất, Giáo sư Mauchly và học sinh Eckert tại Đại học Pennsylvania đã chương trình bên ngoài khoảng năm 1943 và cho ra mắt tích hợp năm 1946 một máy tính khổng lồ với thể tích dài 20 mét, xử lý cao 2,8 mét và rộng vài mét, thay đổi khả năng quá trình 5.000 phép toán cùng hoàn cảnh 1 giây. Sau vài năm, công ty sửa chữa máy vi tính đã được phổ biến tại như trường hợp đại học, như cơ quan chính phủ, nhà băng và các trung tâm bảo hiểm.Được biết tới như 1 máy tính công tắt tử trước tiên dành cho mục đích chung. ENIAC được cách mạng trước nhất bên ngoài Chiến tranh thế giới lần thứ hai yếu tố đã thay đổi hoàn tất cho đến 1 năm sau lúc chiến tranh sửa máy tính tại nhà quận 1
Lí do bạn vẫn đang nơi sửa laptop được lúc nghe đề cập tới sự ra đời máy của máy vi tính không thay đổi để giá rẻ thắc mắc thiết yếu của con người. chính xác hơn, ENIAC thay đổi tông tích từ Chiến tranh thế giới thứ 2, nhằm tương trợ làm việc tính toán của như khu vực pháo binh ngoài ra, cũng Đây là những nhà sử học cho rằng Đây là những mẫu máy tính còn ra đời sớm hơn ENIAC phổ biến, chẳng hạn các loại Z3 ở Đức, mẫu Colossus ở Anh, hay dòng Atanasoff-Berry tại bang Iowa. tách ra ra, chỉ đến của ENIAC thì hàng mới thu hút được am hiểu chú ý của yếu tố nhà nghiên cứu.
những DẤU MỐC ĐÁNG NHỚ
Mãi đến năm 1981, IBM hàng củ cho ra mắt chiếc PC trước nhất ngoài 1 cuộc họp báo ở Waldorf Astoria, New York. khi ngừng thi côngĐây, chiếc vi tính nặng 21 pound chi phí bán 1.565 USD. một Số đặc thấp của loại công ty sửa chữa máy vi tính máy hiệu dòng máy đầu có bộ nhớ ram chỉ đã thay đổi 16k, là khả năng buộc tội với TV, máy tính chơi game và xử lí văn bản. không thay đổi thể kể, chính IBM đã châm ngòi cho am hiểu bùng nổ máy tính cá nhân và sự phát triển của LG cũng phần nào đẹp hành vi hạn chế bước tiến dài của nền tin học toàn cầu. cộng Hotcourses thấp qua hạn chế dấu mốc yếu tố nhé! sửa máy tính tại nhà quận 2
Trước 1981
Cuối hạn chế năm 1970, thành lập bắt đầu phát sinh và giá rẻ cũng sử dụng xuống pin phổ biến nên phổ quát gia đình Mỹ đã biết tới ROM này. công ty sửa chữa máy vi tính trước hạn chế năm 1981 bự chảng các hạn chế cái cỗ áo to. nhưng bà vợ từng sử dụng nó để bộ nhớ nổ lực công thức nấu bếp còn những ông chồng lại thêm thay đổi công cụ quản lí nguồn vốn của gia đình. trẻ con cũng khiến bài tập quay về vi tính và chơi một số game hay thuần tuý. các mẫu công ty sửa chữa máy vi tính nổi tiếng thời đó: Commodore PET, Atari 400, Tandy Radio Shack TRS-80 và Apple II.
Kỉ nguyên LG
Dưới am hiểu dẫn dắt của Don Estridege — cha đẻ của máy tính máy hiệu, những loại PC được cung cấp dịch vụ hoàn cảnh khoảng phần mềm và cài đặt của SAMSUNG thứ 3 xuất hiện. Cụ thể, bộ vi xử lí do Sata phân phối, Microsoft Office MS-DOS Đây là sản phẩm của Microsoft. Suốt 10 năm sau chậm tiến độ, IBM đã cải tiến dòng máy tính của mình lên đầy đủ, bằng việc nâng tốc độ lên gấp 10 lần, nâng cao bộ nhớ ram lên 1000 lần và dung lượng bộ nhớ cao 10 nghìn lần, trong khoảng 160 KB lên 1,6 GB. công ty sửa chữa máy vi tính máy hiệu, thuần tuý có ông tổ của tất cả PC hiện đại.
Control năm 1990
đa dạng thực tế lớn ra đời máy nên lừa đảo tiếng tăm máy tính yếu tố Amiga, Commodore, Atari, Sinclair and Amstrad phải điểm riêng một thị trường khốc liệt, buộc cách mạng rẻ để cạnh tranh. 2 tên tuổi mà sau này nổi nhưng cồn Đây là BROTHER và dell, được biết đến nổ lực những dòng tên nổi lên hoàn cảnh thị trường giá sỉ máy tính phiên bản Cốc Cốc. Việc ra mắt hệ điều hành Windows 3.0 rồi sau chậm tiến độ thay đổi Cốc Cốc 95, Windows 98 đã giúp Microsoft khẳng định tiếng tăm của mình bên ngoài tiết kiệm công ty sửa chữa máy vi tính. Tuy Apple khi này đã là Control thành công bước đầu sở hữu PowerBook, yếu tố Microsoft vẫn có “bá chủ” trên thị trường PC.ngoài ra, cũng nên nhớ là chính trong thời đại hoàng kim của PC mà chiếc chuyên laptop hiện chúng ta xảy ra cách mạng đã được ra đời máy.
The 2000s
The Y2K scare caused a great upheaval of updates to legacy systems, though in the end the consequences were nowhere near as dire as people had predicted. This decade also saw the explosive growth of the internet. Apple released Mac OS X in 2001, and the PowerBooks, iBooks, iMacs, Mac minis, and MacBook Air built on it reaped major successes. For its part, the Windows XP operating system was also a remarkably successful product.
No account of this decade would be complete without the netbook and, soon after it, the tablet: compact, light, and quick.
Famous computer models
…counting from the day IBM announced the IBM Personal Computer 5150, which marked the birth of the personal computer line.
1981: IBM PC
The IBM PC shattered every preconception about personal computers, with an affordable price and a modest size.
1982: Franklin Ace 100. This computer was the cause of the world's first lawsuit over copies of another machine's hardware and software: the Apple II, the property of Apple.
1982: Commodore 64. After its launch in 1982, around 30 million units of the Commodore 64 were sold worldwide.
1982: ZX Spectrum. The Spectrum's strong appeal lay in its computing ability together with the range of programs developed for it. Around 5 million Spectrums were sold in the United Kingdom.
1983: IBM PC XT. The IBM PC XT was an upgrade of the IBM PC and the first personal computer to ship with a hard drive, a 10 MB unit. Later personal computers followed the XT's standard.
1983: Apple Lisa. The world's first PC to ship with a graphical user interface. At 10,000 USD at the time, not everyone could afford this pricey machine.
1984: Macintosh. The Macintosh is the "ancestor" of the iMac, iPod, and iPhone. It had a graphical user interface like the Lisa's, but was sold at a more affordable price.
1990: NeXTcube. The personal computer that engineer Tim Berners-Lee used to host the World Wide Web in its earliest, embryonic days.
1996: Deep Blue
After losing a chess match to Garry Kasparov, IBM's engineers raced to improve Deep Blue, and their work paid off soon afterward when Deeper Blue defeated Kasparov in a rematch in 1997.
1998: iMac
With its translucent case, bright colors, and curved lines, the iMac achieved an astonishing success, something completely fresh in the gray world of the boxy computers of its day.
|
Lịch sử về máy tính hiện đại
| 0
|
lịch-sử-về-máy-tính-hiện-đại-1b17edf74d6c
|
2018-06-03
|
2018-06-03 08:39:29
|
https://medium.com/s/story/lịch-sử-về-máy-tính-hiện-đại-1b17edf74d6c
| false
| 1,707
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Thời trang nam nữ đẹp giá rẻ
|
Beautiful men's and women's fashion clothing, dresses, shirts and jackets, a fashion shop
|
1bd549c2c435
|
thoitrangnamnu
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
6c08165ab9c1
|
2018-07-20
|
2018-07-20 15:22:58
|
2018-07-20
|
2018-07-20 15:24:08
| 0
| false
|
en
|
2018-07-20
|
2018-07-20 15:24:08
| 13
|
1b18f3576add
| 1.584906
| 0
| 0
| 0
|
The Hot Stocks Outlook uses VantagePoint market forecasts that are up to 86% accurate to demonstrate how traders can improve their timing…
| 5
|
Hot Stocks Outlook for the Week of July 20th, 2018
The Hot Stocks Outlook uses VantagePoint market forecasts that are up to 86% accurate to demonstrate how traders can improve their timing and direction. In this week’s video, we analyze forecasts for Berkshire Hathaway ($BRK/B), Morgan Stanley ($MS), Check Point Software Technologies ($CHKP), Oracle ($ORCL), D.R. Horton ($DHI), and Valero Energy ($VLO).
This Week’s Hot Stocks Outlook
Berkshire Hathaway ($BRK/B)
Berkshire Hathaway ($BRK/B) had a predictive moving average crossover to the upside in early-July indicating a bullish trend. As soon as the blue line crossed above the black line, VantagePoint users knew they should start taking long positions in this market. In 6 trading days, $BRK/B was up 6% or $11.35 per share.
Morgan Stanley ($MS)
Morgan Stanley ($MS) follows a similar pattern to the upside. The market had a crossover to the upside in early-July when that blue line made the cross above the black line. The neural index also reflected that short-term strength. Since that crossover, the market had a great run and was up over 6% in 6 trading days or $2.90 per share.
Check Point Software Technologies ($CHKP)
Check Point Software Technologies ($CHKP) follows this pattern too. Whatever trading strategy traders are following, that blue line crossing above the black line was a clear indication that an uptrend was beginning. This bullish trend continued, and since that crossover 10 trading days ago, $CHKP was up 12.13% or $11.98 per share.
Oracle ($ORCL)
Oracle ($ORCL) follows the same idea and has really had a great run. That market had a bullish crossover in late-June. Traders knew, with confidence, that they could begin going long in this market when the blue line crossed above the black line. Since that crossover 11 trading days ago, the market was up almost 9% or $3.80 per share.
D.R. Horton ($DHI)
D.R. Horton ($DHI) is basically the same as the others. An uptrend started in early-July indicating to traders that the trend was bullish and to start taking long positions. In 9 trading days, the market was up over 5% or $2.15 per share.
Valero Energy ($VLO)
Valero Energy ($VLO) follows the same idea, but to the downside. The market had a bearish crossover in early-June and that downward momentum really took off. In 26 trading days, the market was down almost 12% or $14.41 per share.
CLICK HERE TO GET YOUR FREE VANTAGEPOINT SOFTWARE DEMO >>
|
Hot Stocks Outlook for the Week of July 20th, 2018
| 0
|
hot-stocks-outlook-for-the-week-of-july-20th-2018-1b18f3576add
|
2018-07-20
|
2018-07-20 15:24:17
|
https://medium.com/s/story/hot-stocks-outlook-for-the-week-of-july-20th-2018-1b18f3576add
| false
| 420
|
the cashflow stories that matter. covering finance, wealth accumulation, venture capital, bitcoin, and money, money, money.
|
keepingstock.net
|
keepingstock
| null |
Keeping Stock
|
stories@amipublications.com
|
keeping-stock
|
STOCK MARKET,FINANCE,CASH FLOW,STOCKS,FINANCIAL REGULATION
|
keepingstock
|
Investing
|
investing
|
Investing
| 51,660
|
Vantagepoint ai
|
Patented software using Artificial Intelligence to help traders predict the market with up to 86% accuracy. Get a free demo: www.vantagepointsoftware.com/
|
d47649555a99
|
Vantagepoint
| 238
| 140
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-06
|
2017-11-06 11:44:41
|
2017-11-06
|
2017-11-06 12:01:20
| 1
| true
|
en
|
2018-09-17
|
2018-09-17 08:34:48
| 1
|
1b194f497e05
| 1.633962
| 0
| 0
| 0
|
Update : Gmail trashed me! They are discontinuing Inbox application by March 2019.
| 5
|
I trashed Gmail and I’m happy with Inbox.
Credits : Remembering Letters
Update: Gmail trashed me! They are discontinuing the Inbox application by March 2019.
As a freelancer for the last 5 years, I’ve always been curious to read the new emails that hit my gmail account every day. Just like everyone else, I get a lot of advertisements, “$1 million” scams, important emails and some updates. Thank you Gmail for making my life as smooth as it is now.
I mean, Thank you Inbox
Umm, sorry Gmail, because I trashed you almost a year back. I gave Inbox a try soon after Google released the beta version. Inbox sucked big time when I started using it, but now if you ask me to go back to Gmail, I should probably accept the fact that Gmail sucks big time compared to Inbox.
If you don’t know Inbox,
Inbox is a product from Google, officially known as Inbox by Gmail, which is nothing but an application or a web interface to check your emails in a smarter way.
The artificial intelligence within Inbox is so amazing that it automatically plans everything and lets you know about important activities on time. Your flight tickets, purchases, bills, even subscriptions are processed and projected in a different way (it’s beautiful). The interface, the smartphone application, and the smoothness are far better than Gmail’s, without a doubt.
I’m sure that Google is experimenting with a lot of features in Inbox and adapting some successful experiments into Gmail. Compared to Inbox, Gmail is not as smart, not as smooth and not as good! (it sucks!).
Inbox is fun
You can swipe to complete an email, you can pin it on top of your inbox, you can schedule an email, you can set reminders, you can download attachments without even opening the email, you can plan your trips, and it’s so organized.
Well, email should be organised, and I believe that Google is running all these experiments to make Gmail better and more powerful, because they’re already doing a great job with Inbox. I’ve been an active Inbox user for more than a year and I’d proudly recommend you start trying it. It’s going to be hard at the beginning, but it’s worth it. It saves time and it’s completely organised!
|
I trashed Gmail and I’m happy with Inbox.
| 0
|
i-trashed-gmail-and-im-happy-with-inbox-1b194f497e05
|
2018-09-17
|
2018-09-17 08:34:48
|
https://medium.com/s/story/i-trashed-gmail-and-im-happy-with-inbox-1b194f497e05
| false
| 380
| null | null | null | null | null | null | null | null | null |
Email
|
email
|
Email
| 10,963
|
Krishna Moorthy D
|
converting complex to simple with the power of Internet. Blogger and content writer since 2014, love writing on medium (in a relationship actually).
|
7fde0b2b6407
|
kmoorthyd
| 332
| 12
| 20,181,104
| null | null | null | null | null | null |
0
|
0x454a2ab300000000000000000000000000000000000000000000000000000000000871ad
keccak256(“<function>(<type_of_data_1>,<…>,<type_of_data_N>)”)
/// Bids on an open auction, completing the auction and
/// transferring ownership of the NFT if enough Ether is supplied.
/// @param _tokenId - ID of token to bid on.
function bid (uint256 _tokenId)
keccak256(“bid(uint256)”) = 454a2ab3c602fd9…
blockNumber: 0x51968f
topics: [0x0a5311bd2a6608f08a180df2ee7c5946819a649b204b554bb8e39825b2c50ad5]
data: 0x0000000000000000000000001b8f7b13b14a59d9770f7c1789cf727046f7e542000000000000000000000000000000000000000000000000000000000009fac1000000000000000000000000000000000000000000000000000000000009f80e000000000000000000000000000000000000000000000000000000000008957200004a50b390a6738697012a030ac21d585b4c8214ae39446194054b98e0b98f
/// The Pregnant event is fired when two cats successfully breed
/// and the pregnancy timer begins for the matron.
event Pregnant (address owner, uint256 matronId, uint256 sireId, uint256 cooldownEndBlock);
/// The Birth event is fired whenever a new kitten comes into
/// existence. This obviously includes any time a cat is created
/// through the giveBirth method, but it is also called when
/// a new gen0 cat is created.
event Birth (address owner, uint256 kittyId, uint256 matronId, uint256 sireId, uint256 genes);
keccak256(“Birth(address,uint256,uint256,uint256,uint256)”) = 0x0a5311bd2a6608f08a180df2ee7c5946819a649b204b554bb8e39825b2c50ad5
owner: 0000000000000000000000001b8f7b13b14a59d9770f7c1789cf727046f7e542
kittyId:
000000000000000000000000000000000000000000000000000000000009fac1
matronId:
000000000000000000000000000000000000000000000000000000000009f80e
sireId:
0000000000000000000000000000000000000000000000000000000000089572
genes:
00004a50b390a6738697012a030ac21d585b4c8214ae39446194054b98e0b98f
| 9
|
d9f11f11a015
|
2018-04-19
|
2018-04-19 02:20:45
|
2018-04-25
|
2018-04-25 17:02:03
| 1
| false
|
en
|
2018-04-25
|
2018-04-25 17:02:03
| 14
|
1b1e35921f85
| 3.916981
| 11
| 0
| 0
|
Getting data from the Ethereum blockchain
| 3
|
Exploring CryptoKitties — Part 1: Data Extraction
Source: https://www.cryptokitties.co/kitty/101
If you are reading this, you’ve probably heard of the game that has caught everyone’s attention on the Ethereum network over the last few months: CryptoKitties!
In short, the game consists of collecting virtual cats. Cats are created by the players of the game, who can breed two cats to generate a new one. Each cat has its own genetic sequence, which determines their physical attributes. Their genome is a function of their parents' genes plus some randomness. In addition to breeding, up to 50,000 cats with predefined characteristics can be created by Axiom Zen, the company behind the game. There is a market for buying and selling cats and another one for “renting” cats for breeding purposes. You can read more about the game here.
Block Science is a technology research and analytics firm specializing in the design and evaluation of decentralized economic systems. Analyzing aspects of the CryptoKitties economy seemed like a great opportunity to improve our data extraction tools while at the same time getting our hands on some real world data from a live (and lively!) decentralized application.
This blog post has been split in two parts:
Part 1 (this post) covers technical aspects related to extracting and transforming data from the Ethereum blockchain.
Part 2 contains actual analysis of some game data.
Extracting Data from the Ethereum Blockchain
Even though everything that ever happened on the Ethereum network is recorded on the blockchain, turning those bits into meaningful data is not always straightforward. It is simple to extract transaction data stating that in a given block account A sent some ether (ETH) to account B and set a certain gas price for that transaction to be processed. However, when we’re working on transactions sent to contracts, decoding blockchain data is akin to implementing an ETL from multiple fixed width text files whose formats are described only in the source code of the software that created them.
Transactions that Call Functions in Smart Contracts
Take for instance a transaction sent to contract 0xb1690c08e213a35ed9bab7b318de14420fb57d8c with the following content in the data field
What does it do?
The first part of the data field (0x454a2ab3) refers to the function inside the smart contract that is being called by the transaction. Those are the first four bytes of the hash of the function signature, which is defined as the name of the function followed by the data types of its parameters.
The remaining bytes are the values of the function parameters. You can read about it in detail here.
Even knowing those 4 bytes, how can we tell what function is being called, or how many parameters it has? In this specific case, we know that contract 0xb1690c… is the CryptoKitties auction smart contract — the market for buying and selling cats. And because its source code has been made public, we know that it has a function called bid
If we calculate the hash of the bid function signature, we can see that the first four bytes are exactly those present in the transaction data.
And because the function only takes one argument, we can tell that everything following those first four bytes in the transaction data is that parameter. In other words, the transaction is bidding on cat number 0x871ad (553389).
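As a quick illustration, the selector-and-argument split can be reproduced in a few lines of Python. This is only a sketch: the helper name is mine, and computing keccak256 itself would require a third-party library, so the selector is simply read off the transaction data and compared against the value quoted above.

```python
# Decoding the data field of a transaction calling bid(uint256).
# The first 4 bytes (8 hex chars) are the function selector; the
# remainder is a single 256-bit (64 hex char) word: the uint256 argument.

def decode_bid_call(data: str) -> dict:
    """Split a transaction data field into its selector and uint256 argument."""
    body = data[2:] if data.startswith("0x") else data
    selector = "0x" + body[:8]   # 4-byte function selector
    token_id = int(body[8:], 16) # one ABI-encoded uint256 word
    return {"selector": selector, "token_id": token_id}

call = decode_bid_call(
    "0x454a2ab300000000000000000000000000000000000000000000000000000000000871ad"
)
print(call["selector"])  # 0x454a2ab3
print(call["token_id"])  # 553389, i.e. cat number 0x871ad
```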
Smart Contracts that Log Information
It is common for smart contracts to log information during their execution. Logs recorded by a contract can be obtained by calling the JSON-RPC API’s eth_getLogs method. As is the case with transactions that call contract functions, we need to know the source code of the contract in order to decode the data returned by this API. For example, what does a log with the following data mean?
Logs are recorded when a contract triggers an event. The first element of the topics array (which only has one element in our example) is the hash of the event signature. In the case of CryptoKitties, logs are recorded when a cat gets pregnant and when a cat is born, for example.
See how the hash of the Birth event signature corresponds to the value in the log in our example
So far, we know that on block number 51968F (5346959) a cryptokitty was born! The next step in our decoding process is to split the data field according to the five parameters of the Birth event. The first parameter is an Ethereum address, which is 160 bits long, but is encoded with 256 bits (zeroes are added to the left of the address). The other parameters are 256-bit integers. The data field is therefore divided into 5 parts, each with 256-bit (64 hexadecimal characters).
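That fixed-width split can be sketched in Python (the function and field names are my own choice; the payload is the Birth log data quoted earlier in the post):

```python
# Splitting the Birth event's data field into its five 256-bit words.
DATA = (
    "0x"
    "0000000000000000000000001b8f7b13b14a59d9770f7c1789cf727046f7e542"
    "000000000000000000000000000000000000000000000000000000000009fac1"
    "000000000000000000000000000000000000000000000000000000000009f80e"
    "0000000000000000000000000000000000000000000000000000000000089572"
    "00004a50b390a6738697012a030ac21d585b4c8214ae39446194054b98e0b98f"
)

FIELDS = ["owner", "kittyId", "matronId", "sireId", "genes"]

def decode_birth(data: str) -> dict:
    body = data[2:] if data.startswith("0x") else data
    # Each parameter occupies exactly 64 hex characters (256 bits).
    words = [body[i:i + 64] for i in range(0, len(body), 64)]
    out = dict(zip(FIELDS, words))
    # The address is the low 160 bits (40 hex chars) of the first word;
    # the remaining parameters are plain unsigned integers.
    out["owner"] = "0x" + out["owner"][-40:]
    for key in FIELDS[1:]:
        out[key] = int(out[key], 16)
    return out

birth = decode_birth(DATA)
print(birth["owner"])    # 0x1b8f7b13b14a59d9770f7c1789cf727046f7e542
print(birth["kittyId"])  # 0x9fac1
```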
See what I meant by “implementing an ETL from multiple fixed width text files whose formats are described only in the source code of the software that created them”? :-)
Move on to Part 2, where we’ll share some interesting facts we came across while analyzing the CryptoKitties game data!
Special thanks to the Block Science team for research, insights and review.
|
Exploring CryptoKitties — Part 1: Data Extraction
| 76
|
exploring-cryptokitties-part-1-data-extraction-1b1e35921f85
|
2018-06-16
|
2018-06-16 01:16:22
|
https://medium.com/s/story/exploring-cryptokitties-part-1-data-extraction-1b1e35921f85
| false
| 985
|
Science and Engineering Principles applied to Economic Systems
| null | null | null |
Block Science
|
media@block.science
|
block-science
|
BLOCKCHAIN TECHNOLOGY,DATA SCIENCE,SYSTEMS ENGINEERING,ECONOMICS,TOKENIZATION
|
block_science
|
Ethereum
|
ethereum
|
Ethereum
| 76,961
|
Markus Buhatem Koch
|
Research Engineer and Data Scientist at BlockScience
|
a4bb14420f46
|
markusbkoch
| 88
| 72
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
ae3382d2ca9f
|
2018-05-11
|
2018-05-11 09:15:24
|
2018-05-11
|
2018-05-11 09:23:00
| 1
| false
|
en
|
2018-05-11
|
2018-05-11 09:31:26
| 4
|
1b1f7816091f
| 2.724528
| 34
| 1
| 0
|
Dear Eligma friends, supporters and ambassadors!
| 4
|
Eligma’s crowdsale is finished. What comes next?
Dear Eligma friends, supporters and ambassadors!
Commerce is people, and people should always come first. There is no success without inspiring others, and the best part of our journey has been the realization that so many people see our efforts as worthwhile. Our community, which is becoming more and more numerous, showed trust in the Eligma product and team. Here are our final calculations: you have made an amazing contribution of 13,178 ETH. Thank you all for your trust and support!
We are glad that we are able to offer forward-looking ideas that met with such an amazing response, which shows that the crypto community is growing and that cryptocurrencies will soon become a permanent part of our daily lives. As some of you may have noticed, the initially communicated sum of contributions was higher than this final one. In the ELI private placement stage, a consortium of people had committed itself to purchasing 2,000 ETH worth of ELI tokens. However, some of the consortium participants were U.S. citizens, which is why their participation in the ELI token sale was subject to regulatory uncertainty. We have been working with a U.S.-based law firm in order to determine whether there is a lawful method for these participants to purchase ELI tokens. We were advised to wait until the end of the crowdsale in case the regulatory uncertainty in the U.S. got clarified. As it remained uncertain, we refunded the entire ETH amount to the consortium yesterday, which effectively means we have sold 13,178 ETH worth of ELI tokens (about $10,056,000 at the moment of the refund) and not 15,178 ETH worth. These now unsold ELI tokens will be burned together with the remaining unsold ELI tokens. All the contributions have now been transferred into one wallet.
The Eligma token sale took place at a period when the market was experiencing severe fluctuations. Nevertheless, we kept our eyes on the target, the roadmap and the best interests of our community. As you know, the ETH price in the crowdsale was locked at 800 $ because of our belief that the market situation should not harm our early supporters and that it would eventually improve — and indeed it did. The unsold crypto tokens will be burnt. This is another decision with the aim that our community should be the most important in Eligma’s story. We are very proud that so many of you decided to join our token sale, that you come from all over the world and share our belief that current commerce is ready for major transformation.
Elipay, our cryptocurrency transaction system, has started being implemented at BTC City, a shopping, business and logistics centre of European significance. The response of BTC City stores as well as volunteers wishing to become Elipay testers has even exceeded our own optimistic expectations. BTC City is about to become the first Bitcoin City in the world with Elipay, and we are hoping that the Elipay system will have a nationwide outreach till the end of the year. In the next few years, we plan to make it a European and then a global phenomenon, so that all of you community members will be able to enjoy the benefits of the Elipay system in your everyday crypto purchases.
ELI token
We would like to use this opportunity to give a visual dimension to the symbol of our project — the token that you all have received. This is ELI in its visual form. We wished it to be a symbol of the crypto era, which no longer exists only in abstract computer space. Its design represents the art of crypto meaning — the technology that Eligma and its community relies on.
We will keep you regularly informed on new developments of the Eligma company and are looking forward to your feedback — it is the drive of our development and the very force that keeps us going.
Sincerely,
Dejan Roljic, Eligma CEO
|
Eligma’s crowdsale is finished. What comes next?
| 1,159
|
eligmas-crowdsale-is-finished-what-comes-next-1b1f7816091f
|
2018-06-12
|
2018-06-12 05:38:44
|
https://medium.com/s/story/eligmas-crowdsale-is-finished-what-comes-next-1b1f7816091f
| false
| 669
|
Eligma is a cognitive commerce platform of product based marketplaces creating a new way to discover, purchase, pay, stock and re-sell products using AI and Blockchain technologies. For early announcements and updates join our community on Telegram ➡️ https://t.me/eligma.
| null |
eligmacom
| null |
Eligma Blog
|
media@eligma.com
|
eligma-blog
|
ARTIFICIAL INTELLIGENCE,BLOCKCHAIN TECHNOLOGY,STARTUP,ECOMMERCE,TIME MANAGEMENT
|
eligmacom
|
Crowdsale
|
crowdsale
|
Crowdsale
| 1,644
|
Dejan Roljic
|
Founder and CEO of Eligma.
|
fc44f3b4fbb7
|
dejan_663
| 55
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-01
|
2018-01-01 19:17:23
|
2018-01-01
|
2018-01-01 20:31:37
| 0
| false
|
en
|
2018-01-01
|
2018-01-01 20:31:37
| 2
|
1b1f997beeb
| 3.166038
| 2
| 0
| 0
|
Something has always bugged me about the AI. I am somewhere between “mildly curious” and “skeptic” whenever I hear the future is rooted in…
| 5
|
Why I’m Not Impressed With AI and Deep Learning
Something has always bugged me about the AI. I am somewhere between “mildly curious” and “skeptic” whenever I hear the future is rooted in computers like neural networks, eliminating all jobs and running us over like Terminator.
The mildly curious side of me subscribed to MIT Technology Review (the paper edition!!) and the skeptic side was pretty excited to read an article titled “Is AI Riding a One-Trick Pony?” in the Nov/Dec 2017 issue.
Before this article, the most I knew about deep learning was that you needed to feed a SHIT-TON (that’s a technical term 💩 😁) of data into a system that would work through a series of decisions to determine what is or is not a hot dog. My colleague Yi-Ying has written about the inherent biases in this training process that depends on the inputs — if all the data comes from white guys, we’re stuck only being able to serve white guys.
Reading this article, I finally learned how the heck that “training” happened. The effort behind backprop (the way programmers rejigger the connection strengths in the neural network’s game of 20,000(?) questions to avoid sharing bad answers). I have huge respect for solving that painful process of working backwards to “read” a system that’s writing itself. I learned about these “vectors” — fascinating, (almost) organic creations of representations of ideas. The word dork in me loves the connotations implied by using a mathematical concept to describe the fuzzy concept of “same”.
But, here’s the quote that really resonated with my uneasiness around the concepts:
“Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way — which perhaps explains why its intelligence can sometimes seem so shallow. Indeed, backprop wasn’t discovered by probing deep into the brain, decoding thought itself; it grew out of models of how animals learn by trial and error in old classical-conditioning experiments. And most of the big leaps that came about as it developed didn’t involve some new insight about neuroscience; they were technical improvements, reached by years of mathematics and engineering.”
In my 4 years as a dog trainer at Petsmart (the coolest high school and college job ever!!), I taught pet owners how to condition their dogs to what “sit” and “stay” meant. A few dog owners were particularly concerned about the accessibility of English and insisted on teaching their dogs the commands in German — so only they could tell their dog what to do.
A fellow trainer and I joked about teaching the dogs commands with random words: “hotdog” represented “sit” and waffles would mean to “lay down”. Since I’ve yet to meet a dog born knowing English, as trainers we were building these language associations. This didn’t make the dogs unintelligent; it just created a communication barrier. Fundamentally, we were teaching the dogs to relate to us using our language, and that good things came to them by paying attention. Thankfully, we used positive reinforcement (so no hitting dogs here), but the training was rooted in the ability to recognize the pattern of “I say a thing, you do a thing, you get treats”.
If you’ve ever noticed that pet owner that has to repeat the command “sit” 4 times before the dog sits, it’s not that the dog isn’t listening. The more likely culprit is the owner may have actually taught that the command pattern is “sit sit sit sit” before the dog is rewarded or forced to comply. Hand signals also play an important role; if you usually lift your hand and say “sit” at the same time, the dog may not comply if you only say “sit” without the hand gesture. In general, we noticed that dogs pick up the hand signals more quickly since it’s a communication method more in line with the dog world.
The phrase “Deep learning” to me implies a rich understanding of context and situation; but it doesn’t feel that way right now. Is it considered “deep” because we are giving a computer millions and millions of images that a human could never process? Is it “learning” because the program creates these seemingly magical categorizations on its own and references them later? But like a dog, if you change the command to something not already in the dataset, will it know how to “roll over”?
I understand how excited people get when something starts to work. It’s the same as when a dog stares lovingly into your eyes and does a perfect “sit” just to get more treats. We (designers, engineers, data scientists, Silicon Valley types) need to recognize that just because we’ve got a tool that sort of works, it still has limitations. The world’s problems are not going to be solved by the ability to train programs to “sit” and “stay” and “speak” and “find the ball”, we’ve got to think bigger than that to the commands we didn’t know we wanted.
|
Why I’m Not Impressed With AI and Deep Learning
| 5
|
why-im-not-impressed-with-ai-and-deep-learning-1b1f997beeb
|
2018-04-15
|
2018-04-15 12:39:33
|
https://medium.com/s/story/why-im-not-impressed-with-ai-and-deep-learning-1b1f997beeb
| false
| 839
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Laura Mattis
|
Hi! I'm Laura, a service and product designer in San Francisco.
|
f9859e97878c
|
lauramdesigner
| 50
| 119
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-20
|
2018-08-20 05:07:16
|
2018-08-20
|
2018-08-20 05:41:34
| 2
| false
|
en
|
2018-08-20
|
2018-08-20 05:41:34
| 0
|
1b1f9be0804c
| 0.821069
| 0
| 0
| 0
|
Converting text into speech is extremely easy. Let’s see a simple example of how to convert a given text into an automated voice —
| 5
|
Text To Speech [Into File]— Python
Converting text into speech is extremely easy. Let’s see a simple example of how to convert a given text into an automated voice —
Step#1: Install the gtts library. Execute pip install gtts in the Anaconda Prompt, as we will be using Jupyter Notebook.
Step#2: On Jupyter Notebook, write the below code. The code demonstrates how a given text, “Good morning”, is converted into an English automated voice. Also, if we have a text which is not in English but in another language (let’s say Hindi), gtts can convert it well too!! :)
Step#3: Now check the generated files in the desired location:
Hope this quick tutorial helps. Thanks :)
|
Text To Speech [Into File]— Python
| 0
|
text-to-speech-into-file-python-1b1f9be0804c
|
2018-08-20
|
2018-08-20 05:41:34
|
https://medium.com/s/story/text-to-speech-into-file-python-1b1f9be0804c
| false
| 116
| null | null | null | null | null | null | null | null | null |
Text To Speech
|
text-to-speech
|
Text To Speech
| 121
|
Rahul Vaish
|
Software Engineering | Machine Learning | HCI
|
5bbc49e3c245
|
rahulvaish
| 32
| 46
| 20,181,104
| null | null | null | null | null | null |
0
|
Philosophy and truth reach only those with whom we share a relationship
| 1
| null |
2017-10-12
|
2017-10-12 15:11:54
|
2018-02-17
|
2018-02-17 16:07:45
| 9
| false
|
ja
|
2018-02-28
|
2018-02-28 14:02:49
| 3
|
1b204eba94cb
| 10.546
| 4
| 0
| 0
|
Not specialized, not academic, not even logical: my own thoughts on "AI as it is today."
| 4
|
Artificial Intelligence and Love
Not specialized, not academic, not even logical: my own thoughts on "AI as it is today."
We are right in the middle of heading "toward a future where humanity and artificial intelligence live together"
I'm a science student, but I'm not especially good at math or physics. If anything, I prefer writing, taking photos, and chatting with friends and family. I'm an ordinary science undergraduate you could find anywhere.
Living through this world of the so-called "AI boom," I want to jot down here the loose thoughts that have drifted through my mind, including how I came to think about them and why I want to tell you. With the feeling of: "At least reach the you standing within five meters of my heart!"
Please, give me a little of your time.
To you, my reader, with love.
"Caramel-flavored candy," also known as "golden Blessing (love)"
It began with a doctor who is my teacher, mentor, and friend
I have a doctor of engineering who is my teacher, my mentor, and my friend. A truly cheerful, wonderful doctor. He doesn't wear a white coat like doctors in stories, he isn't an old man, and he doesn't speak in a gentle tone (he speaks Osaka dialect). But like a boy in a storybook, I adore him.
Me, like that boy
I met the doctor in the winter of my third year of high school. I had already finished my university entrance exams and decided on a science track, specifically information science. I came to look up to the doctor, who generously shares his knowledge and experience with students and young people and offers them his support. I still can't follow the difficult parts, but as he taught me bit by bit, my own thoughts about artificial intelligence started to take shape. Right now I'm standing at the very gate of the field. So nothing I'm about to write is the least bit specialized, and I can't write anything academic or logical either. But here I'll set down "AI as I see it today," something only the present me can write.
“AI will take our jobs”
Amid dizzying technological progress, the world has become convenient and rich. Through social media, I can keep up with friends living abroad as if they were right next to me. I can order what I want online without going to a store and receive it the next day. Diseases once called fatal can now be cured, and the moment of bodily death has been pushed far into the future. Science and technology have steadily freed us from constraints of time, space, and physics, and have given us things that benefit us.
We can no longer live without computers
Artificial intelligence, likewise, is undoubtedly something that frees us from all sorts of constraints and gives us something in return. And yet, when it comes to AI, pessimistic expressions seem to abound:
“AI will steal human jobs.”
“We will pass the singularity.”
“Computers will rule over humans.”
If our jobs are taken, we will earn no money and be unable to support our families. If the war against computers that happened only in movies and books became real... would we be dominated and destroyed?
In this era, everyone surely feels that expectations for AI are mixed with anxieties like these.
But really...
Will artificial intelligence “surpass” humans?
Will artificial intelligence “steal” human jobs?
And what, in the first place, is human “intelligence”?
Building a boy to my taste
Suppose I were a genius programmer and built myself a robot boyfriend who whispers words of love to me: a boy with ideal looks (a Johnny's-idol type) and an ideal personality (kind and funny!).
I program him so that whenever I say “I love you,” he always replies “I love you too, Nano-chan.” He always cheers me on when I'm about to give up, always comforts me when I'm sad, never raises a hand even when we fight, and in the end always accepts me. I program him to always behave that way.
That robot boyfriend would be perfect for me in looks, personality, and behavior!
Surely I'd be happy with him, and I'd feel like showing him off to everyone.
After all, he always does exactly what I want.
If the ideal partner doesn't exist, should I just build one?
But...
Would those whispered words of love truly make me happy from the bottom of my heart? Could comfort that is always given ever heal my wounds?
Why are we happy when our feelings reach someone we love?
Why do words offered in a painful moment encourage us?
Why, in an age when we can be happy without marrying, do people still wish to be joined with someone?
We who live in a great freedom
I think being born in this era, here in Japan, is truly miraculous and blessed. At nineteen, I have my share of worries large and small, but on the whole I live happy days, because I can take it for granted that “there is a tomorrow.”
I live in a truly happy world, because I have “freedom.” By “freedom” I mean the freedom to choose one's own actions. The family and friends close to me, living in this same country of Japan, share that freedom, and I can't help feeling grateful for each and every one of their actions.
The night we talked, and the road home
Why is “true joy” born within human relationships? Is the essence of joy found in compelled actions, in empty rituals, in the relationship of master and slave?
It is because they, in a freedom of choice bound by nothing, choose to “be kind to me” that I am happy. They also have the freedom to hurt me and the freedom to deceive me. But instead of choosing to hurt or deceive, they choose to make me smile and to make our time together enjoyable. That is why I can feel true joy.
My friend-and-parent, striking a mysterious pose
The actions of everyone I have ever met make me this happy, and at times this sad. The same goes for my family, who now live far away. I rejoice and grieve over acts they choose freely, bound by nothing, out of their own thinking.
This feeling of mine called “happiness”
I just cannot, no matter what, no matter what!
I simply cannot believe that what we feel as “fun” or “sad,” and every one of our countless other emotions, was programmed by someone's hand. The feeling of spirits rising as we share a place together, the joy of the moment feelings get through even without a common language, the heart-rending grief of being betrayed by someone you trusted: I cannot believe these are controlled by someone so that they “must turn out this way.”
How you understand artificial intelligence depends entirely on what you mean by “surpassing humans.” This vague, intuitive thinking of mine is only one such understanding.
What cannot be seen can exist only through “believing.”
What, in the first place, is a human? What, in the first place, is intelligence?
When I consider it from this angle (especially when I ask what a human is), here is what I think about today's AI and other computers:
As long as computers run on “programs,” they cannot reach humans.
As long as today's computers and AI are created and driven by human hands, or by programs humans have made, I think they will remain tools that enrich our lives. I want to stress the word “today.” In today's technology, the very premise of artificial intelligence points in a completely different direction from my own thinking. But no one can stop the progress of technology. As long as humans have curiosity, as long as there is a “want to know,” science will keep advancing. Even as I sit here believing that today's technology will not surpass humans, somewhere in Japan, somewhere in the world, a new kind of technology is beginning to be imagined and researched. Not something driven by mere zeros and ones, but something kinder, more beautiful, more human.
I want to make myself and the people I love happy
A pastor once shared words to this effect with me. That is where my habit of saying “you, within a five-meter radius of my heart” comes from.
Great figures of the past and famous people of today can influence many because the people around them, or they themselves, express that person in some form (today, through TV and social media), and those on the receiving end come to know them; a relationship is formed even without ever meeting. That is how I see it.
I am not a celebrity, and I have no special knowledge or skill; I am just a university student. If I can convey my thoughts to anyone, it would be people at my university, people in my communities, or people connected to me on social media: the people within about “a five-meter radius of my heart.” And I want everyone within that radius, without exception, to live showered by an avalanche of love.
So, along with my gratitude for the blessing of being able to imagine, express, and publish so freely, I want to thank you for reading my writing to the end.
Believing in the unseen “love.”
The book by the doctor who is my master, mentor, and friend
This is the book by the doctor I introduced at the beginning, the one who prompted me to think about all this. Please do pick up a copy. It re-examines artificial intelligence starting from the question “what is intelligence,” and I think it will let you see AI differently from the views you have held so far.
人工知能の哲学 (The Philosophy of Artificial Intelligence)
人工知能の哲学 by Yuma Matsuda on Amazon. Many books earn reward points on Amazon; works by Yuma Matsuda and other eligible items qualify for same-day express delivery, and 人工知能の哲学 ships free with standard Amazon delivery. www.amazon.co.jp
Nanoka Mizukado's SNS
Feel free to follow me.
Nano (@nano_graphic) | Twitter
The latest Tweets from Nano (@nano_graphic). 19 years old / just a university student / For a certain reason, I will leave the sarahah URL up until mid-March; exchanges there are on break. Tokyo-to,…twitter.com
I use Twitter to put parts of my thinking into words and as a memo pad.
Nano (@nano_graphic) * Instagram photos and videos
425 Followers, 514 Following, 10 Posts - See Instagram photos and videos from Nano (@nano_graphic)www.instagram.com
I use Instagram as a gallery for my photography hobby.
Money, however much you save, decreases when you spend it.
A delicious meal, with time, gives way to hunger again.
But the love you receive never decreases.
|
人工知能と愛 (Artificial Intelligence and Love)
| 8
|
人工知能と愛-1b204eba94cb
|
2018-04-22
|
2018-04-22 01:31:21
|
https://medium.com/s/story/人工知能と愛-1b204eba94cb
| false
| 105
| null | null | null | null | null | null | null | null | null |
Philosophy
|
philosophy
|
Philosophy
| 39,496
|
Mizukado Nanoka
|
Born in 1998 / just a university student with a bit of drive / my motive force and my axis of action are both “love”
|
9041e43aa697
|
nano_chaaaam
| 37
| 41
| 20,181,104
| null | null | null | null | null | null |